Monitoring the impact radius of DOGE, it looks like NIST (https://nist.gov) is about to be affected by downsizing initiatives targeting ~500 individuals, some of whom work on AI safety and security products. Wondering if other Trust practitioners in the community here are keeping a watching brief?
I've found the NIST CSF 2.0 framework really useful for its controls approach, though fortunately there's more than one controls framework to lean into. The impact on AI safety controls frameworks has a potential knock-on effect for the work being done on Rovo; it would be good to hear from some of the Rovo team on how they're monitoring for impacts.