Forgive me but this sounds like an incredibly bad idea. While I understand in principle how this could be a good thing, thinking about what might emerge from the bureaucratic depths frightens me to the core.
I think the biggest problem with this sort of thing is that it is simply unenforceable.
We can’t even reliably track and bring to heel sentient adults; a rogue AI seems like something that would laugh in the face of regulation.
When I saw the author was Andrew Tutt I skipped this.
As my mother always used to say:
If you don’t have anything nice to say, at least cite some goddamn sources so others can share a fully articulated loathing.
Your link doesn’t even suggest the author is bad. Your admission that you haven’t even read the material is worse.
For example, you could’ve cited this delightfully optimistic view of what we as developers do when debugging:
If something goes wrong, the programmer can go back through the program’s instructions to find out why the error occurred and easily correct it.
I thought this was going to be a simple set of recommendations, something like “We need a governing body that recommends TLS x.y, passwords salted and hashed with blah, and similar secure software practices: JSF coding guidelines, avoiding raw pointers, and so on.”
Then it got all sci-fi on me and I lost interest.
We already have NIST. Of course, it has become clear that its recommendations need to be nonbinding because the standards process is perfectly capable of producing insecure things, and replacing a standard with another standard is very slow…
You can still blame a corporation for using a bad evolved algorithm and sharing your data with hackers; the buck still has to stop with someone. Unfortunately, our current law already lets us down even with standard algorithms: not enough companies pay high enough fines for data breaches, so security and privacy aren’t as high on their list of priorities as they should be.