This would be a good opportunity to submit a thoughtful, sane, and coherent response voicing your opinion on future AI regulation policy, to counter the fearmongering.
How to submit a comment: https://www.regulations.gov/document/NTIA-2023-0009-0001
All electronic public comments on this action, identified by Regulations.gov docket number NTIA–2023–0009, may be submitted through the Federal e-Rulemaking Portal. The docket established for this request for comment can be found at www.Regulations.gov, NTIA–2023–0009. To make a submission, click the ‘‘Comment Now!’’ icon, complete the required fields, and enter or attach your comments. Additional instructions can be found in the “Instructions” section below, after “Supplementary Information.”
From their list of concerns:
It seems like they’re concerned about both open and closed models, and they’re interested in supporting as well as potentially regulating both.
Lol, no they aren’t. That’s just legalese. A government regulatory agency isn’t going to open a feedback period on what it should be doing, regulation-wise, and prompt the whole thing with “Why should we close AI off to only corporations willing to post enough money?” They have to at least include the “but maybe let average people use AI” part, so that when they close everything down they can point back and say “look, we were open to talking about open solutions.”
That tends to be the outcome of processes like this, and sometimes it is because the agency already decided on policy ahead of time and only asked for public input for the sake of appearances. But in other cases the request for input is made in good faith, and industry interests end up dominating the discussion because other voices convince themselves they’d be ignored anyway.
In the case of new industries still in flux, it’s more likely that commercial interests haven’t yet infiltrated the relevant agencies enough to dictate policy from within, which is why they have to rely on hyperbolic scare tactics and hope no one contradicts them.