The US Senate’s AI policy roadmap is not a road to a safe future

17 May 2024

By: Ellen Jacobs and Isabelle Frances-Wright 


The US Senate has released its much-anticipated AI Policy Roadmap, recommending that Congress take significant steps to regulate the deployment, use and effects of generative AI technologies.

We were encouraged to see that the white paper recommends Congress pass comprehensive federal privacy legislation and that the US work with like-minded allies to create standards that protect our democratic values. However, it is concerning that the white paper stops well short of addressing the very real harms to people and democratic processes we are already seeing, and does not yet include substantive policy solutions.

We can learn valuable lessons by looking back on the evolution of social media, where companies have consistently been able to prioritize profits over safety. Given the speed at which AI technologies are developing, it is critical that we pass transparency legislation now to give society the information it needs to effectively regulate AI systems, so we do not make the same mistakes again.

From the nonconsensual use of images in deepfakes to the use of generative AI technologies to deepen distrust in our institutions, we have already seen how unregulated generative AI can be used to harm people. Despite the strong bills already proposed to mitigate these harms, we were discouraged by how few of them the roadmap directly endorses.

To realize the vision the roadmap lays out – one that protects innovation in this space and supports democratic and inclusive uses of AI – we need to address and prevent the harms we see now, so that people can responsibly and safely engage with AI in the future. We applaud the Senate Rules Committee for passing three critical bills yesterday that will protect elections from harmful uses of AI. We hope to see these bills, and others that seek to address harmful applications of generative AI such as the creation of nonconsensual deepfakes, move quickly through Congress.

In particular, we are encouraged by the roadmap's many mentions of transparency. Transparency and data access for researchers are integral to identifying harms, addressing them effectively and ensuring compliance. Society needs far more transparency than it is currently granted to create effective guardrails as AI technologies rapidly develop; the development of such significant technology must remain visible and accountable to the public. Insight into how these companies build, train and deploy their technologies is critical to identifying harms and proposing effective solutions.

While the AI Roadmap represents a positive step forward, substantial work remains to be done. Continuing this progress will be essential for maintaining safety as AI technologies grow ever more sophisticated.