
The US government should take a more active role in AI regulation

By SHREYA TIWARI | December 5, 2023


ADAM SCHULTZ / CC0

Tiwari highlights the need for more legislation and guidelines on artificial intelligence models. 

From ChatGPT to Stability AI’s Stable Diffusion model, artificial intelligence (AI) is becoming increasingly pervasive in all aspects of human life. The technology has a myriad of uses across nearly every industry, from clinical modeling and facial recognition to market analysis. 

However, issues of transparency, ethics and a limited understanding of AI’s true potential complicate its far-reaching capabilities. The Biden administration’s recent executive order outlining the regulation of AI innovation reflects this reality. The order is one of the most comprehensive sets of guidelines for AI to date and highlights a plethora of potential risks, from fraud to the development of weapons of mass destruction, but it serves only as a suggestion for technology developers rather than enforceable law. 

The U.S. must play a role in the international regulation of AI development. It must translate guidelines outlined in the executive order into extensive domestic legislation specifically geared toward AI data usage, transparency, human oversight, accuracy and cybersecurity. 

International cooperation must be the foundation of the federal government’s role in AI regulation. To that end, the best course of action is to take inspiration from the European Union’s (EU) AI Act when writing national legislation. The act not only delineates enforceable guidelines that AI technologies must follow but also classifies AI systems into tiers based on the risk they pose. 

Forming a federal exploratory commission to understand and codify risk assessment of AI technologies in the U.S. would allow legislators to develop policy actions specific to each risk level. Formal risk assessment also guards against overstating the dangers of AI: With unfamiliar technology comes a tide of uncertainty, and uncertainty inflates perceived risk. 

Many individuals connect the dots from AI to nightmare apocalyptic scenarios of fully autonomous technologies, a fear that can only be mitigated by quantifying the risk associated with AI-based technology. Frameworks for doing so already exist, such as the Foundation Model Transparency Index developed at Stanford University, which grades model developers on how much they disclose about their systems. 
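To make the idea concrete, here is a minimal sketch in Python of how an indicator-based score in the spirit of the Transparency Index can work: a developer is graded on the fraction of binary disclosure indicators it satisfies. The indicators and scoring function below are illustrative assumptions, not the index’s actual rubric.

# A hedged sketch of an indicator-based transparency score. The indicator
# list is an illustrative placeholder, not Stanford's actual rubric.
INDICATORS = [
    "training data sources disclosed",
    "data licenses documented",
    "compute and hardware reported",
    "model architecture described",
    "known limitations documented",
    "downstream usage policy published",
]

def transparency_score(satisfied: set[str]) -> float:
    """Return the fraction of indicators a developer satisfies."""
    return sum(ind in satisfied for ind in INDICATORS) / len(INDICATORS)

# Example: a developer that discloses four of the six indicators.
score = transparency_score({
    "training data sources disclosed",
    "model architecture described",
    "known limitations documented",
    "downstream usage policy published",
})
print(f"transparency score: {score:.0%}")  # -> 67%

A single comparable number per developer is exactly what risk-tiered legislation needs in order to trigger different obligations at different risk levels.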

Even more important than the safety considerations associated with AI are the ethical considerations, highlighted in the Blueprint for an AI Bill of Rights, the groundwork for AI legislation in the U.S. In particular, the issue of algorithmic discrimination in newer AI technologies needs to be addressed by legislation. 

Algorithmic discrimination refers to biases in training data, model architecture and model usage that result in unjust differences in treatment or outcomes between demographic groups. The problem is pervasive in AI technologies, skewing model outcomes and profoundly impacting multiple industries. AI algorithms that demonstrate such bias have previously been discontinued, as in the case of Amazon’s experimental recruiting tool. Trained on resumes submitted to the company’s predominantly male engineering departments, the model learned to penalize any resume containing the word “women’s,” building gender bias into its recommendations. 
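To illustrate the mechanism (not Amazon’s actual system), here is a minimal Python sketch using scikit-learn: a classifier trained on historically skewed hiring outcomes learns a negative weight for the token “women” (the vectorizer strips the possessive), reproducing the bias in its data. All resumes and outcomes below are fabricated for illustration.

# Hypothetical sketch of learned hiring bias; all data is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past resumes and whether the applicant was hired (1)
# or rejected (0). The historical outcomes are skewed against resumes
# mentioning "women's".
resumes = [
    "software engineer java leadership experience",
    "software engineer python systems design",
    "women's chess club captain software engineer",
    "women's coding society python developer",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "women" comes out negative: the model has
# encoded the historical skew as a penalty on the word itself.
idx = vectorizer.vocabulary_["women"]
print(f"weight for 'women': {model.coef_[0][idx]:.2f}")

Nothing in the code mentions gender as a feature; the bias enters purely through the correlation between one word and past outcomes, which is why such models can discriminate even when protected attributes are never collected.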

Despite being included in the Blueprint for an AI Bill of Rights, Biden’s executive order and special publications from the National Institute of Standards and Technology, policy recommendations for ethically designing AI technologies are not yet clearly delineated. The U.S. should legislate ethical guidelines for data collection and model functionality, focusing on the inclusion of data from marginalized populations in the training and development stages of AI technologies to prevent algorithmic bias. 
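One hypothetical form such a guideline could take is a mandated representation audit of training data. The sketch below, with made-up group labels and an assumed 10% floor, shows the basic check: measure each demographic group’s share of a dataset and flag groups that fall below the threshold.

# A hypothetical dataset audit; group names and the 10% threshold are
# illustrative assumptions, not any agency's actual standard.
from collections import Counter

def audit_representation(group_labels: list[str], threshold: float = 0.10) -> dict[str, bool]:
    """Map each group to True if its share of the data meets the threshold."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return {group: counts[group] / total >= threshold for group in counts}

# Example: one group makes up only 3% of the data and is flagged.
labels = ["group_a"] * 85 + ["group_b"] * 12 + ["group_c"] * 3
print(audit_representation(labels))
# -> {'group_a': True, 'group_b': True, 'group_c': False}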

The actions outlined above should be expanded on the international stage. Even more essential than developing domestic legislation is the creation of global standards for AI regulation. AI innovation benefits from scale: A unified approach to the development of safe and accessible AI technologies will improve data collection methods and minimize unnecessary costs. Responsible, safe AI development is only possible at a global scale if regulations are standardized across all contributing nations. To begin this process, the U.S. should work more earnestly to develop international commissions to create enforceable guidelines for AI regulation. 

The U.S. is a leading investor in and innovator of AI technologies, particularly since the advent of GPT-4. Therefore, it must play just as active a role in AI governance on the international stage.

Shreya Tiwari is a freshman from Austin, Texas, majoring in Biomedical Engineering. 

