The U.S. Must Fill Gaps in AI Policy


Artificial intelligence developers are in a difficult position. Consumers demand rapid innovation from AI products, yet many of those applications carry consequential outcomes, and careless development can cause serious harm. Law enforcement agencies, creditors, advertisers and employment agencies, among many other influential groups, already integrate AI systems into their value chains. It is therefore paramount that AI policy protect consumers, developers and the general public from the wide range of risks that the widespread development and deployment of AI can pose.

Other powerful technologies, such as the internet, are governed by comprehensive policy designed to prevent unintended consequences and misuse. No equivalent exists for AI systems, despite how consequential their mistakes can be. In developing the credit evaluation algorithm behind the Apple Card, Apple inadvertently created a system that offered men up to twenty times the credit limit offered to women with equal or higher credit scores. Oppressive regimes could likewise employ American-made facial recognition and digital surveillance systems to track and muzzle dissidents, much as the Chinese government has used AI systems in its crackdown on Uyghur Muslims.

Many academics, including Dr. Maria Stefania Cataleta, an Italian lawyer and human rights lecturer at La Sapienza University, cite a need to integrate ethics and values into how AI systems learn and weigh information. Cataleta raises a central question about liability: “Who is the subject legally responsible for a given activity: The algorithm? The computer scientist? The user?”

In determining culpability, it can be extremely difficult to pinpoint what drives an AI system’s output. Chatbots such as ChatGPT and Google’s LaMDA excel at language processing, and both have been marketed as alternatives to search engines and research tools even though they often produce inaccurate information. The code and inputs behind each chatbot are so complex that most users would not know where to begin in tracing how an answer was produced, and neither chatbot provides a confidence score indicating how reliable a given output is likely to be. Google developers recognized LaMDA’s limited capabilities, so executives delayed its public release to further develop the software, maintain trust and protect consumers. OpenAI, by contrast, hoped to be the first mover on a powerful AI chatbot and showed little restraint in the face of development shortcomings, delivering an incomplete product that can harm consumers. Although ChatGPT displays a disclaimer that its outputs may not be accurate, users cannot reliably tell whether an output is wrong without prior knowledge of the topic.

Additionally, some users find ways to turn AI systems away from their primary functions in order to disrupt, misinform, harass or victimize others. Companies have access to some federal guidance for preventing their systems from being abused, but many gaps remain in law enforcement’s ability to stop active misuse. The lack of effective enforcement can cause serious harm and erode public trust in a company’s products.

In one horrific category of AI abuse, bad actors have used face-swap or “deepfake” AI systems to generate pornographic content from photos of people who never consented, a practice many call “image-based sexual violence.” Perpetrators are by nature difficult to track down, and local law enforcement lacks the capability and established procedures to protect victims and take down malicious content. California, Virginia, New York and Georgia are the only four states that have enacted legislation banning non-consensual deepfake pornography, and only a few other states, including New Jersey, are beginning to develop similar laws. Victims living outside these states are left without answers or practical legal options.

Despite the trouble surrounding the use and development of AI systems, related guidance and legislation remain limited. The Federal Trade Commission relies on broader legislation, including the Fair Credit Reporting Act, the Equal Credit Opportunity Act and Section 5 of the Federal Trade Commission Act (which addresses, among many issues, the misleading of consumers), to pursue concerns and violations involving AI systems. However, all of these acts were written when AI was far less developed and less widely used than it is today, and none of them mentions AI systems directly. In the case where Apple’s AI mistakenly gave men higher credit limits, the FTC used the Equal Credit Opportunity Act to sue Apple so it would fix its system and make amends for its mistake.

Although sometimes effective, reactive enforcement cannot solve every problem. In Apple’s case, the error was so large that customers easily noticed something was wrong, but other AI systems may make similar discriminatory mistakes on a scale too subtle to detect. Internal or external evaluation requirements would offer a more effective way to prevent AI bias at every level, especially when systems determine high-stakes outcomes such as a person’s credit limit.

This criticism is not meant to discount the voluntary guidance and research already working to mitigate the risks posed by AI systems. The National Institute of Standards and Technology recently published a voluntary AI Risk Management Framework that aims to help companies navigate the many risks AI systems pose and evaluate the effectiveness and trustworthiness of individual systems. Efforts by the National Science Foundation, the Department of Defense, the Department of Labor and many other federal agencies continue to advance research into risk management and sustainable AI development. These important first steps will hopefully lead to policy that rewards sustainable AI development and protects consumers. Even so, research and guidance cannot substitute for effective, balanced policy. The longer officials neglect policy implementation, the harder it will become to adapt the policy environment to more advanced AI systems, protect consumers and promote sustainable competition in the industry.
