
Illustration by Daniel Hertzberg

Stakeholders Assess Aftereffects of AI

It was an unusually cold autumn in London in 1952, and coal-fired power stations and residences were burning more coal than usual to keep businesses open and homes warm.

But all of that stopped on 4 December when an anticyclone stalled over the city, trapping and stagnating cold air under a layer of warm air. The stagnant air created a thick fog throughout London that mixed with smoke from coal fires, exhaust from motor vehicles, and other pollutants to form a smog.

The smog severely limited visibility, making driving nearly impossible, halting all public transportation save the London Underground, and preventing ambulances from operating. It also made thousands of people sick. In the following weeks, national health officials revealed that more than 4,000 people had died as a result of the smog.

The five-day smog, later dubbed the Great Smog of 1952, spurred officials to consider the environmental effects of technology in a way not seen before. The City of London (Various Powers) Act of 1954 and the Clean Air Acts of 1956 and 1968 banned black smoke emissions and required urban area residents and factory operators to convert to smokeless fuels to heat homes and power manufacturing plants.

“As these residents and operators were necessarily given time to convert, however, fogs continued to be smoky for some time after the Act of 1956 was passed,” according to the British Public Weather Service. “In 1962, for example, 750 Londoners died as a result of a fog, but nothing on the scale of the 1952 Great Smog has ever occurred again.”

Following the Great Smog, researchers and environmental scientists studied the impact of power production methods on air pollution and their effects on society at large. However, this research began only after the technology was already in place and the smog had damaged the environment and public health.

Technologists and corporations are attempting to take the opposite approach with artificial intelligence (AI). While some products that use AI are already on the market, adoption is not yet widespread, and there is still a window of time to debate how AI should be implemented.

One organization taking the helm on this is OpenAI, created in 2015 by a group of Silicon Valley investors. The nonprofit’s goal is to ensure that “artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity,” according to its mission statement. “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

OpenAI has so far been instrumental in the development of AI, creating a system that generated coherent text at an unprecedented level in February 2019 and the first AI to beat world champions in an esports game.

To continue furthering OpenAI’s research and mission, Microsoft said it would invest $1 billion to support OpenAI’s plans to build AGI with “widely distributed economic benefits.”

“We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed,” said OpenAI co-founder Sam Altman—a Silicon Valley investor—in a statement to Reuters. “We are excited about how deeply Microsoft shares this vision.”

In September 2019, the U.S. Chamber of Commerce’s Technology Engagement Center and Center for Global Regulatory Cooperation released their Artificial Intelligence Principles to govern the use and regulation of AI.

“The advent of artificial intelligence will revolutionize businesses of all sizes and industries, and has the potential to bring significant opportunities and challenges to the way Americans live and work,” said Tim Day, senior vice president of the Chamber Technology Engagement Center, in a press release. “The U.S. Chamber’s artificial intelligence principles place the Chamber at the forefront of the national conversations on AI and will serve as a comprehensive guide to address the policy issues pertaining to AI for federal, state, and local policymakers.”

The 10 AI principles include, but are not limited to, being mindful of existing rules and regulations, creating an AI-ready workforce, and supporting private and public investment in AI research and development.

Another principle is adopting risk-based approaches to AI governance, such as ensuring that higher-risk AI use cases receive more scrutiny during development than lower-risk ones.

“To avoid stifling innovation while keeping up the rapid pace of technological change, industry-led, voluntary accountability mechanisms should recognize the different roles companies play within the AI ecosystem and focus on addressing concrete harms to individuals that can be empirically linked to the use of AI technologies,” the Chamber of Commerce explained. “Any regulation of AI should be specific, narrowly tailored to appropriate use cases, and weighed against the economic and social benefits forfeited by its enactment.”

The Chamber of Commerce also included pursuing robust and flexible privacy regimes as one of its AI principles, because AI depends on data to function, and storing and analyzing personal data significantly affects consumers’ privacy.

“Clear and consistent privacy protections for personal privacy are therefore a necessary component of trustworthy AI,” the Chamber added. “Governments should pursue robust but flexible data protection regimes that enable the collection, retention, and processing of data for AI development, deployment, and use while ensuring that consumer privacy rights are preserved.”

The National Science Foundation (NSF) is also working to ensure that the data AI technology relies on is unbiased, said Dr. Dawn Tilbury, head of the NSF engineering directorate, at the POLITICO AI Summit in September.

The NSF partnered with Amazon to fund research on understanding fairness in AI systems and has invited researchers to submit proposals on how to understand if an AI system is fair.

“Algorithms that come out are built on data, so you have to think about how to quantify the fairness of a training set,” Tilbury said. This understanding is critical because all humans have bias, so existing human-created data sets used to train AI systems may be biased, she added.
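One simple way to make Tilbury’s point concrete is to check whether favorable outcomes in a training set are distributed evenly across groups. The Python sketch below uses invented group names and made-up records to compute per-group favorable-outcome rates and a disparate-impact ratio; it is one common illustrative metric, not the NSF’s or Amazon’s methodology.

# A minimal sketch of one way to quantify the fairness of a training set:
# compare favorable-label rates across groups (demographic parity) in toy,
# made-up data. Illustrative only; not any agency's actual method.
from collections import defaultdict

# Hypothetical training records: (group, label), where label = 1 is the
# favorable outcome (e.g., "recommend leniency").
training_set = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, label in training_set:
    counts[group][0] += label
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
for group, rate in rates.items():
    print(f"{group}: favorable-outcome rate = {rate:.2f}")

# A disparate-impact ratio well below 1.0 suggests the labels already
# favor one group before any model is trained on them.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio = {ratio:.2f}")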

For instance, take a human-collected data set used to create an algorithm to help judges make sentencing decisions after someone has been convicted of a crime.

“It would obviously be improper to use race as one of the inputs to the algorithm,” explained John Villasenor, Brookings nonresident senior fellow of governance studies at the Center for Technology Innovation, in a blog post on AI and bias. “But what about a seemingly race-neutral input such as the number of prior arrests? Unfortunately, arrests are not race neutral: There is plenty of evidence indicating that African Americans are disproportionately targeted in policing. As a result, arrest record statistics are heavily shaped by race.”

Because of this, an algorithm created using an existing prior arrest data set to provide sentencing guidance could unintentionally result in harsher sentences for African Americans than for other offenders.
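Villasenor’s arrest-record example can be illustrated with a small simulation. In the hedged Python sketch below, two hypothetical groups engage in identical underlying behavior, but one group’s incidents are more likely to produce a recorded arrest; a naive rule keyed to prior arrests then recommends harsher sentences for that group far more often. All names, probabilities, and the threshold are invented for illustration.

# Illustrative simulation of proxy bias: a seemingly neutral feature
# (prior arrest count) still encodes group membership when one group
# is policed more heavily. Numbers and the rule are made up.
import random

random.seed(0)

def simulate_person(arrest_prob, incidents=5):
    # Every simulated person has the same number of underlying incidents;
    # only the chance that an incident produces a recorded arrest differs.
    return sum(random.random() < arrest_prob for _ in range(incidents))

def harsh_sentence_recommended(prior_arrests, threshold=2):
    # A naive sentencing rule keyed to the "neutral" prior-arrest count.
    return prior_arrests >= threshold

groups = {"over-policed group": 0.6, "other group": 0.3}
for name, arrest_prob in groups.items():
    people = [simulate_person(arrest_prob) for _ in range(10_000)]
    harsh_rate = sum(harsh_sentence_recommended(p) for p in people) / len(people)
    print(f"{name}: harsh-sentence recommendation rate = {harsh_rate:.1%}")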

The NSF and Amazon research project, dubbed the Program on Fairness in Artificial Intelligence, is designed to contribute to trustworthy AI systems that can be accepted and deployed to address large societal problems.

“Specific topics of interest include, but are not limited to, transparency, explainability, accountability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity,” according to the NSF. “Funded projects will enable broadened acceptance of AI systems, helping the U.S. further capitalize on the potential of AI technologies.”

The U.S. government is also looking at how to implement and advance AI, but it currently lags far behind other nations—such as China—said Congressional AI Caucus members U.S. Representative Jerry McNerney (D-CA) and U.S. Representative Pete Olson (R-TX) at the POLITICO AI Summit.

“There is no real bold plan—there needs to be a roadmap, an initiative,” McNerney said. “We need standards and to make sure our data sets are good. We need to make the investment that as a society we want AI coming out of this country.”

This is critical to prevent AI from being used to impose a surveillance state on society, such as the AI-enabled surveillance, tracking, and analysis China uses to target minority groups and dissenters, McNerney said.

“China is our biggest competitor and they are ahead of us because they’re spending 80 percent more money than we are in D.C.,” Olson said. “We need to start investing.”
