Meet The Humans Trying To Keep Us Safe From AI
Just a year ago, the idea of having a meaningful conversation with a computer was the stuff of science fiction. Since OpenAI launched ChatGPT last November, life has started to feel more like a fast-paced techno-thriller. Chatbots and other generative AI tools are beginning to change how people live and work. But whether that plot turns out utopian or dystopian will depend on who helps write it.
Fortunately, as artificial intelligence grows, so does the number of people building and studying it. This is a more diverse crowd of leaders, researchers, entrepreneurs, and activists than the one that laid the foundations of ChatGPT. Although the AI community remains dominated by men, in recent years some researchers and companies have pushed to make it more welcoming to women and other underrepresented groups. And the field now includes many people concerned with more than just building algorithms or making money, largely thanks to movements led by women, who are exploring the ethical and social implications of the technology. Here are some of the humans shaping this accelerating story. -Will Knight
About the art
"I wanted to use generative intelligence to reveal the potential and anxiety as we explore our relationship with this new technology," says artist Sam Cannon, who worked with four photographers to perfect the portraits against AI-generated backgrounds. : "It's like a conversation: I give the AI images and ideas, and in return the AI presents itself."

Rumman Chowdhury led Twitter's ethical AI research until Elon Musk acquired the company and laid off her team. She is the cofounder of Humane Intelligence, a nonprofit that uses crowdsourcing to uncover vulnerabilities in AI systems, running contests that challenge hackers to goad algorithms into misbehaving. Its first event, scheduled for this summer with support from the White House, will test generative AI systems from companies including Google and OpenAI. Chowdhury says large-scale public testing is needed because AI systems have such far-reaching repercussions: "If the results are going to affect society writ large, then aren't the best experts the people in society writ large?" -Khari Johnson

Sarah Bird's job at Microsoft is to keep the generative AI the company is adding to its office applications and other products from going off the rails. As she has watched text generators like the one behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and harmful code. Her team works to contain that dark side of the technology. AI could change many lives for the better, Bird says, but "none of that is possible if people are worried about the technology producing stereotyped outputs." -K.J.

Yejin Choi, a professor in the University of Washington's school of computer science and engineering, is developing an open source model called Delphi, designed to have a sense of right and wrong. She is interested in how humans perceive Delphi's moral pronouncements. Choi wants AI systems that don't require resources on the scale of OpenAI's and Google's. "The current focus on scale is very unhealthy for a variety of reasons," she says. "It's a total concentration of power, it's just way too expensive, and it's unlikely to be the only way." -W.K.

In 2017, Margaret Mitchell founded Google's ethical AI research group. She was fired four years later after a dispute with superiors over a paper she coauthored, which warned that the large-language-model technology behind ChatGPT can reinforce stereotypes and cause other harms. Mitchell is now chief ethics scientist at Hugging Face, a startup building open source AI software for programmers. She works to ensure that the company's releases don't deliver nasty surprises, and she advocates putting people ahead of algorithms. Generative models can be helpful, she says, but they can also warp people's sense of truth: "We risk losing touch with the facts of history." -K.J.

When Inioluwa Deborah Raji started out in AI, she worked on a project that found bias in facial analysis algorithms: they were least accurate for women with dark skin. The findings prompted Amazon, IBM, and Microsoft to stop selling facial recognition technology. Raji is now working with the Mozilla Foundation on open source tools that help audit AI systems, including large language models, for flaws such as bias and inaccuracy. Raji says the tools can help communities harmed by AI challenge the claims of powerful tech companies. "People actively deny harm," she says, "so gathering evidence is integral to any progress on this front." -K.J.

Daniela Amodei previously worked on AI policy at OpenAI, helping to lay the groundwork for ChatGPT. But in 2021 she and several others left the company to start Anthropic, a public-benefit corporation that has charted its own approach to AI safety. The startup's chatbot, Claude, has a "constitution" governing its behavior, based on sources including the UN's Universal Declaration of Human Rights. Amodei, Anthropic's president and cofounder, says the idea curbs bad behavior today and may help constrain more powerful AI systems in the future: "It could be really important to think long-term about the potential impacts of this technology." -W.K.

Lila Ibrahim is chief operating officer of Google DeepMind, a research unit at the center of Google's generative AI projects. She considers running one of the world's most powerful AI labs less a job than a moral calling. Ibrahim joined DeepMind five years ago, after nearly two decades at Intel, in hopes of helping AI advance in ways that benefit society. One of her roles is to chair an internal review council that discusses how to widen the benefits of DeepMind's projects and steer away from bad outcomes. "I thought if I could bring some of my experience and expertise to help bring this technology into the world in a more responsible way, then it was worth being here." -Morgan Meaker
This article appears in the July/August 2023 issue.