Like a spaceship full of aliens landing on Earth, artificial intelligence technology seems to have come out of nowhere and instantly changed everything.

From A.I.-generated music that expertly mimics your favorite singer to virtual romantic partners, artificial intelligence technology is mesmerizing, scary, and increasingly accessible.

Businesses aren’t wasting any time pumping money into the technology. In addition to Microsoft’s $13 billion bet on ChatGPT-maker OpenAI, startups like Anthropic, Cohere, Adept AI, Character.AI, and Runway have raised hundreds of millions of dollars apiece in recent months.

As with much of the tech business, the people responsible for the innovation in A.I. are as central to the story as the technology itself. The names of today’s A.I. innovators aren’t as familiar as the established members of the tech industry pantheon, but the influence of these computer scientists and technologists is quickly spreading through their work.

Given how profound and potentially risky their work’s impact on society could be, many of these A.I. innovators have strongly held—and often conflicting—opinions about the technology’s future, its power, and its dangers.

Fortune took a look at some of the key figures setting the A.I. agenda through their work and their viewpoints. Some work at big companies, some at startups, and some in academia; some have been toiling for years in specialized branches of A.I., while others are more recent converts. If they have one thing in common, it’s their unique ability to influence how this powerful technology affects the world. Here, listed in no particular order, are 13 of today’s most important A.I. innovators.


Daniela Amodei

Cofounder, Anthropic

“It kind of blows my mind that A.I., given the potential reach that it could have, is still such a largely unregulated area.”


Daniela Amodei and her brother Dario quit their jobs at OpenAI to cofound Anthropic at the end of 2020, reportedly because of concerns that OpenAI’s deal with Microsoft would increase pressure to release products quickly at the expense of safety protocols.

The company’s chatbot, called Claude, is similar to OpenAI’s ChatGPT but is trained with a technique referred to as “constitutional AI,” which sets out principles for the model, such as choosing responses that are, according to the company, the “least racist and sexist” and most encouraging of life and liberty. The approach is based on what Amodei, 35, refers to as Anthropic’s triple-H framework for A.I. research: helpful, honest, and harmless.

“It kind of blows my mind that A.I., given the potential reach that it could have, is still such a largely unregulated area,” Amodei said in an interview last year, expressing hope that standard-setting organizations, industry groups, and trade associations will step into the breach and provide guidance on what a safe model looks like. “We need all those actors working together to get to the positive outcomes we’re all hoping for.”

In addition to developing a “next-gen algorithm” for its Claude chatbot, Anthropic has been hard at work raising capital. It recently raised $450 million from backers including Google, Salesforce, and Zoom Ventures. (Less glamorously, an earlier $580 million round was led by disgraced crypto entrepreneur Sam Bankman-Fried’s Alameda Research Ventures; Anthropic has not said whether it will return the money.)


Yann LeCun

Chief A.I. scientist, Meta

“The upcoming AI systems are going to be an amplification of human intelligence in the way that mechanical machines have been an amplification of physical strength. They’re not going to be a replacement.”


“Prophecies of AI-fueled doom are nothing more than a new form of obscurantism,” says the French-born LeCun in a preview for an upcoming debate in which he’ll square off against an MIT researcher about whether A.I. poses an existential threat to humanity.

An outspoken advocate of the view that A.I. will amplify rather than replace human intelligence, LeCun, 62, is widely respected as one of the leading experts in the field of neural networks, which have allowed for breakthroughs in computer vision and speech recognition. His work on convolutional neural networks, a foundational neural network design, and on broadening their applications earned him the 2018 Turing Award, considered the Nobel Prize of computing, alongside fellow deep learning pioneers Geoff Hinton and Yoshua Bengio.

Needless to say, LeCun was not among the more than 200 signatories of the recent warning that A.I. poses an extinction-level risk to humanity.

A longtime computer science professor at New York University, LeCun joined Facebook (now Meta) in 2013 and now oversees the $700 billion company’s various artificial intelligence efforts. That hasn’t diminished his appetite for engaging in the major debates about A.I., such as the concern that the technology will take people’s jobs. In a Q&A for Martin Ford’s 2018 book Architects of Intelligence: The Truth About AI from the People Building It, LeCun took issue with a famous prediction of Hinton’s that radiologists, for example, would be put out of a job by A.I.; rather, he explained, it would free up radiologists’ time to spend with patients. He went on to say that he imagines some activities will become more expensive, like eating at a restaurant where a waiter serves food prepared by a human cook. “The value of things is going to change, with more value placed on human experience and less to things that are automated,” he told Ford.


David Luan

CEO and cofounder, Adept

“The pace of progress in AI is astounding. First text generation, then image generation, now computer use.”


Before cofounding Adept in 2022, Luan worked at some of the most important A.I. companies, including OpenAI and Google (he also did a brief stint as the director of A.I. at Axon, the maker of the Taser gun and police body cameras). He says the current moment in A.I. is the one he’s most excited about. “We’ve entered the industrialization age of AI. It’s now time to build factories,” Luan said at the Cerebral Valley A.I. Summit earlier this year.

The idea behind Adept is to provide people with an “AI teammate” that can perform computer-based tasks—for example, building a financial model on a spreadsheet—with a few simple text commands. In March, the company raised $350 million in funding at a valuation pegged at more than $1 billion by Forbes.

Luan, 31, said that he spends a lot of time thinking about the concerns that A.I. could replace people’s jobs, but that for “knowledge workers”—the customers that generative A.I. tools like Adept’s are focused on—the fears are overblown. “Instead of spending like 30 hours of your week updating Salesforce, you spend 1% of your week asking Adept to just do that for you and you spend 99% of the time talking to customers,” Luan said at the Cerebral Valley A.I. Summit.


Emad Mostaque

CEO, Stability AI

“If we have agents that are more capable than us that we cannot control that are going across the internet and [are] hooked up and they achieve a level of automation, what does that mean?”

Emad Mostaque, founder and CEO of Stability AI, at Fortune’s Brainstorm A.I. in December 2022.


Mostaque was born in Jordan but grew up in Bangladesh and the UK, where he earned his bachelor’s degree in computer science at Oxford University in 2005. Before founding Stability AI in 2020, he spent more than a decade working in hedge funds, according to the New York Times. The stint in finance seems to have provided a nice cushion to start Stability AI, which he reportedly funded himself at first, later taking money from investors including Coatue and Lightspeed Venture Partners.

The company helped to create the text-to-image model Stable Diffusion, which has been used to generate images that pay little heed to intellectual property rights or to concerns about depicting violence (the product, like some other A.I. tools, has also been criticized for amplifying racial and gender bias). For Mostaque, the priority is to keep the model open-source and without guardrails that restrict what content it can generate—although, in an effort to make Stability’s A.I. more commercially attractive, he did later train a version of Stable Diffusion on a dataset that had been filtered to remove pornographic images. “We trust people, and we trust the community,” he told the Times.

That attitude (as well as allegations that Mostaque has exaggerated some of his accomplishments, as recently detailed by Forbes) has drawn backlash from others in the A.I. community, public officials, and firms like Getty Images, which sued Stability AI for copyright infringement in February, alleging that the company copied 12 million images to train its A.I. model without a legal basis for using them.

Yet Stability AI’s tools have emerged as among the most popular and well-known representatives of generative A.I. And Mostaque, aged 40 and based in London, defies easy categorization. In March, he was among a group who signed an open letter calling for a pause in the development of A.I. systems more powerful than GPT-4, OpenAI’s most advanced model. His perspective on where A.I. is headed swings between extremes: he has recently said the technology could control humanity in the worst-case scenario, while suggesting on another occasion that A.I. will simply be uninterested in people.

“Because we can’t conceive of something more capable than us, but we all know people more capable than us. So, my personal belief is it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix: Humans are a bit boring, and it’ll be like, ‘Goodbye’ and ‘You’re kind of boring.’”


Fei-Fei Li

Co-director, Stanford’s Institute for Human-Centered AI

“It still feels surreal to be born into this time of history and be in the middle of this technology.”


When Li immigrated from China to the U.S. with her family at 16, she says she had to learn English from scratch while working to get good grades. Today, the co-director of Stanford’s Institute for Human-Centered AI is considered one of the leading lights on the ethical use of A.I.—through writings like “How to make A.I. that’s good for people”—as well as an advocate for diversity in the A.I. field.

Early in her career, Li built ImageNet, a large-scale dataset that has contributed to major developments in deep learning and A.I. Now, at Stanford, she’s been researching “ambient intelligence,” which uses A.I. to monitor activity in homes and hospitals. At Fortune’s Brainstorm A.I. conference in December, she discussed her work and why considering bias is critical.

“I work a lot in health care. It’s very obvious that if our data comes from certain populations or socio-economic classes, it will have a pretty profound downstream impact,” she said.

According to Li, 47, Stanford now conducts an ethics and society review process for A.I. research projects. “It gets us thinking about how to design fairness, design privacy awareness, and design human well-being and dignity into our technology.”

To boost inclusion in the A.I. field, Li co-founded a non-profit known as AI4ALL, which promotes diversity in A.I. education.

One note of controversy in Li’s career occurred during her stint as chief scientist of AI/ML at Google Cloud, when a Google contract to provide A.I. tech to the Pentagon caused an uproar among some employees in 2018. While the contract was not Li’s doing, critics felt her association with it—particularly some of her comments in leaked emails about how to portray the contract to the public—was at odds with her work as an advocate of ethical A.I.


Ali Ghodsi

CEO, Databricks

“We should embrace it, because it is here to stay. And I do think it’s going to change everything, and I think it’s going to be mostly positive.”


Ali Ghodsi straddles academia and business, with a foot in each world as an adjunct professor at UC Berkeley and the cofounder and CEO of Databricks. One principle that’s central to the Swedish-Iranian tech exec is his commitment to open source development.

Ghodsi’s work on the open source data processing tool Apache Spark provided the foundation for Databricks, which is valued at $38 billion. In April, Databricks released Dolly 2.0, an open source rival to ChatGPT that uses a question-and-answer instruction set created entirely by Databricks’ 5,000 employees. This means that any company can weave Dolly 2.0 into its own commercial products and services without any cap on usage.

Dolly is more proof of concept than viable product—the model is prone to errors, hallucinations and churning out toxic content. Dolly’s importance, however, is that it showed that A.I. models can be much smaller and cheaper to train and run than the massive proprietary large language models that underpin OpenAI’s ChatGPT or Anthropic’s Claude. And Ghodsi defends making Dolly so freely and easily accessible. “We’re committed to developing AI safely and responsibly and believe as an industry, we’re moving in the right direction by opening up models, like Dolly, for the community to collaborate on,” Ghodsi told TechCrunch in April.

While generative A.I. is getting a lot of the attention right now, Ghodsi, 45, believes that other types of artificial intelligence, particularly A.I. for data analysis, will have a profound effect across industries. “I think this is just the very beginning, and we are just scratching the surface on what A.I. and data analytics can do,” he told Fortune in March.


Sam Altman

CEO, OpenAI

“If someone does crack the code and builds a superintelligence, however you want to define that, probably some global rules on that are appropriate.”


Altman founded OpenAI with Elon Musk, Ilya Sutskever, and Greg Brockman in 2015, out of a fear that Google would become too powerful and control A.I.

Since then, OpenAI has turned into one of the most influential companies in the A.I. arena and emerged as the standard bearer for “generative A.I.”: Its ChatGPT tool is the fastest-growing app of all time, having garnered 100 million monthly active users just two months after its launch. DALL-E 2, another OpenAI product, is one of the most popular text-to-image generators, capable of producing high-resolution images with depth-of-field effects, shadows, shading, and reflections.

While he’s not an A.I. researcher or a computer scientist, Altman, 38, sees the tools as a stepping stone on a mission he shares with others in the field: developing a computer superintelligence known as artificial general intelligence, or AGI. He believes that “AGI is probably necessary for humanity to survive,” but has suggested he’ll be cautious as he works toward it.

Altman’s quest for AGI has not blinded him to the risks: He was among the most prominent names to sign the Center for AI Safety’s warning about A.I.’s threat to humanity. At a hearing before U.S. senators in mid-May, Altman called for A.I. regulation, saying rules should be created to incentivize safety “while ensuring that people are able to access the technology’s benefits.” (Some critics speculated that the regulation he called for could also create hurdles for a growing crop of open source competitors to OpenAI.)

A former president of startup incubator Y Combinator, Altman is skilled at raising money, according to a profile by Fortune’s Jeremy Kahn. That knack appears to have paid off big time with OpenAI’s $13 billion alliance with Microsoft.

While Musk is no longer affiliated with OpenAI and is reportedly launching a rival A.I. lab, Altman still cites Musk as a mentor who taught him to push the limits on “hard R&D and hard technology.” He has no plans to follow Musk on a mission to Mars, however: “I have no desire to go live on Mars, it sounds horrible. But I’m happy other people do.”


Margaret Mitchell

Chief ethics scientist, Hugging Face

“People say or think, ‘You don’t program, you don’t know about statistics, you are not as important,’ and it’s often not until I start talking about things technically that people take me seriously which is unfortunate. There is a massive cultural barrier in ML.”


Margaret Mitchell’s interest in A.I. bias began after a couple of troubling instances while she was working at Microsoft. The data she worked with for the company’s Seeing AI assistive technology, for example, contained odd descriptions of people’s race, she recalled in an interview last year. Another time, she fed a system images of an explosion, and the output described the wreckage as beautiful.

She realized it wouldn’t satisfy her to simply make A.I. systems perform better on benchmarks. “I wanted to fundamentally shift how we were looking at these problems, how we were approaching data and analysis of data, how we were evaluating and all of the factors we were leaving out with these straightforward pipelines,” she said.

That mission has come at a personal cost. Mitchell made headlines in 2021 when Google fired her and Timnit Gebru from their jobs as co-heads of the company’s A.I. ethics unit. The pair had published a paper detailing risks of large language models, including the environmental cost and racist and sexist language being funneled into training data. They were also outspoken about insufficient diversity and inclusion efforts at Google and clashed with management over company policies.

Mitchell and Gebru had already achieved significant breakthroughs in the A.I. ethics field, like publishing a paper with multiple other researchers on so-called “model cards,” which encourage more transparency on models by providing a way to document performance and identify limitations and biases.

At Hugging Face, the open-source machine learning platform she joined after Google, Mitchell has worked intensely on assistive tech and deep learning, and focuses on coding that helps build protocols for matters like ethical A.I. research and inclusive hiring.

Despite her background as a researcher and scientist, Mitchell says her focus on ethics leads people to assume she doesn’t know how to program. “It’s often not until I start talking about things technically that people take me seriously which is unfortunate,” Mitchell said on a Hugging Face blog last year. 


Mustafa Suleyman

Cofounder and CEO, Inflection AI

“Unquestionably, many of the tasks in white-collar land will look very different in the next five to 10 years.”


Known to friends and colleagues as “Moose,” Suleyman cofounded DeepMind, the research lab bought by Google in 2014, and went on to serve at Google as VP of AI Products and AI Policy. Since leaving Google, Suleyman has worked for VC firm Greylock and launched a machine learning startup known as Inflection AI.

Earlier this month, Inflection released its first product, a chatbot named Pi, short for “personal intelligence.” The current version of the bot can remember conversations with users and offer empathetic responses. Eventually, Suleyman says, it will be capable of serving as a personal “Chief of Staff” that can book restaurant reservations and handle other daily tasks.

Suleyman, 38, is enthusiastic about how we’ll soon talk to computers. Writing for Wired, he predicted that we’ll someday have “truly fluent, conversational interactions with all our devices,” which will redefine human-machine interaction.

Suleyman envisions a future where A.I. makes white-collar work look very different, but he also sees its potential to handle big challenges. On the latter, he thinks the technology can lower the cost of materials for housing and infrastructure and help allocate resources like clean water. Still, he’s a proponent of avoiding harms along the way, writing a warning in the Economist in 2018:

“From the spread of facial recognition in drones to biased predictive policing, the risk is that individual and collective rights are left by the wayside in the race for technological advantage.”


Sara Hooker

Director, Cohere For AI

“Part of what I think is going to be really important, especially when you think about things like misinformation or the ability to generate texts that might be used in nefarious ways, is we need better traceability.”


A former researcher at Google Brain, Sara Hooker reunited with her ex-colleagues last year when she joined Cohere, a Toronto startup dedicated to large language models and founded by Google Brain alums. It’s an arm’s-length reunion, though—Hooker is heading up a non-profit research lab called Cohere for AI that’s funded by Cohere but operates independently.

Cohere for AI describes its mission as “solving complex machine learning problems.” In practice that means everything from research papers on making LLMs safer and more efficient to the Scholars Program, which seeks to broaden the pool of people involved in A.I. by recruiting talent from all over the world.

One of the criteria to be eligible for the Scholars Program is that a person has not previously published a research paper on machine learning.

“When I talk about improving geographic representation, people assume this is a cost we are taking on. They think we are sacrificing progress,” Hooker says. “It is completely the opposite.” Hooker would know. She grew up in Africa, and helped establish Google’s research lab in Ghana.

Hooker also pushes for ML models and algorithms that are accurate and explainable. Speaking to Global News recently, Hooker shared her thoughts on “model traceability,” or the ability to tell when a text was generated by a model rather than a human, and why it needs to improve. “Part of what I think is going to be really important, especially when you think about things like misinformation or the ability to generate texts that might be used in nefarious ways, is we need better traceability,” she said.

And with Cohere having recently raised $270 million in funding from Nvidia, Oracle, and Salesforce Ventures, Hooker’s non-profit lab is tied to a startup with some marquee backers.


Rumman Chowdhury

Scientist at Parity Consulting and Responsible AI Fellow, Harvard University’s Berkman Klein Center

“There’s rarely the fundamental question asked: should this thing even exist?”


Chowdhury’s career in A.I. kicked off at Accenture, where she led the company’s responsible A.I. work and oversaw the design of an algorithmic tool to identify and mitigate bias in A.I. systems. She left to found an algorithmic auditing company known as Parity AI, which was later acquired by Twitter. At Twitter, she directed the ML Ethics, Transparency, and Accountability team, a group of researchers and engineers working to mitigate algorithmic harms on the social platform, something she says became challenging after Twitter was acquired by Elon Musk.

She played a leading role among a group of top A.I. developers who won White House support for a generative A.I. “red teaming” event at the DEF CON 31 cybersecurity conference in August, which aims to improve security by probing models from Anthropic, Google, Hugging Face, OpenAI, and others for quirks and limitations.

Another A.I. expert on the regulation train, Chowdhury, 43, wrote in Wired recently that there ought to be a global governance body for generative A.I. She pointed to Facebook’s Oversight Board, an interdisciplinary global group focused on accountability, as an example of what such a body could look like.

“An organization like this should be a consolidated ongoing effort with expert advisory and collaborations, like the IAEA, rather than a secondary project for people with other full-time jobs,” Chowdhury wrote. “Like the Facebook Oversight Board, it should receive advisory input and guidance from industry, but have the capacity to make independent binding decisions that companies must comply with.” 

She’s also pushed for what she calls integrated bias assessments and audits in the product development process, which would allow inspection of something that’s already been built while also putting mechanisms in place from the early stages to decide whether something should make it past the idea phase at all.

“There’s rarely the fundamental question asked: should this thing even exist?” she said during a panel discussion on responsible A.I.


Cristóbal Valenzuela

Cofounder and CEO, Runway ML

“The history of generative art is not new. The idea of involving an autonomous system in the art-making process has been around for decades outside of the recent AI boom. What’s different is that now we are entering a synthetic age.”


Valenzuela got into A.I. after learning about neural networks through the work of artist and programmer Gene Kogan. He became so fascinated that he left his home in Chile to become a researcher at NYU Tisch’s Interactive Telecommunications Program.

It was there that the idea for Runway came to him as he worked to make machine learning models accessible to artists. “I started brainstorming ideas around that and then I realized that ‘a platform for models’ already has a name: a runway,” he told cloud computing company Paperspace.

While many artists have embraced A.I., using tools like Runway’s for visual effects in movies or for creating photographs, the 33-year-old Valenzuela wants even more artists to embrace the technology.

To that end, the company helped develop the text-to-image model Stable Diffusion. On its own, it built Gen-1, an A.I. video-editing model that can transform existing footage fed to it by users, and followed up this spring with Gen-2, which lets users generate videos from text. With entertainers like the rock band Weezer using Runway’s models to make a tour promo video, and another artist using Gen-2 to make a short film, tools like Runway’s have gotten buzz for their potential to change how Hollywood approaches filmmaking.

In a talk with MIT, he said the company is working on helping artists find use cases for its tools and on reassuring them that their jobs won’t be taken away. He also argues that in many cases we’re already using A.I. to make artwork, even if we don’t realize it, since a photo taken on an iPhone can pass through multiple neural networks that optimize the image.

“It’s just another technology that will help you do things in a better way and express you better,” he said.


Demis Hassabis

CEO, Google DeepMind

“At DeepMind, we’re quite different from other teams in that we’re pretty focused around this one moonshot goal of AGI. We’re organized around a long-term roadmap, which is our neuroscience based thesis, which talks about what intelligence is and what’s required to get there.”

With a PhD in cognitive neuroscience from University College London, Hassabis made waves by cofounding the neural network startup DeepMind more than a decade ago. The company, which was acquired by Google in 2014, aims to build powerful computer networks that mimic the way the human brain works. In April, Hassabis took command of Google’s overall A.I. efforts, after a reorg that merged the internet giant’s various A.I. teams.

Hassabis says he got into programming through his love of chess. The former child chess prodigy even bought his first computer with winnings from chess tournaments. Now, he brings the problem-solving and planning the game demands, plus his neuroscience background, to his work on A.I., which he believes is going to be “the most beneficial thing to humanity ever.”

He thinks AGI could arrive within a decade, and he describes DeepMind’s neuroscience-inspired A.I. as one of the best ways to address complex questions about the brain. “We could start shedding light on some of the profound mysteries of the mind like the nature of consciousness, creativity, and dreaming,” he told Ford. As for whether machine consciousness is possible, he says he’s open-minded, but thinks “it could well turn out that there’s something special about biological systems” that machines couldn’t match.

In 2016, DeepMind’s A.I. system AlphaGo beat Lee Sedol, one of the world’s top players of the strategy game Go, in which players place stones on a 19-by-19 grid, in a best-of-five match viewed by more than 200 million people online. Lee’s defeat was especially shocking because experts had said such an outcome wasn’t expected for another decade.

Moments like that have made DeepMind the leading face of AGI. But it’s not all games. DeepMind is behind AlphaFold 2, an A.I. system that has predicted the 3-D structures of almost every known protein; DeepMind has made those predictions available in a public database. It’s a breakthrough that could accelerate drug development, and it earned Hassabis and senior staff research scientist John Jumper a $3 million Breakthrough Prize in Life Sciences. Hassabis also cofounded and runs a new Alphabet-owned company, Isomorphic Labs, dedicated to using A.I. to aid drug discovery.





