Google Triumphs on the Nobel Stage as Tough Antitrust Fight Looms

Google, thanks to the tens of billions of dollars it makes every year from its online search business, has long pursued giant research projects that could one day change the world.

On Wednesday, the Nobel Prize committee conferred considerable prestige on Google’s pursuit of big ideas. Demis Hassabis, the chief executive of the Mountain View company’s primary artificial intelligence lab, and John Jumper, one of the lab’s scientists, were among a trio of researchers who received the Nobel Prize in Chemistry for their efforts to better understand the human body and fight disease through A.I.

The two Google scientists won their Nobels a day after Geoffrey Hinton, a former Google vice president and researcher, was one of two winners of the Nobel Prize in Physics for his pioneering work on artificial intelligence.

The Nobel wins were a demonstration of the growing role artificial intelligence is playing in areas far beyond the traditional world of the high-tech industry, and were a reminder of Silicon Valley’s influence in nearly every corner of science and the economy.

“This is the year the Nobel committee got A.I.,” said Oren Etzioni, a professor emeritus of computer science at the University of Washington. “These prizes are a conscious recognition of how influential A.I. has become in the scientific world.”


The triumphant moment for Google was tempered by concerns that the commercial success that has allowed the company to pursue these long-term projects is under threat from antitrust regulators. The Nobel awards were also a reminder of worries that the tech industry isn’t paying enough attention to the implications of its full-throttle pursuit of building more powerful A.I. systems.

“We might find ourselves in a situation in which not only the solutions but even the questions being asked are actually being provided by the A.I.,” said Mohammed AlQuraishi, a Columbia University biologist. “It’s going to be very interesting navigating that as scientists and as humans.”

On Tuesday evening, the Justice Department said it could ask a federal court to force Google to break off parts of the company or change how it operates in order to eliminate its monopoly in online search.

Google is also facing off with the Justice Department in a Virginia federal court over claims that it broke antitrust laws to dominate the technology that places ads on websites. Closing arguments in that case are expected next month. And on Monday, a federal judge in California ordered Google to let other companies place app stores on its Android operating system for three years as part of a third antitrust case.

Google is not the only big tech company getting squeezed by regulators. The Justice Department has also sued Apple, arguing that the company makes it tough for customers to ditch its suite of devices and software. The Federal Trade Commission has filed antitrust lawsuits against Meta, saying it snuffed out competition when it bought Instagram and WhatsApp; and Amazon, arguing the company’s practices artificially inflate prices for products online.

As the largest tech companies fight off concerns over monopolist behavior, they are going all-in on A.I. — so much so that regulators are arguing that the companies must be reined in now before they use their power to take control of the young market for A.I. systems.

“A.I. is coming to chemistry and going to Washington,” said Erik Brynjolfsson, director of the Stanford Digital Economy Lab. “You may not be interested in A.I. but A.I. is interested in you.”

In its Tuesday court filing, the Justice Department said it believed that any efforts to tame Google’s search monopoly should take into account its ability to “leverage its monopoly power to feed artificial intelligence features.”

The Justice Department said it was considering asking the U.S. District Court for the District of Columbia, which in August agreed with the government that Google abused a search monopoly, to take steps to limit Google’s power in the new technology, including allowing websites to opt out of having their content used in the development of Google’s artificial intelligence systems.

The Federal Trade Commission and the Justice Department this year reached a separate deal clearing the way for them to investigate other companies focused on A.I. development. The Justice Department has opened an inquiry into Nvidia, which makes computer chips essential to the technology, while the F.T.C. will be responsible for investigating Microsoft and its partner, the San Francisco company OpenAI.

(The New York Times sued OpenAI and Microsoft in December over copyright infringement of news content related to A.I. systems.)

In the early 1960s, when computer science was emerging as a field, the standard put-down was that any academic discipline that put “science” in its name wasn’t one. A computer, skeptics said, was a mere tool like a test tube or a microscope.

But as the technology has progressed, accelerated by recent advances in artificial intelligence, computer science has become a driving force behind discoveries across the sciences — in astronomy, biology, chemistry, medicine and physics.

“Chatbots are how most people know A.I., but the technology’s ability to speed scientific discovery is much more important,” Mr. Brynjolfsson said.

After OpenAI released its ChatGPT chatbot in late 2022, igniting an industrywide A.I. boom, some researchers turned up the volume on their concerns about how the technology could be used.

Hinton left Google, using his retirement as an opportunity to speak freely about his worry that the race toward A.I. could one day be catastrophic. He said on Tuesday that he hoped “having the Nobel Prize could mean that people will take me more seriously.”

Leading researchers such as Hassabis often describe artificial intelligence as a way to cure disease, battle climate change and solve other scientific mysteries that have long bedeviled the world’s researchers. The work that won a Nobel was a significant step in that direction.

DeepMind, Google’s main A.I. lab, created technology called AlphaFold that can rapidly and reliably predict the physical shape of proteins — the microscopic mechanisms that drive the behavior of the human body and all living things. By pinpointing protein structures, scientists can more quickly develop medicines and vaccines and tackle other scientific problems.

In 2012, Hinton, then a professor at the University of Toronto, published a research paper with two of his graduate students that demonstrated the power of an A.I. technology called a neural network. Google paid $44 million to bring them to the company.

About a year later, Google paid $650 million for Hassabis’s four-year-old start-up, DeepMind, which specialized in the same kind of technology. Hinton and Hassabis were part of a small academic community that had nurtured neural networks for years while the rest of the world had largely ignored it.

Hinton, 76, liked to call Hassabis, 48, his “grand-post-doc” because he had overseen the postdoctoral work of the academic who later oversaw Hassabis’s research.

Hassabis also worries that A.I. could cause a range of problems or even threaten humanity if it is not carefully controlled. But he thinks that remaining at the company is the best way to make sure its A.I. doesn’t cause problems.

A Google spokeswoman, Jane Park, said in a statement on Wednesday, “As a field, we have to proceed with cautious optimism and engage in a conversation with wider society about the risks in order to mitigate them, and unlock A.I.’s incredible ability to accelerate scientific discovery.”

When Google acquired DeepMind, Hassabis and his co-founders asked for assurances that Google would not use DeepMind’s technologies for military purposes and that it would establish an independent board that would work to ensure that its technologies were not misused.

“Of course it’s a dual-purpose technology,” Hassabis said during a news conference after winning the Nobel Prize. “It has extraordinary potential for good, but also it can be used for harm.”

Cade Metz, Steve Lohr and David McCabe are reporters with The New York Times. Teddy Rosenbluth contributed reporting. Copyright 2024, The New York Times. 
