This past summer, Meta's chief executive, Mark Zuckerberg, offered Rishabh Agarwal millions of dollars in stock and salary to join the company's ambitious new AI lab.
Zuckerberg's vision for the lab was clear: to build 'superintelligence,' a technology that could surpass the human brain. No one knows how to achieve such a feat, but he urged Dr. Agarwal to take a leap of faith.
In a rapidly evolving world, Zuckerberg emphasized that the greatest risk one could take was, paradoxically, to avoid taking any risk at all.
But Dr. Agarwal, who already worked at Meta, declined the offer, opting instead to join a different venture.
Dr. Agarwal is one of more than 20 researchers who have recently left prominent AI projects at Meta, OpenAI, Google DeepMind and other major tech companies to join a new Silicon Valley startup, Periodic Labs. Many of them walked away from pay packages worth tens or even hundreds of millions of dollars.
While established AI labs pursue lofty, somewhat undefined goals like ‘superintelligence’ and ‘artificial general intelligence,’ Periodic Labs is charting a more focused course. Their mission: to develop AI technology specifically designed to fast-track groundbreaking scientific discoveries in fields such as physics and chemistry.
Liam Fedus, a co-founder of Periodic Labs, asserts that the true purpose of AI isn’t merely to automate white-collar tasks, but rather to significantly advance scientific progress.
Fedus himself was part of the small, groundbreaking team at OpenAI responsible for creating the widely popular online chatbot, ChatGPT, in 2022. He departed OpenAI in March to establish Periodic Labs alongside Ekin Dogus Cubuk, a former researcher at Google DeepMind, Google’s premier AI division.
Indeed, several prominent AI companies are already exploring technologies aimed at accelerating scientific discovery. Notably, two Google DeepMind researchers recently won a Nobel Prize for their work on AlphaFold, a system that predicts protein structures and could help accelerate drug discovery.
Experts often contend that large language models (LLMs), the core technology behind chatbots, are poised to deliver major scientific breakthroughs. Both OpenAI and Meta claim their LLMs are already contributing to progress in areas such as drug discovery, complex mathematics, and theoretical physics.
OpenAI spokesman Laurance Fauconnet stated, ‘We believe advanced A.I. can accelerate scientific discovery, and OpenAI is uniquely positioned to lead this charge.’
Fedus, however, argues that these companies are not truly on a path to genuine scientific advancement, calling Silicon Valley's thinking about the future of large language models 'intellectually lazy.' Instead, he and Dr. Cubuk are channeling an earlier era, when industrial labs like Bell Labs and IBM Research treated the physical sciences as a fundamental part of their research.
The AI systems underpinning chatbots like ChatGPT are known as neural networks, a name inspired by the web of neurons in the human brain. These systems process colossal amounts of text from the internet, identifying patterns that allow them to generate humanlike language, and even to write computer code and solve math problems.
Yet, Fedus and Dr. Cubuk firmly contend that merely analyzing countless textbooks and academic papers won’t lead these systems to truly master scientific discovery. For that, they argue, AI technologies must directly engage with and learn from physical experiments conducted in the real world.
Dr. Cubuk highlights that chatbots, much like humans, cannot simply ‘reason for days’ and spontaneously generate a groundbreaking discovery. Real scientific progress, he explains, often involves numerous trial experiments, even if a significant breakthrough isn’t guaranteed.
Periodic Labs has already secured over $300 million in seed funding, notably from venture capital firm a16z. While currently operating from a San Francisco research lab, the company intends to construct its own dedicated facility in Menlo Park, California, where physical robots will conduct scientific experiments on an unprecedented scale.
Here, human researchers will design and oversee the experiments, while AI systems simultaneously analyze both the experimental process and its outcomes. The ultimate aspiration is for these AI systems to autonomously learn and initiate similar experiments.
Much like how neural networks acquire skills by recognizing patterns in vast text datasets, they are also capable of learning from diverse data types, such as images, sounds, and physical movements. Moreover, these systems can process and learn from multiple data streams concurrently.
For example, by analyzing a collection of photographs alongside their descriptive captions, an AI system can decipher the underlying relationships between visual and textual information, learning that the word ‘apple’ corresponds to a round, red fruit.
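The association described above can be illustrated with a deliberately simplified sketch. This is not how production systems work (they use learned neural embeddings, not counting); the dataset, attribute tags and scoring here are all hypothetical, chosen only to show how co-occurrence between caption words and image features lets a system link 'apple' to a round, red fruit.

```python
from collections import Counter, defaultdict

# Toy dataset: each "image" is represented by hypothetical attribute
# tags, paired with a short descriptive caption.
dataset = [
    ({"round", "red", "fruit"}, "an apple on a table"),
    ({"round", "red", "fruit"}, "a ripe apple"),
    ({"yellow", "long", "fruit"}, "a banana in a bowl"),
]

# Count how often each caption word co-occurs with each image attribute.
cooccur = defaultdict(Counter)
for attributes, caption in dataset:
    for word in caption.split():
        for attr in attributes:
            cooccur[word][attr] += 1

# The word "apple" ends up associated with the attributes of the
# images it appeared alongside: round, red, fruit.
print(dict(cooccur["apple"]))
```

A real multimodal model learns a similar correspondence, but from raw pixels and billions of caption pairs rather than hand-labeled tags.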
At Periodic Labs, AI systems will be trained using a combination of scientific literature, hands-on physical experimentation, and iterative refinement of those experiments.
Imagine a robot at Periodic Labs conducting thousands of experiments, meticulously combining various powders and materials. Its goal: to engineer a novel superconductor, a material with immense potential for advanced electrical equipment.
Under the guidance of human scientists, this robot might select promising powders based on existing research, blend them in a flask, heat them in a furnace, rigorously test the resulting material, and then repeat the entire cycle with new combinations of powders.
Through extensive analysis of this scientific trial-and-error process—identifying successful patterns—an AI system could theoretically learn to automate and significantly speed up analogous experiments.
Dr. Cubuk clarifies, ‘It won’t make the discovery on the first try, but it will iterate.’ He explains that by repeating the process relentlessly, they aim to accelerate the pace of breakthroughs.
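The cycle described above, propose a mixture, run the experiment, measure the result, and try again, can be sketched in code. Everything here is hypothetical: the powder labels, the scoring function (a stand-in for a real physical measurement), and the random proposal step (a real system would learn from past results rather than sample blindly).

```python
import random

POWDERS = ["A", "B", "C", "D"]  # placeholder ingredient labels


def run_experiment(mixture):
    """Stand-in for a robotic synthesis-and-measurement step.

    Returns a mock quality score; here we pretend the target
    material needs powders A and C.
    """
    return sum(1.0 for p in ("A", "C") if p in mixture)


def closed_loop_search(trials=100, seed=0):
    """Repeat the propose/test cycle, keeping the best result so far."""
    rng = random.Random(seed)
    best_mixture, best_score = None, float("-inf")
    for _ in range(trials):
        # Propose a new two-powder combination for this trial.
        mixture = rng.sample(POWDERS, k=2)
        score = run_experiment(mixture)
        if score > best_score:
            best_mixture, best_score = set(mixture), score
    return best_mixture, best_score


best, score = closed_loop_search()
print(best, score)
```

As Dr. Cubuk's remark suggests, no single trial finds the answer; the value comes from iterating the loop many times and keeping what works.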
While AI researchers have pondered such concepts for years, the immense computing power and resources required to realize this vision have only recently become accessible.
Even so, building this technology will be difficult and time-consuming. Crafting AI for the physical world is considerably harder than for the purely digital one.
Oren Etzioni, founding chief executive of the Allen Institute for AI, tempers expectations: ‘Is this going to solve cancer in two years? No.’ Yet, he concludes optimistically, ‘But is this a good, visionary bet? Yes.’