This past summer, Meta CEO Mark Zuckerberg extended an invitation to Rishabh Agarwal to join the company’s new AI research division, promising him a multimillion-dollar compensation package in stock and salary.
Zuckerberg’s vision for this new lab was to forge ‘superintelligence’—a technology he believed could one day surpass the capabilities of the human brain. Despite the uncertainty surrounding its creation, he pressed Dr. Agarwal to take a leap of faith, emphasizing that in a rapidly evolving world, the greatest risk is often taking no risk at all.
However, Dr. Agarwal, already a Meta employee, chose to decline the offer and instead joined an entirely different company.
Dr. Agarwal is one of more than 20 researchers who, in recent weeks, have departed their high-profile roles at Meta, OpenAI, Google DeepMind, and other major AI initiatives. Their destination: Periodic Labs, a fledgling Silicon Valley startup. Many of these experts have forgone tens, if not hundreds, of millions of dollars to embark on this new path.
While established AI labs pursue abstract ambitions like ‘superintelligence’ and ‘artificial general intelligence,’ Periodic Labs is charting a more defined course. Their focus is on developing AI that can accelerate tangible scientific discoveries across fields such as physics and chemistry.
Liam Fedus, one of the startup’s co-founders, articulated their philosophy: ‘The primary goal of AI isn’t to automate white-collar tasks; it’s to accelerate scientific progress.’
Fedus was a key member of the small OpenAI team responsible for creating the groundbreaking online chatbot ChatGPT in 2022. He left OpenAI in March to establish Periodic Labs alongside Ekin Dogus Cubuk, formerly of Google DeepMind, Google’s leading AI research arm.
Several prominent AI companies are already engaged in projects aimed at speeding up scientific discovery. For instance, two researchers at Google DeepMind recently shared a Nobel Prize for AlphaFold, an AI system that predicts the shapes of proteins and could help streamline drug discovery.
Industry leaders frequently assert that large language models, the sophisticated technologies powering chatbots, are on the cusp of delivering monumental scientific breakthroughs. OpenAI and Meta, for example, claim their technologies are already making strides in areas like drug discovery, mathematics, and theoretical physics.
‘We firmly believe that advanced AI can propel scientific discovery forward at an unprecedented pace, and OpenAI is uniquely positioned to lead this charge,’ stated OpenAI spokesperson Laurance Fauconnet.
However, Fedus challenges this assertion, arguing that the current approaches of large companies won’t lead to genuine scientific discovery. He criticizes Silicon Valley for being ‘intellectually lazy’ in its long-term vision for large language models. Fedus and Dr. Cubuk envision a return to an earlier era of tech research, reminiscent of Bell Labs and IBM Research, where the physical sciences were central to the mission.
The AI systems powering chatbots like ChatGPT are known as neural networks, a name inspired by the intricate web of neurons in the human brain. These systems analyze vast quantities of text data from the internet, identifying patterns that enable them to emulate human language. They can even learn to write computer programs and solve complex mathematical problems.
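At a toy level, that kind of pattern-finding can be sketched in a few lines of Python. The character-level model below simply counts which letters tend to follow which in a tiny sample of text and then uses those counts to generate new text; it is an illustration of the principle only, and nothing like the neural networks that power ChatGPT.

```python
import random
from collections import defaultdict

# Toy illustration only: a character-level bigram model. Real chatbots use
# vastly larger neural networks, but the core idea is similar: learn which
# patterns tend to follow which, then use those statistics to generate text.

corpus = "the cat sat on the mat. the cat ate the rat."

# Count how often each character follows each other character.
transitions = defaultdict(lambda: defaultdict(int))
for current_char, next_char in zip(corpus, corpus[1:]):
    transitions[current_char][next_char] += 1

def generate(start: str = "t", length: int = 40) -> str:
    """Generate text by repeatedly sampling a likely next character."""
    out = start
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        chars, counts = zip(*followers.items())
        out += random.choices(chars, weights=counts, k=1)[0]
    return out

print(generate())
```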
Yet, Fedus and Dr. Cubuk contend that no matter how many textbooks or academic papers these systems digest, they cannot truly master scientific discovery. To achieve this, they believe AI technologies must also learn through hands-on physical experiments in the real world.
‘A chatbot cannot simply reason for days and arrive at an incredible discovery,’ Dr. Cubuk remarked. ‘Humans don’t achieve that either. They conduct numerous trial experiments before stumbling upon something remarkable—if they do at all.’
Periodic Labs has secured over $300 million in seed funding from venture capital firm a16z and other investors. The company has begun its operations at a San Francisco research lab, but plans to establish its own facility in Menlo Park, California, where physical robots will conduct scientific experiments on an unprecedented scale.
The company’s researchers will design and oversee these experiments. As the experiments unfold, AI systems will analyze both the methods and the outcomes. The ultimate aspiration is for these systems to learn from that process and initiate similar experiments on their own.
Just as neural networks acquire skills by identifying patterns in massive text datasets, they can also learn from diverse forms of data, including images, sounds, and physical movements. Furthermore, they possess the capacity to learn from multiple data types concurrently.
For instance, by analyzing a collection of photographs alongside their descriptive captions, an AI system can grasp the relationships between visual information and linguistic descriptions. It can learn that the word ‘apple’ corresponds to a round, red fruit.
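The sketch below illustrates the underlying idea with made-up numbers: images and captions are represented as vectors in a shared space, and each image is paired with the caption whose vector points in the most similar direction. Contrastive systems such as CLIP learn those vectors from millions of real image-caption pairs; the embeddings here are invented purely for illustration.

```python
import numpy as np

# Toy sketch of image-text matching. The vectors below are fabricated;
# real systems learn them from huge numbers of image-caption pairs.
image_embeddings = {
    "photo_of_apple.jpg":  np.array([0.9, 0.1, 0.0]),
    "photo_of_banana.jpg": np.array([0.1, 0.8, 0.2]),
}
caption_embeddings = {
    "a round, red fruit":   np.array([0.8, 0.2, 0.1]),
    "a long, yellow fruit": np.array([0.2, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two vectors: 1.0 means they point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# For each image, pick the caption whose embedding is most similar.
for image, img_vec in image_embeddings.items():
    best = max(caption_embeddings, key=lambda c: cosine(img_vec, caption_embeddings[c]))
    print(f"{image} -> {best!r}")
```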
At Periodic Labs, AI systems will integrate knowledge from scientific literature, direct physical experimentation, and iterative efforts to refine and enhance these experiments.
For example, a robot might perform thousands of experiments, combining various powders and materials in an endeavor to synthesize a novel superconductor, a material that conducts electricity without resistance and could transform electrical equipment.
Under the guidance of the company’s scientific team, the robot could select promising powders based on existing literature, blend them in a laboratory flask, heat them in a furnace, test the resulting material, and then repeat the entire sequence with different combinations.
After rigorously analyzing this extensive process of scientific trial and error—identifying the patterns that lead to successful outcomes—an AI system could, in theory, learn to automate and significantly accelerate similar experimental endeavors.
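A heavily simplified sketch of such a loop might look like the following. Everything in it is hypothetical: run_experiment stands in for a robot blending, heating, and testing a sample, and the scoring is invented; it does not reflect Periodic Labs’ actual methods.

```python
import random

# Hypothetical closed experimental loop: propose a mixture, "run" the
# experiment, keep what worked, and perturb it to try again.

INGREDIENTS = ["powder_a", "powder_b", "powder_c", "powder_d"]

def propose_mixture() -> dict:
    """Pick random proportions for each ingredient (a naive starting guess)."""
    weights = [random.random() for _ in INGREDIENTS]
    total = sum(weights)
    return {name: w / total for name, w in zip(INGREDIENTS, weights)}

def perturb(mixture: dict, scale: float = 0.05) -> dict:
    """Nudge a promising mixture slightly and renormalize the proportions."""
    noisy = {k: max(v + random.uniform(-scale, scale), 0.0) for k, v in mixture.items()}
    total = sum(noisy.values()) or 1.0
    return {k: v / total for k, v in noisy.items()}

def run_experiment(mixture: dict) -> float:
    """Fictitious stand-in for blending, heating, and testing a sample."""
    target = {"powder_a": 0.5, "powder_b": 0.3, "powder_c": 0.1, "powder_d": 0.1}
    return -sum((mixture[k] - target[k]) ** 2 for k in INGREDIENTS)

best_mixture, best_score = propose_mixture(), float("-inf")
for trial in range(1000):
    candidate = perturb(best_mixture) if random.random() < 0.8 else propose_mixture()
    score = run_experiment(candidate)   # in reality: hours of lab work per trial
    if score > best_score:              # keep a record of what worked best
        best_mixture, best_score = candidate, score

print(best_score, best_mixture)
```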
‘It won’t achieve the discovery on the first attempt, but it will iterate,’ Dr. Cubuk explained. ‘Through countless iterations, we anticipate reaching breakthroughs much faster.’
AI researchers have explored similar concepts for many years. However, the immense computing power and other essential resources required to undertake such a colossal effort have only recently become accessible.
Nevertheless, developing this kind of technology remains an extremely challenging and time-intensive undertaking. Building AI in the purely digital realm is considerably simpler than working within the complexities of the physical world.
‘Is this going to cure cancer in two years? No,’ commented Oren Etzioni, the founding CEO of the Allen Institute for AI. ‘But is it a sound, forward-thinking investment? Absolutely, yes.’