
The next internet moment for science

AI is transforming science into a scalable, parallel process: automating experiments, analysis, and discovery to accelerate knowledge by orders of magnitude.

By Nick Edwards, PhD

Science is hard. It breaks people. In my third year as a PhD student, none of my experiments were working. At one point, my advisor even asked me if I was sure I wanted to stay in science.

Fortunately, I pushed through. Over the next two years, everything turned around: I took up a completely new project, mastered patch-clamp electrophysiology, finished my dissertation, and published a paper in Nature Neuroscience.

This is hardly a unique story for early-career scientists - trust me, I've interviewed over a hundred scientists on my podcast. I've thought a lot about why those first few years of grad school are so rough. What are the lessons that made my turnaround possible, and how do they apply to scientific research in general?

Science is not scalable

The fundamental problem holding back scientific productivity is that science is limited to the capabilities and scale of individual scientists. It takes multiple iterations to run even a single, well-controlled experiment. We try to improve efficiency by working together in organizations, but this doesn't scale well.

As humans, we are also constrained in how much knowledge we can acquire in a lifetime, causing us to become progressively more specialized. We dig deeper and deeper into a narrowly defined set of literature and tools. It's impossible to stay on top of all the highly relevant information and methods, let alone everything tangentially relevant - the interdisciplinary territory where many important discoveries originate.

In 1989, the World Wide Web was invented at CERN so that scientists could easily share information and collaborate across disciplines. What they built not only enabled scientific collaboration, but fundamentally changed society.

I believe we're at the next internet moment in terms of scientific discovery, marked by parallelization of experiments and unprecedented collaboration.

Old dog, new tricks

Science moves forward along multiple axes. On one front is methodology: the tools and techniques we've created to investigate natural phenomena. From Galileo's early telescopes to gene editing, technology has made it possible to better observe, model, and understand the world. This is accelerating rapidly. Over the last decade, we've seen significant advancements in frontier AI models, structured scientific environments, and lab robotics.

Meanwhile, AI has led to virtually no changes in epistemology - how we interrogate the world. The scientific method remains the same. But AI opens the door not only to dramatic advances in measuring and interrogating data, but also to scaling up the use of the scientific method itself. It's now possible to create autonomous systems that generate, test, and refine scientific hypotheses.

The future

What we really want is an operating system for science. Operating systems allow you to run tasks in parallel. This enables fast feedback loops and the ability to multitask, two fundamental challenges holding back progress in biology. We've slowly adopted lab robotics to address this, but it's difficult to design experiments that scale and fully close the loop. A scientific operating system provides access to the right datasets and tools for making new discoveries, and it facilitates collaboration between human and AI scientists.

We define an AI scientist as a tool - or perhaps a technique - for automating the mechanics of discovery. We believe that automation leads to speed and scale, and that scale can accelerate science by an order of magnitude. The AI scientist does not change our ways of knowing. It still needs the same elements as a human scientist: building on the foundation of previous research, producing experimental data, and analyzing the evidence through statistics.

An AI scientist still reviews literature. An AI scientist still writes experimental protocols. It still needs to work in the wet lab. It still needs to analyze data. But it can read, assess, and pattern-match across millions of papers in a way that is not tractable for a human. It can do hours of work with minimal intervention. And it can run down dozens of avenues for exploration simultaneously.

The inputs for AI scientists can be hypotheses, data, and vague ideas. The outputs are experimental data, interpretations, and next steps. In this world, the AI scientist is enabling human collaborators to make discoveries, but also making discoveries itself.
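That loop - hypotheses in; experimental data, interpretations, and next steps out, across many avenues at once - can be sketched in a few lines. This is an illustrative toy, not Potato's actual system: `run_experiment` here is a placeholder for real protocol design, wet-lab execution, and analysis, and every name in it is an assumption for the sake of the sketch.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Result:
    hypothesis: str
    evidence: float   # toy effect size standing in for real measurements
    next_step: str    # the AI scientist's proposed follow-up

def run_experiment(hypothesis: str) -> Result:
    # Placeholder for protocol design, wet-lab work, and statistical analysis.
    evidence = float(len(hypothesis) % 5) / 4.0
    next_step = "replicate" if evidence > 0.5 else "revise hypothesis"
    return Result(hypothesis, evidence, next_step)

hypotheses = [
    "Gene X upregulates pathway Y",
    "Compound A inhibits enzyme B",
    "Stimulus C alters firing rate in region D",
]

# Explore many avenues simultaneously instead of one at a time.
with ThreadPoolExecutor(max_workers=len(hypotheses)) as pool:
    results = list(pool.map(run_experiment, hypotheses))

for r in results:
    print(f"{r.hypothesis}: evidence={r.evidence:.2f} -> {r.next_step}")
```

The design point is the `pool.map`: a human scientist works through hypotheses serially, while an automated system can fan them out in parallel and close the loop on each one independently.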

Trust, reproducibility, and interpretability

At this stage, machines are much more fallible than humans. Distrusting humans is already a key part of how we do science. We use peer review, replication studies, reproducible methods, and controls to properly weight "discovered" knowledge. We've also learned that there's tremendous value in working with collaborators to check our work, to inspire new ideas, and to help us better match evidence to conflicting models of the world.

We should continue to be skeptical of both humans and machines. The future of science is an active collaboration. AI shouldn't replace scientists; it should collaborate with us and scale our judgment.

Why it matters

The ideal upside of automation is the compression of research timelines. A 10x increase in throughput - compounded, as each round of discovery accelerates the next - could compress a century of knowledge production into well under a decade. The world of the year 2150, but in 2035.
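The arithmetic behind that claim can be made concrete with a toy model. The numbers here are assumptions for illustration, not forecasts: a 10x starting throughput and a 30% annual compounding rate as discovery feeds back into itself.

```python
# Toy model: how long to accumulate 100 "baseline years" of knowledge?
# Assumptions (illustrative only): throughput starts at 10x the baseline
# rate and compounds 30% per year as discoveries accelerate discovery.
rate = 10.0        # baseline-years of knowledge produced per calendar year
total = 0.0        # baseline-years accumulated so far
years = 0

while total < 100.0:   # one century of baseline knowledge production
    total += rate
    rate *= 1.3        # compounding: this year's discoveries speed up next year's
    years += 1

print(f"A century of knowledge in {years} calendar years")
```

Under these assumptions the loop finishes in 6 years - "well under a decade". Without compounding, 10x throughput alone would take exactly 10 years, which is why the feedback loop matters.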

We're still in the early days. But if we get this right, science itself starts to look different: not a sequence of isolated projects, but a continuous, massively parallel process of discovery. That's the very exciting future an AI Scientist points us toward.

Contact Us

Interested in piloting Potato? Have a partnership idea?

We'd love to hear from you. hello@potato.ai