Surely you are serious, Mr. Feynman
“What would our librarian at Caltech say, as she runs all over from one building to another, if I tell her that, ten years from now, all of the information that she is struggling to keep track of—120,000 volumes, stacked from the floor to the ceiling, drawers full of cards, storage rooms full of the older books—can be kept on just one library card!” - Richard Feynman, There's Plenty of Room at the Bottom (lecture)
This is what we are building at BioCompute, and it's incredibly fascinating that the physicist Richard Feynman predicted it in 1959, nearly two decades before the first mass-market personal computers arrived and the first DNA sequencing methods were developed (interestingly, both happened in 1977).
Imagine how many people in the audience might have rolled their eyes at Feynman as he spoke about miniaturization down to the nano and pico scales, manipulating the world at the atomic level, and the need to draw on cellular biology to understand how we can pack so much data into so little space (we now know that roughly 200 petabytes, i.e. 200,000 TB, of data can fit into a gram of DNA). That was probably the first time in human history that all of these exciting ideas and their confluence were spoken of in the same room, let alone by the same person. The best part is that Feynman didn't care; he was just having fun (a philosophy I am now trying to embrace as I step into real-ifying sci-fi).
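As a quick sanity check on that DNA figure, here is a rough back-of-envelope sketch in Python; the 2-bits-per-nucleotide encoding and the ~330 g/mol average nucleotide mass are my own working assumptions, not numbers from the lecture or from the post.

```python
# Back-of-envelope: how much data could a gram of DNA hold in principle?
AVOGADRO = 6.022e23        # nucleotides per mole
NT_MASS_G_PER_MOL = 330.0  # approximate mass of one single-stranded nucleotide
BITS_PER_NT = 2            # A/C/G/T maps naturally onto 2 bits

nucleotides_per_gram = AVOGADRO / NT_MASS_G_PER_MOL   # ~1.8e21 nucleotides
bits_per_gram = nucleotides_per_gram * BITS_PER_NT    # ~3.7e21 bits
petabytes_per_gram = bits_per_gram / 8 / 1e15

print(f"theoretical ceiling: ~{petabytes_per_gram:,.0f} PB per gram")
```

The ceiling comes out in the hundreds of thousands of petabytes per gram; practical demonstrations need error correction, physical redundancy and synthesizable sequences, which is roughly why the working figure quoted above sits near 200 PB per gram.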
Here are three insights from Feynman's lecture 'There's Plenty of Room at the Bottom' that excite me:
Miniaturization is the key to better computation
Feynman invites the audience into a thought experiment about simulating the human brain with a computer, comparing how much space and energy each would consume. For a computer of that era (the era of the PDP-1, by the way) to perform the exact same tasks as a human brain, it would require too much material (and P.S., supplies of chip-grade silicon and the capacity to fabricate it are already under strain), consume too much energy (data centres today account for roughly 2% of the world's electricity use, a share that is set to grow with the AI boom) and take far too long, because data has to travel a finite distance and, by the laws of physics, that takes a finite amount of time.
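To put rough numbers on that last point, here is a minimal sketch; the machine sizes are illustrative choices of mine, not Feynman's.

```python
# How long does a signal take just to cross the machine, at best?
C = 3.0e8  # speed of light in m/s, an upper bound on how fast information travels

for label, metres in [("room-filling computer", 3.0),
                      ("desktop tower", 0.3),
                      ("fingernail-sized chip", 0.03)]:
    crossing_ns = metres / C * 1e9   # one-way light travel time in nanoseconds
    ceiling_ghz = 1 / crossing_ns    # clock ceiling if every step spanned the machine
    print(f"{label:>22}: ~{crossing_ns:.2f} ns per crossing -> at most ~{ceiling_ghz:.1f} GHz")
```

Shrinking the machine by 10x raises that ceiling by 10x, which is the whole argument in one line.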
“The information cannot go any faster than the speed of light—so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.”
And that is exactly what we have gone on to do as personal computers became faster. Now companies like Lightmatter are leveraging photonics to make chips faster and better suited to AI workloads (and, I believe, capable of powering quantum computers in the future). With Moore's law slowing down, we need alternative ways to write, read and store data, and it's exciting to be building a company at the cusp of this paradigm shift.
If you want to read something, you can read it.
Feynman talks about a hypothetical (but entirely plausible) scenario where you store all of the content of the Encyclopaedia Britannica on the head of a pin. Now how do you read it back? One way would be to use an electron microscope. But back in the day, 'electron microscopes were 100 times too poor', and people argued ('using theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so') that the resolving power of the time was already at its theoretical maximum.
Here is where Feynman chips in and points out that the theoretical maximum hinges on certain key assumptions, one of them being the need for an axially symmetrical stationary field. What if we scrapped that assumption? We could build electron microscopes good enough to read the encyclopaedia printed on the pinhead.
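A quick way to see that the limit was about lens assumptions rather than fundamental physics: the de Broglie wavelength of the imaging electrons is already tiny. A rough sketch (the 100 kV accelerating voltage is a typical value I picked, and the formula ignores relativistic corrections):

```python
import math

# De Broglie wavelength of an electron accelerated through V volts
# (non-relativistic approximation, good enough for a rough estimate).
H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
Q_E = 1.602e-19  # elementary charge, C

V = 100e3  # accelerating voltage in volts
wavelength_pm = H / math.sqrt(2 * M_E * Q_E * V) * 1e12
print(f"electron wavelength: ~{wavelength_pm:.1f} pm")  # ~3.9 pm, versus ~200 pm atomic spacings
```

Diffraction is nowhere near the bottleneck at that wavelength; the 1959-era limit came from the aberrations of axially symmetric static lenses, which is exactly the assumption Feynman suggested dropping (and which aberration-corrected microscopes later did drop on their way to atomic resolution).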
There is an underlying philosophy that stands out in this example: building tools with exponential capabilities is possible, provided the targets we want to achieve are not beyond the physical constraints of the universe and we are willing to challenge some fundamental mental models. This belief is the cornerstone of enabling biocomputation in the near future.
Entering the Quantum Realm
Feynman pointed out that miniaturization is not as simple as taking a big object and making extremely small replicas of it: a large magnet made of millions of tiny domains is very different from a single tiny magnet, because the laws of physics operate differently in the macroscopic and quantum worlds. He believed that being able to manipulate matter at the atomic level would let us design new materials, build circuits (which is what we are doing with quantum computers) and then control chemical synthesis atom by atom, much like what happens, at a high level, inside a biological cell (during ATP synthesis, respiration and so on). Tapping into atoms lets us use a big chunk of the room at the bottom, while also making our lives easier.
This is where I envision an intersection of biomolecules and quantum computing. Quantum computing relies on inherently probabilistic behaviour, which echoes phenomena we see in biochemistry such as Brownian motion, optical rotation and isomerism (and the cool way these compounds flip between one isomeric form and another at random points in time). Could we then use biomolecules to build quantum hardware? Or, conversely, use quantum devices to design and print biomolecules?
If you are a quantum-realm enthusiast, please send some interesting reading material my way; let's catch up and talk about the role you think biomolecules could play in driving the supercomputers of the future. The coffee (or tea, or milk(shake)) is on me!