When Physics meets Biology


Luca Mazzucato

William Bialek, from Princeton University, is the distinguished speaker for the second Della Pietra Lecture Series. He is one of the pioneers of Biophysics, and his achievements span from Computational Biology to Neuroscience and Information Theory. He set out to explain to a packed crowd of physicists and mathematicians what this cross-cutting discipline is about, and it is fair to say the audience left with their jaws on the floor.

Are we at the dawn of a new era in biology? Your success in giving quantitative explanations of notoriously complex biological problems suggests a new way of looking at life science…

Physics and Biology have a long-standing relationship of love and awe. The usual objection physicists raise is that biology, with all its tiny bits of flesh, blood, and chemicals, is fundamentally messy. Not so long ago, the best estimates you could hope for were good to within a factor of two. In my lectures, I hope I convinced you that the real story could be very different – in fact, we now know it is. Not only has it become clear that the most diverse biological processes, from gene regulation to bacterial behavior, require a high degree of precision; it is also true that we can understand many aspects of biology using first-principles physical approaches. Statistical mechanics and information theory recast what seems at first a dirty business into a precise mathematical framework – and give predictions, with 10% accuracy.

What is driving this new paradigm shift?

We are in a new data-driven era, in which experiments we could only dream of a decade ago are finally possible. For the first time, we can keep track of the simultaneous behavior of hundreds of neurons, or hundreds of birds in a flock, or hundreds of cells inside a fruit fly egg. We have recombinant DNA techniques that switch single genes on and off one at a time, attaching a fluorescent dye to the genes we express – which gives us a lot of cool pictures too! These experimental breakthroughs give us the chance to finally put to the test theories that were once regarded, at best, as mere metaphorical descriptions.

What kind of ideas from Physics carry over to Biology?

It was the early '80s when John Hopfield suggested that networks of neurons in the brain could be modeled as a magnet, or a spin glass. In his view, the analogy with the spin glass could serve as inspiration to attack the problem of associative memory. This idea has always been in the back of our minds as we developed models of neuronal interactions, but hardly anybody took it literally. One reason it remained just a metaphor is that, in statistical mechanics, your formulae apply only when you have a very large number of elements in the game – the so-called thermodynamic limit. Most of what we know about neurons in the brain has been found by recording one neuron at a time. It's sort of remarkable that we got anywhere, given that there are a hundred billion neurons up there. Recording just a few of them is very far from what you would need if you set out to test the thermodynamic limit of the theory. But in the last few years experimentalists have come a very long way. In our current experiments, we are using a micro-electrode array whose recording units are so closely packed that they can reach every single neuron in a tiny area. Two hundred neurons overall – still not there yet, but much closer to the thermodynamic limit than just five.
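Hopfield's magnet analogy can be made concrete in a few lines. The sketch below is an illustration added for the reader (not code from the lectures): each neuron is a spin s_i = ±1, a pattern is stored in the couplings with the Hebbian rule W = x xᵀ/N, and a corrupted version of the pattern relaxes back to the stored memory.

```python
import numpy as np

# Minimal Hopfield associative memory (illustrative sketch).
# Neurons are spins s_i = +1/-1; a pattern x is stored via the
# Hebbian rule W = x x^T / N with no self-coupling.

def train(patterns):
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no neuron talks to itself
    return w

def recall(w, state, steps=10):
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(w @ s)   # each spin aligns with its local field
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=(1, 64))
w = train(pattern)

noisy = pattern[0].copy()
noisy[:8] *= -1          # corrupt 8 of the 64 spins
restored = recall(w, noisy)
print(np.array_equal(restored, pattern[0]))  # → True: the memory is recovered
```

With a single stored pattern and 12% corruption, one update step already realigns every spin with the stored memory; the interesting physics appears when many patterns compete, which is where the spin-glass language earns its keep.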

What kind of experiment?

There was a push over the years to record more neurons at the same time. Here we take a slice of the retina, containing about two hundred neurons, and place it on an array of electrodes. It is a salamander retina; this slice is responsible for seeing a little patch of the world, if you are a salamander. At the beginning, I was interested in checking how well I could approximate the correlations among real neurons using a statistical mechanics model built on the maximum entropy principle. This was inspired, of course, by the Hopfield model. I was initially looking for discrepancies between the maximum entropy model and the measurements from real neurons. But the more we looked for differences, the more we realized there were none! So we concluded that the maximum entropy model, with its neurons as little magnets pointing up or down, gives accurate predictions of the correlations among neurons in the salamander retina. It is getting close to what physicists would dare call a “theory”!
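The maximum entropy construction for spiking neurons is exactly an Ising model: the least-structured distribution over up/down spins consistent with each neuron's firing rate and each pair's correlation. The toy below (a sketch for illustration, not the actual retina analysis; the synthetic "spike" data stand in for real recordings) fits such a pairwise model for a handful of neurons by exact enumeration of all spike words.

```python
import itertools
import numpy as np

# Toy pairwise maximum entropy (Ising) fit. For a few neurons we can
# enumerate all 2^n spike words and adjust fields h_i and couplings
# J_ij by gradient ascent until the model matches the measured means
# <s_i> and pairwise correlations <s_i s_j>.

def fit_ising(data, lr=0.1, steps=5000):
    n = data.shape[1]
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    m_emp = data.mean(0)                      # empirical <s_i>
    c_emp = data.T @ data / len(data)         # empirical <s_i s_j>
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        # Boltzmann weights p(s) ~ exp(h.s + 1/2 s.J.s)
        logits = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        m_mod = p @ states
        c_mod = states.T @ (states * p[:, None])
        h += lr * (m_emp - m_mod)             # match the means
        dJ = lr * (c_emp - c_mod)             # match the correlations
        np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J, states, p

# Synthetic correlated "spikes": 4 neurons driven by a shared input.
rng = np.random.default_rng(1)
common = rng.standard_normal((400, 1))
data = np.sign(common + rng.standard_normal((400, 4)))

h, J, states, p = fit_ising(data)
m_mod = p @ states
c_mod = states.T @ (states * p[:, None])
print(np.max(np.abs(m_mod - data.mean(0))))              # tiny residual
print(np.max(np.abs(c_mod - data.T @ data / len(data))))
```

The real experiments involve far more neurons, where exact enumeration is impossible and Monte Carlo or mean-field machinery takes over; the point of the sketch is only that "neurons as little magnets" is a fully concrete, fittable model, not a metaphor.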

The new paradigm you are testing is the fact that living systems are not just described by statistical physics, but they are in fact at a critical point. What is criticality in life science?

Quite amazingly, the same maximum entropy construction, borrowed from statistical physics, can be used to explain very different phenomena of life – among them, the firing patterns of neurons in the retina and the fluid motion of a flock of birds. The crucial ingredient is the collective dynamics of a large number of units: neurons, birds, what have you. Now, if the world can be described by statistical physics, the next question is: Where are we in the phase space of this model?

We are at a critical point in the space of parameters. This is the incarnation of an idea that has been floating between the life sciences and physics for decades: self-organized criticality. The idea arose not too far from here: Per Bak was at Brookhaven when he wrote his influential papers and his book “How Nature Works”, proposing that living systems are poised at a critical point. This was marvelous and provocative, and not terribly successful. For many years, because we could not reach the level of experimental precision required to test it, the idea remained just a curiosity – more of a metaphor than a theory. Self-organized criticality died a whimpering death, not because somebody checked and found it wrong, but because it was never clear what quantity you would compute from the data to test the theoretical idea. What has changed since then is that we finally understand what is required to check the theory. And it works. The next question is: are we really standing at a critical point, or just very close to it?
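Per Bak's canonical toy is the sandpile: grains are dropped one at a time, any site holding four or more grains topples onto its neighbors, and the pile tunes itself to a stationary state with avalanches of all sizes – no one sets a knob to the critical value. A minimal sketch (an illustration of the Bak–Tang–Wiesenfeld model, not code from the lectures):

```python
import numpy as np

# Bak-Tang-Wiesenfeld sandpile: the toy model behind
# self-organized criticality (illustrative sketch).

def relax(grid):
    """Topple until every site holds < 4 grains; return the avalanche size."""
    size = 0
    while True:
        over = np.argwhere(grid >= 4)
        if len(over) == 0:
            return size
        for r, c in over:
            grid[r, c] -= 4
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                    grid[rr, cc] += 1  # grains falling off the edge are lost

rng = np.random.default_rng(0)
grid = np.zeros((20, 20), dtype=int)
sizes = []
for _ in range(5000):
    r, c = rng.integers(0, 20, size=2)
    grid[r, c] += 1           # drop one grain at a random site
    sizes.append(relax(grid)) # record how big an avalanche it triggers

print(grid.max() < 4)  # → True: the pile self-organizes into a stable state
```

After enough drops, the avalanche-size distribution develops the heavy, roughly power-law tail that is the signature of criticality – and the modern question in biology is precisely whether quantities computed from real data (neurons, flocks) show the analogous signatures.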

We’ll stay tuned to find out! Let’s go back to the physicist/biologist dilemma. You started out your studies picking physics. When did you get interested in biology?

I knew I was going to work in biology from the very beginning. I would sit in physics seminars and be amused by the mathematics and formalism the speaker used to attack a certain problem, but then I'd get bored and think: “Am I really going to go home and do this computation?” I guess I was not thrilled by the specific topics at hand, but fascinated by the problem-solving mentality and the tools used to solve those problems. At the same time, I would sit in biology seminars and be carried away by the sheer beauty of the questions asked and the grand scheme of things. On the other hand, the messiness of the details would leave me unsatisfied. I guess you could say that I took the best of both worlds! Or, maybe, that I misunderstood both.

We have seen that the relationship between physics and biology flows decidedly from the former to the latter. But is it reciprocal?

I think biology contributes a great deal to physics, but in a different way. It helps us figure out which questions are the relevant ones, and it leads us to a sharper understanding of the methods we use. Let me give you the example of protein folding. We have a statistical model with many parameters, and physicists like to study these systems for very large or very small values of those parameters. However, because we are applying this particular model to biology, we need to understand it in a completely different regime: it only works for protein folding when the parameters lie in an intermediate range, and there it unveils new properties that we could not even have imagined before – we simply didn't know where to look. An example of this interaction is on the arXiv: what used to be known as the “Statistical mechanics and disordered systems” section is now called “Disordered systems and neural networks”. The shift is remarkable, and in a sense it's telling us that this is all just one big field.