Life at the Fringe
This is an attempt of mine to connect the things I studied, mostly Quantum Mechanics and Thermodynamics, with something that interests me a lot, artificial intelligence, on a very basic, philosophical level. It's a difficult topic and I can't be sure that I got everything exactly right, but I am confident about the general message. Many things mentioned here may still need to be properly researched, and references need to be cited.
Firstly, let me tell you why I think it is important to know what life is from a thermodynamic perspective: in our day and age we are designing complex mathematical models which, when trained on the unbelievably large amounts of data we produce every day, are able to make surprisingly accurate predictions about certain things. And then we go ahead and give them names as crazy as "artificial intelligence". And though they are still quite far from reaching a complexity to which I would truly grant the title "intelligence", they are on their way there. Imagine now that, in a future which many of us may live to see, we design such a system, one which incorporates all the functions and analyses required to efficiently structure human life. This is not a discussion of whether such a future would be "good" or "bad", or of such notions as "desirable". It's just quite likely that such a thing may happen. Typically, these AIs need to be trained to perform well on the problem at hand, which means that a "good outcome" or an "accurate prediction" needs to be very well defined. Terms like "loss function" may come to mind for the initiated. So I ask you: what should the AI designed to organise human life be trained for? What is the goal? What is the measure we hope to optimize, so that life in this universe will be enjoyable?
I can't say much about the enjoyability of it, but I think I can give some hints on where the general road of life has been going since it emerged, hoping that this will provide some valuable pointers.
Okay, on to the thermodynamics of it: generally, what we've learned about the world is that only those processes occur which increase the total entropy of the universe. Entropy is a difficult concept to grasp. I'm not sure I fully got it myself, but here's what I think is a very good way to think about it: we are taught that systems try to "minimize their energy". A ball will drop to the floor, negative charge will approach positive charge, a spring tries to stay in its equilibrium position. But we are also taught that the total amount of energy in the universe is fixed. So, if everything in this universe is trying to get rid of energy, but no one has a place to dump it, what should happen? Well, it's going to be evenly distributed. How? Very roughly speaking: for a given temperature, the amount of energy a system will be assigned is determined by its entropy. You may have heard the more popular description of entropy as something like disorder, which is also (fairly) accurate, but gives less of an idea of the implications relevant to this narrative. If you can't quite bring these two pictures together, you now know why entropy is a difficult concept.
But back to the start. Processes in this universe generally occur if they reduce a system's free energy, which is like the energy, but with the entropy times the temperature subtracted. In short: \(F=U-TS\). That is the quantity that really tells you what goes and what doesn't in this world.
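To make this rule concrete, here is a minimal Python sketch. The function name and the input values are my own choices for illustration; the numbers happen to be roughly the textbook values for melting one mole of ice (an energy cost of about 6000 J against an entropy gain of about 22 J/K), which is why the sign of the free-energy change flips near 273 K:

```python
# Sketch: the sign of the free-energy change dF = dU - T*dS decides
# whether a process runs spontaneously. The numbers are illustrative,
# roughly those for melting one mole of ice.

def free_energy_change(dU, dS, T):
    """Free-energy change dF = dU - T*dS at temperature T (kelvin)."""
    return dU - T * dS

dU = 6000.0   # energy cost of the process, J (illustrative value)
dS = 22.0     # entropy gain of the process, J/K (illustrative value)

for T in (200.0, 300.0, 400.0):
    dF = free_energy_change(dU, dS, T)
    verdict = "spontaneous" if dF < 0 else "does not occur"
    print(f"T = {T:5.1f} K  ->  dF = {dF:8.1f} J  ({verdict})")
```

The same process is forbidden at 200 K but allowed at 300 K and 400 K: the energy term dominates at low temperature, the entropy term at high temperature.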
If you want to challenge the boundary of human knowledge, you may ask a theoretical physicist to explain what life is. You may specifically want to see if life can be explained within the concepts of Quantum Mechanics and Thermodynamics. Think of Quantum Mechanics as the theory that tells you about all the possible things that could be, and Thermodynamics as telling you which of these actually are. A way this could be done (had we a near-infinite amount of computing power) would be to calculate the wave function of, say, a cat, and then see if those arrangements of atoms that we would recognize as a cat have the smallest free energy. If that were the case, it would mean that "cat" is stable at room temperature. That's what we do with atoms, molecules and crystals to see whether they will continue to exist forever, or break down over time. I promise you this: the cat will fail the test.
But if cat is unstable at room temperature, why do we continue to see cats everywhere? Because cats don't exist at room temperature! At least not only there. It's really wacky. They exist in a temperature range. In fact, they exist at the fringe between roughly room temperature (293.15 K, 20 °C, 68 °F) and about 5800 K (5526.85 °C, 9980.33 °F). The latter is the temperature of the surface of the sun. And it is the temperature you'd need to heat iron (or anything else) to for it to emit nearly white light. Aaaaand we're back to Quantum Mechanics.
If you study physics, you're introduced to all kinds of crazy shit that is almost impossible for a newbie to reason about, and that is interconnected and intertwined in ways you wouldn't have imagined. I'll introduce you to some of it now.
The surface of the sun is about 5800 K hot. At that temperature, it emits electromagnetic radiation over a broad range of frequencies. Measured per unit wavelength, the largest portion of energy is emitted around a wavelength of 500 nm, which corresponds to a frequency of roughly 600 THz. (If you instead plot the spectrum per unit frequency, the peak sits at about 341 THz; the two maxima don't coincide because the spectrum is a density over different variables.) Light of 500 nm is green.
If you want to know why this is the case, you'll need to understand a lot of physics. The underlying phenomenon is black-body radiation, which resisted explanation around 1900, until Max Planck discovered the underlying mathematics and thereby laid the foundations of Quantum Mechanics.
The point is that the temperature of the sun (or any other material or body) directly determines the amount of energy that is typically available for spontaneous processes. This energy portion is so small that we usually only notice its effect on the atomic scale, but it's also responsible for Brownian motion, the seemingly arbitrary movement of small particles, like dust in a liquid. You can calculate this energy using the formula \(E=\frac{1}{2}k_\text{B}T\) (per degree of freedom), and, since the Boltzmann constant \(k_\text{B}\) is extremely small, so is this energy. But this energy is not only responsible for effects in matter; it also shapes the spectrum of electromagnetic radiation at a given temperature. This spectrum of thermal radiation always has the same shape, but shifts to higher frequencies and shorter wavelengths as the temperature increases. The peak can be calculated with Wien's displacement law: \(\lambda_\text{max}\), the wavelength at which most energy is emitted, is given by \(\lambda_\text{max}=\frac{b}{T}\), where \(T\) is the temperature and \(b=2.8977719~\text{mm K}\) is Wien's displacement constant. We can use it to double-check the values I've mentioned earlier. Assuming a temperature of 5800 K, the formula gives us \(\lambda_\text{max}=\frac{2.8977719~\text{mm K}}{5800~\text{K}}=499.6~\text{nm}\approx500~\text{nm}\), green light.
Here comes the crucial point: on earth, if we assume an average temperature of about 20 °C (68 °F, 293.15 K), then \(\lambda_\text{max}=\frac{2.8977719~\text{mm K}}{293.15~\text{K}}=9885~\text{nm}\), which is no visible color at all. Black is just the absence of visible light, and this \(\lambda_\text{max}\) at earth's temperature is so large that it lies in the infrared, way outside the visible spectrum.
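The two Wien's-law evaluations above can be reproduced in a few lines of Python (the function name is my own choice; the constant is the standard value quoted in the text):

```python
# Sketch: Wien's displacement law, lambda_max = b / T, evaluated for
# the sun's surface and for earth's average surface temperature.

B_WIEN = 2.8977719e-3  # Wien's displacement constant b, in m*K

def peak_wavelength(T):
    """Wavelength (in metres) at which a black body at T kelvin emits most."""
    return B_WIEN / T

sun = peak_wavelength(5800.0)    # ~5.0e-7 m = ~500 nm, green visible light
earth = peak_wavelength(293.15)  # ~9.9e-6 m = ~9885 nm, far infrared

print(f"Sun   (5800 K):   {sun * 1e9:7.1f} nm")
print(f"Earth (293.15 K): {earth * 1e9:7.1f} nm")
```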
Now we need to reverse our thinking. Temperature is what physicists would call a "macroscopic" quantity. Such quantities are derived from averages of "microscopic" quantities. For example, you could not tell the temperature of a single atom, because temperature is given by the average speed of many atoms. Or rather, by their average kinetic energy \(E_\text{kin}\) (in a single direction), which is given by the familiar formula \(E_\text{kin}=\frac{1}{2}k_\text{B}T\). Perhaps it would be more precise to say that \(T=\frac{2\langle E_\text{kin}\rangle}{k_\text{B}}\), which means that the temperature is given by twice the average (denoted by \(\langle...\rangle\)) kinetic energy of an atom, divided by the Boltzmann constant. Similarly, instead of calculating \(\lambda_\text{max}\) at a given temperature, we could measure it in an experiment and use it to define the temperature: \(T=\frac{b}{\lambda_\text{max}}\)!
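This "temperature as a microscopic average" idea can be sketched numerically. In the toy simulation below (all names and the choice of argon atoms are mine), one-dimensional velocities are drawn from the thermal (Gaussian) distribution at a known temperature, and that temperature is then recovered from the mean kinetic energy via \(T=2\langle E_\text{kin}\rangle/k_\text{B}\):

```python
import random

# Sketch: recover a macroscopic temperature from microscopic averages.
# 1D thermal velocities are normally distributed with variance k_B*T/m.

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_ARGON = 6.6335e-26  # mass of one argon atom, kg (illustrative choice)

def sample_temperature(T_true, n=200_000, seed=0):
    """Draw n thermal 1D velocities at T_true, return T from <E_kin>."""
    rng = random.Random(seed)
    sigma = (K_B * T_true / M_ARGON) ** 0.5  # std dev of 1D velocity
    e_mean = sum(0.5 * M_ARGON * rng.gauss(0.0, sigma) ** 2
                 for _ in range(n)) / n
    return 2.0 * e_mean / K_B

print(sample_temperature(300.0))  # close to 300 K, up to sampling noise
```

A single atom gives a wildly fluctuating "temperature"; only the average over many atoms settles near the true value, which is exactly the macroscopic/microscopic distinction above.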
So, if we went outside on a sunny day and decided to measure the temperature by looking at the frequencies of radiation we observed around us, we would measure \(\lambda_\text{max}\) to be about 500 nm, which would tell us that the temperature is around 5800 K (5526.85 °C, 9980.33 °F)!
Some people would say that we have simply made a mistake and should not measure the temperature using this method. But a temperature can only really be defined in thermodynamic equilibrium, in which case we would measure the same temperature no matter which method we chose. And that really is the problem here: since the sun and the earth are so far apart, there is virtually no way to exchange energy, and thus to equilibrate the two, other than radiation.
Another problem is that we are in fact measuring the radiation spectra of both the sun and the earth. I haven't seen the spectrum of a sunny day, but quite likely it has two maxima: one at low frequencies, coming from the radiation emitted by everything around us, and one at high frequencies, from the sun's radiation. So a smart thermometer might be able to distinguish these and actually tell the "correct" temperature.
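The two-peak picture can be checked directly from Planck's law of black-body radiation. The sketch below (my own construction, using the standard per-wavelength form of Planck's law) evaluates the spectral radiance of a 5800 K body and a 293 K body on a logarithmic wavelength grid and locates each peak; they land orders of magnitude apart, which is what a "smart thermometer" could exploit:

```python
import math

# Sketch: Planck spectral radiance per unit wavelength,
# B(lam, T) = (2*h*c^2 / lam^5) / (exp(h*c / (lam*k_B*T)) - 1),
# evaluated for the sun's and the earth's surface temperatures.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance per unit wavelength (W / m^2 / sr / m)."""
    x = H * C / (lam * K_B * T)
    if x > 700.0:  # avoid float overflow; radiance is effectively 0 here
        return 0.0
    return (2.0 * H * C**2 / lam**5) / math.expm1(x)

# Logarithmic wavelength grid from ~32 nm to ~32 um:
grid = [10 ** (-7.5 + 3.0 * i / 2000) for i in range(2001)]

peak_sun = max(grid, key=lambda lam: planck(lam, 5800.0))
peak_earth = max(grid, key=lambda lam: planck(lam, 293.15))

print(f"sun peak   ~ {peak_sun * 1e9:8.1f} nm")    # ~500 nm, visible
print(f"earth peak ~ {peak_earth * 1e9:8.1f} nm")  # ~9885 nm, infrared
```

Both peaks agree with the Wien's-law values quoted earlier, as they must, since Wien's law is derived from exactly this formula.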
But what does all this have to do with the question of whether cats are stable at room temperature? And if they are not stable, how does it explain the presence of cats, which we observe every day?
What's interesting is that cats starve if they don't eat. And what they eat has eaten other things before, which may in turn have eaten other things before that, and so on, until we reach the source of everything edible: plants, which consume sunlight (\(T=5800\) K) for photosynthesis. This shows that the continued existence of cats, and of all life on earth, is only possible with an energy source far outside our own temperature. And while cats would not last long at the origin of that energy, and need earth's low temperature to keep from turning to plasma, they certainly can't exist without a high-temperature energy source. They exist at the fringe between the temperature of the sun and that of the earth.
This leaves us unable to answer the question of whether or not "cat" is stable at room temperature, because we have to acknowledge that there can never be a "pure" temperature in the context of life. What life forms are is a kind of replicating, moving, evolving catalyst, built around equilibrating the sun and the earth. This is the meaning of life from a thermodynamic perspective.
You might argue that this could be achieved more efficiently by covering the earth in black rock, which immediately turns sunlight to heat, but you must never make the mistake of considering thermodynamics alone: there's that b**** called kinetics that decides upon the route taken to reach equilibrium, and the speed of the transformation. And kinetics says that plants can grow over rock, but rock can't grow over plants. Hence the plants, and the ecosystem they support, win.
Let's summarize: we set out to answer the question of what an AI that governs (human) life should optimize. So as not to miss anything important, we started by investigating what humans, and all of life, are at the most fundamental level. What we found is that this question is difficult to answer using the approaches we know from investigating chemicals and crystals, because life cannot be seen as existing at a single temperature, which would be required for our models to work. Instead, we must take into account all of life's complexity, and the fact that it requires energy that ultimately originates at the sun (or perhaps earth's hot core, in some places). We therefore have to extend our view of life and acknowledge that it is existentially tied to the sun, which is very hot. We find that the role life plays in the thermal equilibration of the sun's and earth's surfaces is similar to that of a catalyst, albeit a very complicated one. Now, what of this will be useful as we design our AI? Well, we asked a very fundamental question, and we got a very fundamental answer, which turns out not to be all that new to us: in all we do, if we wish to continue to exist, we must make sure that whatever we do does not exhaust the resources at our disposal. We should therefore try to maximise the amount of energy we can harvest from the sun and minimise the amount we have to spend to achieve our goals. Clearly, this is not news to us, and there are many more requirements we need to fulfill for all of life to work, but this is the most basic rule of them all, and one simple enough that it may be possible to teach it to an AI.