On digital twins, precision medicine, and what happens when we apply machine mathematics to systems that are not machines
There is a quiet crisis at the center of two of the most ambitious scientific programs of the 21st century. The first is precision medicine — the project of tailoring treatment to the specific biology of the individual patient rather than to population-level averages. The second is planetary stewardship — the project of managing human interactions with the Earth's climate and ecosystems with the kind of precision that their fragility demands. Both programs are underway. Both are producing results. And both are running into the same wall, for the same reason.
The mathematics being used to build them was not designed for the systems they are trying to model.
The machine problem
When engineers build a digital twin of a jet engine, the mathematics is straightforward in principle: model the components, specify their interactions, couple them through well-defined interfaces, and simulate. The jet engine is a machine — closed, decomposable, purpose-built. The mathematics of machines works because machines are designed to be modeled.
A human body is not a jet engine. Neither is the Earth. Both are what biologists and systems theorists call living systems — open to their environments in ways that continuously alter not just their state but their operational structure; defined by networks of relationships rather than by the intrinsic properties of their components; capable of generating qualitative novelty that was not present in their initial conditions; and fundamentally constituted by context in ways that make the same input produce entirely different outcomes depending on where and when it arrives.
The digital twin programs for both the body and the Earth have inherited the mathematics of machines. They decompose their subjects into components, model each component with differential equations, and couple the components through interface conditions negotiated by specialists from different disciplines. This works well under calibration conditions — in regimes close to those used to build the model. It fails systematically at the boundaries: at tipping points, at disease bifurcations, at the windows of therapeutic or ecological leverage where the most consequential decisions have to be made.
"The failure is not computational. Both programs have access to enormous computing power and richer data than ever before. The failure is mathematical — a mismatch between the formalism and the system."
Three frameworks, three dimensions
My new white paper, Towards a Mathematics of Living Systems, argues that three mathematical traditions together begin to define what an adequate mathematics of living systems must look like — each addressing a different dimension of the problem that machine mathematics cannot handle.
Chaos theory provides the tools to represent instability and qualitative transition. Its formal apparatus — phase space analysis, Lyapunov exponents, bifurcation theory — tells us where a complex system is most sensitive, where prediction breaks down irreversibly, and where the thresholds lie between qualitatively different regimes of behavior. For a body twin, this means formally representing the bifurcation dynamics of cardiac arrhythmia, epileptic seizure, and cancer progression. For an Earth twin, it means mapping the proximity of the current climate system to its tipping points — the AMOC collapse threshold, the Amazon dieback transition, the ice sheet destabilization boundary. Chaos theory is indispensable for identifying where the dangers are. Its limitation is that it does not tell us how to act in response.
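The diagnostic role chaos theory plays here can be made concrete with the most minimal example available — not anything from the white paper itself, just the textbook logistic map. A positive Lyapunov exponent flags the regime where prediction breaks down; a negative one flags a stable regime:

```python
# Toy illustration: estimating a Lyapunov exponent for the logistic map,
# x_{n+1} = r * x_n * (1 - x_n). This is a stand-in for the kind of
# sensitivity diagnostic the text describes, not a model of any real system.
import math

def lyapunov_logistic(r, x0=0.4, n_transient=500, n_iter=5000):
    """Average log-derivative along the orbit: lambda = <ln|r(1 - 2x)|>."""
    x = x0
    for _ in range(n_transient):        # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(2.5))   # stable regime: negative exponent
print(lyapunov_logistic(3.9))   # chaotic regime: positive exponent
```

Sweeping `r` and watching the exponent change sign is the one-dimensional analogue of mapping a system's proximity to a tipping point — the same question the Earth twin must answer for the AMOC or the Amazon, in vastly higher dimension.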
Applied category theory provides the tools to represent the objects and relationships of a complex system with enough precision that models from different disciplines can be formally assembled without conceptual error. In current multi-component models — physiological or Earth system — sub-models developed by specialists in different fields are coupled through informal conventions. This works near calibration conditions and fails under extrapolation, because informal conventions carry no mathematical guarantees about behavior in novel regimes. Category theory makes the interfaces formal, the type constraints explicit, and the compositional rules auditable. A protein concentration and a cell count are not the same kind of mathematical object, even when both end up encoded as bare floating-point numbers — and treating them as interchangeable introduces errors that propagate silently through the model and compound when conditions change.
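The flavor of the idea — though emphatically not the paper's formalism — can be sketched in a few lines: tag each interface value with its kind, and refuse to compose sub-models whose interface kinds disagree. The sub-models and kind names below are hypothetical:

```python
# Toy sketch of typed model interfaces: a protein concentration and a cell
# count cannot be silently swapped, because composition checks the interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    kind: str    # e.g. "protein_conc_nM" or "cell_count" (hypothetical tags)

def compose(f, g, out_kind_f, in_kind_g):
    """Couple two sub-models only if their interface kinds match."""
    if out_kind_f != in_kind_g:
        raise TypeError(f"interface mismatch: {out_kind_f} -> {in_kind_g}")
    return lambda q: g(f(q))

# Hypothetical sub-models from two different disciplines:
def signaling(q: Quantity) -> Quantity:      # concentration -> concentration
    return Quantity(q.value * 2.0, "protein_conc_nM")

def proliferation(q: Quantity) -> Quantity:  # cell count -> cell count
    return Quantity(q.value + 100, "cell_count")

# Coupling them type-checks the interface instead of passing bare floats:
try:
    compose(signaling, proliferation, "protein_conc_nM", "cell_count")
except TypeError as e:
    print("rejected:", e)
```

In a real categorical treatment the "kinds" become objects, the sub-models become morphisms, and the composition rule becomes a theorem rather than a runtime check — but the payoff is the same: the mismatch is caught at the interface instead of propagating silently.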
Generative dynamics addresses the question that the other two frameworks leave unanswered: how do interventions actually propagate through living systems, and why do some inputs cascade into transformative change while others dissipate without effect? The framework identifies four mechanisms — cross-domain feedback loops, temporal leverage, network effects, and contextual amplification — that together explain the intervention propagation problem. Contextual amplification is the most consequential for precision medicine: the same drug molecule does not produce the same effect in different patients, not because of differences in its mechanism, but because the genomic, metabolomic, and immune context constitutes what the drug becomes. This is not a statistical nuance — it is the central mathematical challenge of personalized therapeutics.
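To see what "context constitutes what the drug becomes" means operationally, here is a deliberately crude toy — invented for this post, with made-up field names and numbers — in which the identical dose is transformed by the context it enters, and either cascades or dissipates:

```python
# Toy model of contextual amplification (an illustration only, not the
# paper's mechanism): the same dose, transformed by patient context.
def drug_response(dose, context):
    uptake = dose * context["receptor_expression"]       # genomic context
    cleared = uptake * (1 - context["clearance_rate"])   # metabolomic context
    # immune context gates whether the remaining signal cascades or dissipates
    if cleared > context["immune_threshold"]:
        return cleared * 10.0   # cascade into large effect
    return cleared * 0.1        # dissipate without effect

patient_a = {"receptor_expression": 0.9, "clearance_rate": 0.2,
             "immune_threshold": 0.5}
patient_b = {"receptor_expression": 0.2, "clearance_rate": 0.6,
             "immune_threshold": 0.5}

print(drug_response(1.0, patient_a))  # same dose, large effect
print(drug_response(1.0, patient_b))  # same dose, negligible effect
```

The point of the toy is the shape of the function, not its numbers: the dose never appears alone, only composed with context — which is why averaging over patients, as population-level trial statistics do, erases exactly the structure that matters.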
The cost of the gap
Roughly 85–90 percent of drug candidates that demonstrate efficacy in preclinical models fail in clinical trials. A substantial proportion of those failures trace to the same source: preclinical models treat drug effects as context-independent, and real patients present a range of biological contexts that constitute different transformation environments. This is not a data problem — more patient data fed into the same mathematical framework will not close the gap. It is a mathematics problem.
The same logic applies to planetary stewardship. The Earth system contains leverage points — places and moments at which interventions produce disproportionately large and lasting effects — but current Earth system models have no formal framework for identifying them. Conservation and climate policy are therefore conducted without reliable guidance about which interventions compound and which merely buffer. The mathematics of temporal leverage and cross-domain feedback loops is the formal basis for answering that question, and its absence from current Earth system modeling is a direct limit on the precision of everything we are trying to do.
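The compound-versus-buffer distinction has a minimal caricature — again a toy invented for this post, not an Earth system model: the same one-time intervention grows without bound under a reinforcing feedback loop and fades to nothing under a damped one, and nothing about the intervention itself distinguishes the two cases.

```python
# Toy contrast: an identical one-time pulse injected into a reinforcing
# feedback loop (gain > 1) versus a damped one (gain < 1).
def propagate(feedback_gain, pulse=1.0, steps=20):
    x = 0.0
    trajectory = []
    for t in range(steps):
        x = feedback_gain * x + (pulse if t == 0 else 0.0)
        trajectory.append(x)
    return trajectory

compounding = propagate(feedback_gain=1.2)   # reinforcing loop: effect grows
buffering   = propagate(feedback_gain=0.5)   # damped loop: effect fades

print(compounding[-1])   # the intervention has amplified
print(buffering[-1])     # the intervention has dissipated
```

The hard problem, of course, is that in a real coupled system the gain is not a known constant but an emergent property of cross-domain structure — which is precisely what the missing formal framework would have to compute.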
A research program, not a finished theory
The paper does not claim to have solved these problems. What it argues is that the shape of the solution is visible, that the components exist in fragments across three mathematical traditions, and that integrating them is both conceptually motivated and practically urgent. The synthesis requires, at minimum, a dynamical categorical framework that allows formal structure to evolve over time; a rigorous theory of temporal leverage that connects bifurcation geometry to developmental irreversibility; a contextual type theory in which what a morphism does depends on the global configuration of the system; and a compositional dynamics that combines all three.
That is substantial open mathematical work. But the case for doing it is the case for precision medicine and planetary stewardship themselves — two projects that the 21st century cannot afford to get wrong, and that the mathematics of machines cannot get right.
The full white paper — including a systematic framework comparison, detailed treatment of each mechanism, mathematical directions for future research, a 32-term glossary, and an annotated bibliography of 43 works — is available here: Towards a Mathematics of Living Systems.