Dealing With Complexity in Clinical Diabetes
The value of Archimedes
In this issue of Diabetes Care, two articles by Eddy and Schlessinger (1,2) describe a new type of mathematical model of diabetes known as Archimedes. Many readers might think that mathematical models are too abstract or simplistic to have any relevance to real clinical practice. Others may be skeptical because they have difficulty understanding how the model can do what the articles describe, and may be tempted to dismiss it as a black box.
Both of these responses would be natural and understandable, but incorrect. The significance of this new model, and the potential it provides for helping us improve the quality of diabetes care, are quite remarkable. For the first time, there is a model written at the level of biological and clinical detail at which clinicians make day-to-day decisions. And for the first time, there is a model that has been validated against real clinical trials, and many of them.
To truly appreciate the potential value of Archimedes, it is necessary to address the two main sources of skepticism. First, why are mathematical models needed at all? And second, why is this particular model so special? The answers to both these questions rest on a single issue—complexity.
Diabetes care is complex. If all we had to do was lower blood glucose levels in everyone with the disease, the answer would be clear-cut and simple. But when we begin to consider the risk of hypoglycemia, the side effects of medications, and the uncertainties of “adherence to treatment,” decision making becomes more difficult. Now add hypertension, dyslipidemia, and obesity, many or all of which are found in most people with diabetes, and the treatment options become even more complex. Next add age, race, gender, duration of disease, life expectancy, quality of life, comorbid conditions, and all of their treatments, and you’ve got yourself, well, a very complex clinical problem. Yet virtually all patients with diabetes reach this level of complexity. Somehow we must deal with it.
It would be great if there were clinical trials that addressed every possible combination of the factors listed above, and told us what to do and what to expect when presented with any given patient. But of course that is not feasible. Such an array of trials doesn’t exist and never will.
Unable to refer to clinical trials, we have little choice but to rely on “clinical judgment” or “expert opinion.” We do this in many ways. At every office visit, health professionals face their patients; gather information about signs, symptoms, and history; mentally review all of their knowledge and experience; and make decisions. Or groups of clinicians might discuss a case at grand rounds and reach a consensus about what to do. Or a committee of experts might sit around a conference table and formulate a clinical guideline, sharing their experiences and knowledge.
Whether we do it individually, in informal groups, or in committees, there is an assumption that we are able to accurately remember, understand, and integrate all of the important factors—such as the details of the individual case, the relevant epidemiological studies of risk factors and disease progression, the basic science related to the disease, the results of many clinical trials, our personal experiences, and more—and all in our heads. Unfortunately, for complicated decision making, the abilities of the human mind are hopelessly outmatched by the complexities of the problems we face. At best we can handle only a very limited number of factors in our heads. Just trying to keep up with published clinical information is impossible; there’s only so much the human mind can absorb and integrate. The result of all this is that we tend to work with highly simplified and incomplete pictures, and we often get wrong answers.
To help appreciate the problem, imagine the following patient: a 57-year-old woman with a BMI of 27 kg/m², an untreated LDL cholesterol of 160 mg/dl, and an 8-year history of type 2 diabetes currently controlled to an A1C of 8.5%. She also has occasional transient ischemic attacks, a father who had angina and died of a stroke at age 50, and (add your own details). Now have a colleague estimate the chance that this patient will have a heart attack before she reaches 65, and compare your colleague’s answer with yours.
The net result of clinical judgment and expert opinion, and the ultimate proof of their limitations, is that they often produce highly variable, often conflicting conclusions. Even worse, because thinking is inherently implicit, it is very difficult to dissect our thought processes to compare the paths that led different individuals to different conclusions. If we were to ever subject clinical judgment or expert opinion to the validations we demand for other serious methods for making clinical decisions (like mathematical models), they would undoubtedly fail.
If it is not feasible to conduct trials for every clinical problem we face, and if our attempts at thinking aren’t very reassuring, what else can be done? The answer is that there are ways to formally integrate the results of clinical trials and other studies to extend and support our thinking. This approach stems from the fact that thinking is actually a form of modeling. With thinking, the resulting model is a mental model—a mental picture or representation of what we think is reality. For most of our day-to-day living, thinking does just fine. But our thought models get in trouble when a problem becomes too complex and the knowledge base gets too deep. This is where other types of models can help.
In particular, mathematical models have been developed that can handle problems of great complexity, like landing a space probe on Mars, keeping track of millions of phone calls from one side of the earth to the other, or a bit simpler—scheduling an airline’s fleet of planes to utilize resources most efficiently. If properly constructed and validated, mathematical models can strengthen our ability to predict outcomes and make good decisions.
So what’s so special about the Archimedes model described in this issue? The answer is best appreciated in the context of two other popular types of models: regression equations and state-transition models (e.g., Markov models).
The regression model perhaps most familiar to clinicians is the Framingham risk score (3). Using seven variables (age, sex, LDL, HDL, blood pressure, presence of diabetes, and smoking), the equation can calculate the 10-year probability of a myocardial infarction (MI). This model has been enormously helpful and is widely used and respected (4). Yet as good as it is, it has several important limitations. First, the database used to construct the model contained very few people with diabetes, and concerns have been raised regarding its accuracy for this population (5–7). More apparent, though, is that it does not include many factors that everyone would agree are important, such as the duration of hypertension or dyslipidemia; the presence of symptoms such as angina; or the effects of treatments, family history, or other comorbid conditions and their treatments. Even a past history of MI is missing from the model. An additional problem is that the current model calculates a 10-year probability. What about the chance of an event in 5 years? All that notwithstanding, a regression model like the Framingham equation can do things the human mind cannot, like integrate information on the seven variables it includes. But like the human mind, it is quite limited.
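To make the general shape of such a regression model concrete, the sketch below computes a risk probability from the seven variables via a logistic equation. The coefficients here are invented placeholders for illustration only; they are not the published Framingham values, and the real score uses its own functional form and point system.

```python
import math

def risk_score(age, is_male, ldl, hdl, sbp, has_diabetes, smokes):
    """Logistic-style risk sketch over the seven Framingham-type
    variables. Coefficients are invented placeholders, NOT the
    published Framingham values."""
    coeffs = {"intercept": -7.0, "age": 0.06, "male": 0.5,
              "ldl": 0.01, "hdl": -0.03, "sbp": 0.015,
              "diabetes": 0.7, "smoker": 0.6}
    x = (coeffs["intercept"]
         + coeffs["age"] * age
         + coeffs["male"] * int(is_male)
         + coeffs["ldl"] * ldl
         + coeffs["hdl"] * hdl
         + coeffs["sbp"] * sbp
         + coeffs["diabetes"] * int(has_diabetes)
         + coeffs["smoker"] * int(smokes))
    # Logistic transform maps the linear score to a probability
    return 1.0 / (1.0 + math.exp(-x))

# A patient resembling the earlier example (hypothetical values)
p = risk_score(age=57, is_male=False, ldl=160, hdl=45,
               sbp=140, has_diabetes=True, smokes=False)
```

The point of the sketch is the model's rigidity: any factor not among the seven inputs, such as a prior MI or duration of dyslipidemia, simply cannot influence the answer.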
A state-transition or Markov model can provide a more realistic representation of a clinical problem (8–11). Briefly, it postulates that a person can be in one of several “states” of a disease and, at defined intervals, can move from one state to another as determined by “transition probabilities.” The interval between transitions is usually fixed and set at 1 year. Treatments can be included by postulating their effects on the transition probabilities. The expected utilities or costs of being in a state or making a transition to another state can be added to the model to help calculate those outcomes.
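The mechanics of a Markov model can be sketched in a few lines. The states and annual transition probabilities below are invented for illustration; a real diabetes model would estimate them from data.

```python
import random

# Hypothetical disease states and annual transition probabilities,
# for illustration only (each row sums to 1.0).
STATES = ["no_complication", "retinopathy", "cvd", "dead"]
TRANSITIONS = {
    "no_complication": {"no_complication": 0.90, "retinopathy": 0.05,
                        "cvd": 0.03, "dead": 0.02},
    "retinopathy":     {"retinopathy": 0.88, "cvd": 0.07, "dead": 0.05},
    "cvd":             {"cvd": 0.85, "dead": 0.15},
    "dead":            {"dead": 1.0},
}

def simulate(years, start="no_complication", rng=None):
    """Walk one simulated patient through annual state transitions."""
    rng = rng or random.Random(0)
    state, path = start, [start]
    for _ in range(years):
        r, cum = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state].items():
            cum += p
            if r < cum:       # pick the state whose cumulative
                state = nxt   # probability interval contains r
                break
        path.append(state)
    return path

path = simulate(10)  # one 10-year trajectory
```

Note how the sketch embodies the simplifications discussed below: time advances in fixed annual jumps, each state is a discrete lump, and the next transition depends only on the current state, not on the patient's history.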
Markov models are appealing because the concept of clinical states and transitions is easy to grasp and corresponds to the way we often simplify complex problems in our minds (e.g., “mild,” “moderate,” and “severe” hypertension). Another virtue is that it is easy to draw a diagram of the states (usually as circles) and the transitions between them (as arrows). This helps clinicians visualize the structure of the model, and creates a sense of understanding (although it can still be impossible to reconstruct the model from the information provided). A third virtue is that the mathematics used are relatively simple. Because of these features, Markov models have frequently been employed to estimate the effects of interventions, and there are many reports in the diabetes literature using this type of model (12–17).
Unfortunately, Markov models also have important limitations. The most obvious is that the way the model represents diseases is not the way reality works. Virtually every biological variable, like systolic blood pressure, is continuously valued, not lumped into a small number of discrete states. Time, and disease progression, do not move in annual jumps. States like “angina” or “cardiovascular disease morbidity” are enormously simplified representations of what is really going on when these events occur; a state transition model essentially assumes, for example, that all heart attacks can be lumped together. In general, the model also assumes that the prognosis of a patient in a given state is independent of events prior to arriving in that state. Furthermore, a transition probability is a simplistic representation of months of complex clinical management.
Additional problems arise from the need to have accurate transition probabilities for all possible pairs of states. Setting aside the fact that annual transition probabilities between artificially defined states are rarely collected in clinical research, the sheer number of transitions explodes exponentially as more states are added. This puts a tight limit on the number of states that can be included in a Markov model.
Although no clinician who looks carefully at the diagram of a Markov model would ever say that it captures most of the factors they consider essential to their own thinking, that does not mean that a Markov model has no valid use. They can be very helpful for problems for which all the transition probabilities are well known, and the necessary simplification of disease progression is not important. But a realistic clinical problem is far beyond their reach.
Given their limitations, a fair question is how accurate are Markov models in simulating reality? The answer is not encouraging. No Markov model has ever been subjected to the validation exercises described for the Archimedes model (2). The closest they have gotten to an independent validation is to make cross-comparisons. When, for example, two well-known Markov models of diabetes were handed identical patient descriptions and asked to estimate the chance of predefined outcomes (e.g., myocardial infarction, microalbuminuria), they gave very different answers (18). Even more disturbing is that it was impossible to determine which one, if either, was “correct.”
Ironically, although recent modeling guidelines recommend internal testing and debugging, predictive validations that would compare a model’s results to real results were explicitly discouraged (19). Perhaps acknowledging the limitations of the current state of modeling, the task force generating these guidelines stated that it “is inappropriate to demand that models be validated prospectively.”
This is the state of the art into which Archimedes enters. Although the Archimedes reports speak for themselves, it is worth highlighting some of the main features that distinguish this model from all others. First, it was built using an entirely different type of mathematics: differential equations. Second, it is programmed using an entirely different approach, called object-oriented programming. Together, these enable Archimedes to handle a virtually limitless number of variables, and to preserve the continuous nature of biological phenomena and time. They also enable the Archimedes model to include the underlying anatomy, physiology, and pathology relevant to diabetes, as well as signs, symptoms, tests, treatments, and most outcomes important in diabetes care. The model is also remarkably broad, including such things as patient and provider behaviors, care processes, system resources (e.g., facilities, personnel, equipment, supplies), utilities, and costs. All told, the virtual world in Archimedes is impressively complete.
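The contrast between annual Markov jumps and continuous, differential-equation-based progression can be illustrated with a toy model. The equation below is an invented stand-in, not one of the Archimedes equations: a continuously valued variable drifts upward while a treatment effect pulls it down, integrated in small time steps rather than yearly leaps.

```python
def euler_progression(x0, drift, treatment_effect, years, dt=0.01):
    """Toy continuous-time progression model via Euler integration.

        dx/dt = drift - treatment_effect * x

    Illustrative only: the point is that the variable evolves
    continuously (here in steps of dt years), rather than in the
    fixed annual jumps of a Markov model.
    """
    x, t = x0, 0.0
    trajectory = [(t, x)]
    while t < years:
        x += (drift - treatment_effect * x) * dt
        t += dt
        trajectory.append((round(t, 6), x))
    return trajectory

# e.g., an A1C-like quantity starting at 8.5, settling toward
# drift / treatment_effect = 2.0 under treatment (hypothetical units)
traj = euler_progression(x0=8.5, drift=0.2, treatment_effect=0.1, years=5)
```

Because every intermediate value exists, such a model can answer "what is the level at 2.3 years?" directly, whereas a state-transition model can only report which discrete state the patient occupied at the nearest annual boundary.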
What may be unsettling about Archimedes, however, is that its mathematical “infrastructure” will be difficult to grasp. As described above, to adequately address the enormous complexity of diabetes and its care, Archimedes uses advanced mathematics. Although the equations for the pathophysiology of diabetes are given in one of the articles (1), and the mathematical foundation of the model is described in another (20), the equations underlying Archimedes will be understandable only to readers who have had extensive mathematical training. Unfortunately, there is no way to represent the complexity of human biology and disease with simple formulas, states, or transition probabilities. This fact will necessarily limit the transparency and accessibility of the model to nonmathematicians. But the difficulty in understanding how Archimedes works should not be held against it. The mathematics of a computed tomography scan, or the programming inside a glucose meter, are also beyond the reach of virtually all clinicians, yet that does not limit our confidence in their accuracy or utility.
Nonetheless, the authors have provided a detailed description of the structure, variables, and assumptions they used. They have also indicated that they intend to make the model widely available, and in complete detail, on a nonprofit basis through a website, so that it can be used (and tested) by anyone. This too, has not been done before with other complex models, and thus Archimedes will be easily accessible to math aficionados and clinicians alike.
Even though the mathematics in Archimedes is beyond most readers’ training, there are two points that should be emphasized. The first is that Archimedes is not derived from a database, like a regression equation. The second is that no curve-fitting techniques were used to relate inputs to patient outcomes. The equations in Archimedes reflect the relationship between many variables, elucidated from basic science studies, epidemiological studies, and clinical trials. The calculation of any particular outcome is the result of dozens of equations all working together.
Given that the Archimedes model is so complex, and given that its insides are not very accessible to nonmathematicians, it is reasonable to ask why anyone should trust it. This question leads to another exciting aspect of Archimedes, one that also distinguishes it from all other models. In their second article (2), Eddy and Schlessinger take the unprecedented step of testing the model by comparing its results to the results found in actual clinical trials. In all of the validations (74 separate comparisons involving 18 different trials), the correlation between Archimedes and the real studies was 0.99. Even when the model had never seen the trial’s results before—totally independent validations—the correlation was still 0.99! Thus, for the variables important in diabetes care, and for the outcomes we are most interested in, Archimedes can accurately represent the reality as we know it—and it does so astonishingly well.
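The validation exercise amounts to pairing each trial's observed outcome rate with the model's predicted rate and computing a correlation across the comparisons. The outcome rates below are invented for illustration; they are not values from the Archimedes validations.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired predicted and observed
    outcome rates across a set of model-vs-trial comparisons."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: predicted vs. observed event rates for five
# hypothetical trial arms (illustration only)
predicted = [0.12, 0.25, 0.08, 0.31, 0.18]
observed  = [0.13, 0.24, 0.09, 0.30, 0.19]
r = pearson_r(predicted, observed)
```

A correlation near 1.0 across many such independent comparisons is what makes the reported 0.99 so striking: the model's predictions track the trials' observed rates almost point for point.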
In summary, these articles break new ground in the effort to model health care for a specific disease. The completeness and robust nature of Archimedes will make it a very useful resource for clinicians and many others. The American Diabetes Association, for example, is now using Archimedes to gain a better understanding of the nuances of prevention and screening, as well as the ramifications of the metabolic syndrome. Also, the American Diabetes Association is collaborating with the authors, Kaiser Permanente (the “owner” of Archimedes), and our generous sponsor Bristol-Myers Squibb, to ensure that Archimedes becomes available to everyone.
Archimedes promises to give us an extremely powerful, well-tested, and inexpensive tool to address a wide variety of clinically realistic questions. Coupled with other advances in medical science, we are facing an exciting new era in diabetes care.
Footnotes

R.K., as an employee of the American Diabetes Association, has provided guidance in the validation of Archimedes and is assisting in the design of studies using Archimedes to address questions of clinical importance. He holds no financial or proprietary interest in Archimedes.
A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.