Creating the Field of the “Physics of Cancer”


A recent review paper on metastatic spread brings to the fore one of the most important questions that needs to be addressed in the creation of a physics of cancer. The basic issue is: why do the cells in the primary tumor develop the genetic and epigenetic properties that enable them to migrate through the body and establish successful (from the tumor’s perspective) metastatic outposts? It is hard to believe that these properties are selected for by direct evolutionary pressure, as the probability of success is extremely small and cells acting independently would have a huge advantage if they just stayed put. An answer to this puzzle could inform treatments that deter metastatic spread, which, after all, is the cause of death in the vast majority of cases.

Before proceeding to possible resolutions, it is important to point out that metastatic growth seems to be difficult even for cells that have already done it once. One can take cells from a metastasis and implant them into fresh tissue (in a mouse model, of course) and determine the rate of successful colonization. One can then repeat the process several times to enrich for growth potential. It seems that even these cells can form secondary tumors with low probability (perhaps 1%). So, whatever the final bottleneck is in metastatic inefficiency, it does not appear that one can totally eliminate it by direct selection.

The simplest proposition to explain these observations would be to combine random genetic variation with the need for a minimum nucleation size for tumors. The idea here is that as the primary tumor becomes more and more genetically unstable, clones emerge that are motile and that have a much reduced nucleation barrier. Exactly how long this would take to occur is impossible to estimate, given our ignorance regarding the mapping from genotype to phenotype. This scenario could explain examples where cells leave the primary tumor quite early on (it is undoubtedly easier to turn on motility) but fail to establish metastases until much later. The delay could just be a numbers game (as the primary tumor grows, more cells are released, increasing the cumulative odds), but surmounting an exponential barrier is usually accomplished by lowering the barrier, not by increasing the prefactor. This concept does not explain why the barrier cannot drop to zero for the proper clone. It may be that colonization occurs before that point (due to the large prefactor) and hence the cells never get a chance to become fully capable; or it may be that a majority of the cells switch out of the capable state once growth begins, much as normal stem cells differentiate, leaving only the residual “cancer stem cells” still colonization-capable. This could be tested by attempting to enhance the percentage via selection based on stem-cell markers.
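The contrast between the numbers game and barrier lowering can be made concrete with a back-of-the-envelope sketch. This is purely illustrative (the Arrhenius-style rate form and all parameter values are assumptions, not anything from the text): a tenfold increase in the prefactor buys exactly one factor of ten, while barrier reductions compound exponentially.

```python
import math

def colonization_rate(prefactor, barrier):
    """Nucleation-style rate: attempt frequency (prefactor) times the
    exponential cost of surmounting the barrier (in units of the
    relevant noise scale). Illustrative only."""
    return prefactor * math.exp(-barrier)

# Baseline: some assumed attempt rate and barrier height.
base = colonization_rate(1e6, 20.0)

# Numbers game: the growing tumor sheds ten times as many cells,
# i.e. the prefactor grows tenfold -- the rate grows only linearly.
more_cells = colonization_rate(1e7, 20.0)

# Barrier lowering: a clone that shaves just ln(10) ~ 2.3 off the
# barrier achieves the same tenfold gain, and each further decrement
# of ln(10) multiplies the rate by another factor of ten.
lower_barrier = colonization_rate(1e6, 20.0 - math.log(10))

print(more_cells / base)      # tenfold gain from the prefactor
print(lower_barrier / base)   # the same gain from a small barrier drop
```

This is why, in the nucleation picture, one expects selection to act on the barrier rather than on sheer cell numbers once both routes are available.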

A different conceptual approach is taken by Norton and co-workers, with related work by Carlos Maley. In their formulation, cells are selected for migratory properties so as to help the primary tumor grow more efficiently. This motility may allow both for local motion and for what has been called self-seeding, namely that cells actually leave the tumor, circulate through the blood and lymph systems, and return to the same tumor. Some evidence for this has been presented, again of course in a mouse model. It seems to me that this approach may help explain the motility transition but does not really address why these cells can colonize foreign tissues; this again appears to be left to chance genetic changes in these now-motile cells, changes that presumably are not needed to reattach to the primary tumor. This could all be investigated experimentally, but has not yet been done.

A last speculative idea returns to the notion of differentiation, but now within the context of the primary tumor. There are many cases in biology in which intercellular signaling maintains a small subpopulation of cells in a state which makes no sense from a purely individualistic perspective. One well-known example is the persister phenotype in E. coli: these cells have greatly diminished growth so as to provide an advantage to the colony as a whole. Tumors might create a reservoir of seeding cells (with both migratory and colonization capabilities) as insurance against environmental disaster; these cells would have some characteristics of stem cells, but need not be a fixed subpopulation. This idea meshes with the previous explanation for why the typical cells in a metastasis are not very colonization-capable, but many, many questions remain. The role of genetic versus epigenetic degrees of freedom is obscure; cancer cells seem to make much more use of genetic variation than do, for example, bacterial colonies. It is less obvious how to explain “metastatic tropism” (the strong correlation between primary tumor site and metastasis location); in the nucleation picture, one can easily imagine a barrier that varies with target tissue type. Perhaps even the maintained seeding cells are not equally well-matched to alternate sites for spread, and probably even these cells have some sort of growth barrier to surmount.

It is our job as physicists to connect these conceptual pictures with doable experimental tests. It is only then that we can truly be on the road to creating the field of the “physics of cancer”.

2 Responses to “Creating the Field of the “Physics of Cancer””

  1. Sui Huang
    June 13, 2013 at 3:44 pm

    WHY CANCER RESEARCH NEEDS THEORETICAL PHYSICS

    Biologists sitting on piles of omics results have data but no ideas. Physicists sitting in their office have ideas but no data. Can we combine the two to create a group of scientists who have ideas AND data? Such researchers would benefit cancer research. But this is more challenging than it sounds – yet not impossible.

    I would like to offer some elementary reasons for why I think cancer biology is in urgent need of an influx of ideas from physics, and in particular, theoretical physics. My claims can only be uttered by a biologist, for I will essentially state that current biologists do not “think” – and being an experimental biologist by training myself will protect me from the suspicion of bias in this advocacy for theory. (Conversely, it will expose me to the accusation of treason from the emerging physics-bashing bio-blogosphere: http://freethoughtblogs.com/pharyngula/2012/11/20/aaargh-physicists-again/; http://genotripe.wordpress.com/2012/11/19/cancer-does-not-give-us-view-of-a-bygone-biological-age/)
    In modern life sciences a deepening dichotomy has developed that splits scientists into two broad epistemic groups: there are people who have ideas but not data (the thinkers and theorists), and there are people with data but no ideas (the vast majority of cancer biologists). I would say, perhaps 90% of researchers actively involved in cancer genome sequencing projects, such as NCI’s TCGA program, belong to the latter group. Deep sequencing has replaced deep thinking. Scientists in these two epistemic groups inhabit almost disjoint spheres of research operation and culture. It is then obvious that combining these two groups will establish the desired configuration of ‘having ideas AND data’.

    Unfortunately, the surge of well-meant attempts to promote interdisciplinary approaches to cancer, nicely epitomized by this PHYSICS & CANCER initiative, do as much to expose the fundamental incompatibility of these two epistemic cultures as they unite disparate approaches. The challenge we face in achieving a synergism between physicists and cancer biologists lies in a rarely articulated but prevailing asymmetry: while those that have ideas but no data naturally crave data, those with data but no ideas do not appreciate what they lack: they hold theories in contempt and avoid “deep thinking”. They question the very notion that any meaningful knowledge can come from reasoning in the abstract alone. We witness a historical reversal of roles: the biologist’s proverbial “physics envy” has transformed into the arrogance of empiricism, which now comes in the extreme form of exclusively data-driven explanation. Herein formal hypotheses and theory take a backseat. Conversely, the physicists’ notorious intellectual arrogance has ceded to the hunger for data. The recognition that now the time is ripe to test long-held hypotheses has made them eager to reach across the disciplinary divide.

    That biologists “do not think” is an exaggerated statement made for argument’s sake. Biologists, of course, do think: some of them do think rigorously about control experiments to separate out confounding effects, about the sample size needed to achieve statistical significance, about the best measurement methods; they think about how to analyze their data, how to find statistically robust patterns, etc. They ‘reason’ that knocking out gene X will prove its postulated critical role in process Y. Thus, all of contemporary biologists’ intellectual activity is invested in menial problems that serve to support the process of observation and to cement the mechanistic role of their favorite protein beyond doubt. There is no “deeper” thinking about the existence of principles that would explain why something is the way it is (and not otherwise) and about the generality of such explanations (for a certain class of systems). Rigor has moved from scholarly reasoning to data collection.

    Thus, I use the term “deep thinking”, hopefully without sounding condescending, just as an operational shorthand for a type of academic thinking that I accuse experimental biologists of ignoring, suppressing or perhaps even being incapable of: the reasoning about general, elementary principles and laws that govern biological phenomena and hence explain them, without considering the many distracting and obscuring idiosyncratic details. Such thinking, which aims at formulating theories that can predict entire classes of behaviors rather than specific instances, does not come naturally to biologists. The last major achievement of such thinking in the life sciences was in macro-biology, most prominently epitomized by Darwin’s theory of evolution, followed by population genetics that led to the Modern Synthesis. At the level of micro-biology, which deals with molecules and cells within one organism, C. Waddington was one of the leaders in the attempt to move beyond description and to formulate general principles. But alas – the tender buds of such efforts were brutally crushed by the arrival of molecular biology, which brought us a different type of explanation – a huge success for science, certainly, but one won at the expense of another front of advance in understanding life that many had hoped for. Reflection (on fundamental principles) was replaced by reflexes (in discovering an explanatory molecule – such as an oncogene).

    In contrast to the quest for general principles and laws that represent the rerum causas, molecular biology has evolved its own epistemic habits with an idiosyncratic scheme of explanation that lacks any formal theory. This, I think, is unique in the natural sciences. The sole explanatory principle that makes molecular biologists happy is “molecular pathways”, the embodiment of causation, which they now celebrate as “mechanistic understanding” – their rerum causae. N. Tinbergen and E. Mayr called this kind of explanation of a phenotype ‘proximate’, contrasting it with the ‘ultimate’ explanation that applies to entire classes and that in physiology is best represented by the Darwinian principle of selective advantage – a general principle. Proximate explanations, or their equivalents in molecular biology, are embodied by a molecular mechanism, the “molecular specificity” which elates biologists (“Wow! – Wnt signaling is involved in cancer drug resistance”). The biological infatuation with the concept of ‘specificity’ of a causal mechanism is almost unknown in physics and directly opposed to the theoretical physicists’ emphasis on the generic, if not the universal. I would contend that knowledge of a molecular mechanism, however concrete and specific, does not constitute understanding but is merely a description of an observation at a lower level (which, granted, may have utility for drug development – see below). The Lucretian school’s notion of ‘felix qui potuit rerum cognoscere causas’ is thus relative.
    For those physicists not familiar with the operation of the mind of a broad class of contemporary “non-thinking” biologists, let me illustrate this in the following way: Let’s assume we are all scientists from an automobile-free planet arriving on planet Earth. Enthralled by the sight of the moving cars, the biologists among us would experiment with them and soon find out that the deeper you press the gas pedal, the faster the car moves, and proudly announce the causal relationship which explains why cars move. The physicists in the expedition would consider this a mechanistic detail and prefer to understand the thermodynamics: why is it possible at all that a vehicle moves forward, without apparent external help, and has to do so under some conditions, but has a limit? Not understanding all the parts they see when opening the hood, they would opt to ignore them and instead develop a model of the kind of the Carnot cycle – which offers the ultimate explanation of why it is possible that cars are auto-mobiles (to a well-defined extent). The biologists would then respond: That is not useful because it does not tell me what I need to do to make this car drive.

    Why do current biologists espouse observation and description of mechanistic causes but eschew “deep thinking” about principles? This has of course not always been so – just read any famed or not so famed biologist’s work of the pre-molecular biology era, when reasoning was not relegated to the discussion section of experimental papers. Over the past decades, with the success of molecular biology, the life sciences have, through education, training and self-selection of their members, thrown out the culture of thinking. The unfathomably powerful techniques of molecular cloning, which have given us a new looking glass to see the molecular events underlying biological processes, have not only obviated the need for abstraction (because now you actually see concretely what explains your observation) – but have also led to a complacency in reasoning about principles as ultimate causes. For instance, hardly any molecular cancer biologist would ask what lies behind all those common properties of cancer, such as the inexorable development of drug resistance, no matter in what type of cancer and to what type of treatment – a set of universal features which I find quite stunning. Instead, they prefer to emphasize how every tumor type, even every tumor, is different. Some even suggest that cancer is not one class of disease but consists of many diseases. This relieves them from the duty of identifying common, more abstract principles.

    The comfort of taking refuge in simple, proximate molecular explanations eventually evolved into active disdain for the quest for ultimate explanations that would involve “deep thinking” and possibly a formal theory. At the height of the success of molecular mechanism in the 1990s (before genomics took center stage), you had to check ‘deep thinking’ at the door before entering a wet lab. Empiricism reigned again, its dictatorship powered by the irresistible techniques of molecular cloning. Physics envy is gone. I remember when the journal CELL, in its early days and still in celebration of the revolution of molecular cloning, would not accept manuscripts with equations in them. Who can blame them, when the vanishing of a black band on a gel provided the irrefutable molecular explanation of an observation? Spoiled by this type of concrete causation epitomized by macromolecules, and embraced by journal editors, the discipline of rigorous, abstract and formal thinking has become vestigial.

    In cancer biology we see a particular ascendance of molecular concretism because it satisfies our natural longing for the immediate explanation of dreadful facts of life. The identification of as tangible a culprit of cancer as an oncogene has made the search for “deeper” explanations seem unnecessary. Instead, a proximate molecular explanation comes with the added bonus of offering a concrete target onto which we can more easily project the hope for a therapeutic intervention. But now the near-universally disappointing long-term success rate of target-selective therapy defies the mechanistic rationale behind such drug design and glaringly exposes a certain deficit in biological thinking or rather, non-thinking. The arrival of omics technologies pushes biologists further into the regime of “having data (– lots of it) and no ideas” – the last straw that killed thinking. Empiricism becomes “discovery science”, and the art of erecting formal hypotheses (not ad hoc “bets” formulated as hypotheses to trick the granting agencies) is now lost.

    Theoretical physics is the most suited among the “quantitative sciences” to come to the rescue and correct this current intellectual deficit in cancer biology. Yet it is perhaps the least welcomed by biologists among the physical sciences. By contrast, biophysicists (sensu strictiore), computer scientists, engineers, and mathematicians are welcomed by biologists – because they provide much needed help with instrumentation, measurements and data analysis. These quantitative disciplines have become subservient to molecular concretism: they serve to place the biologists’ hand-waving type of explanation into a quantitative framework but offer no really new perspective from a more encompassing category of thought. Such quantification of proximate causation is necessary but not sufficient. Attaching numbers, and perhaps some ad hoc mathematical models, to observations does not extend a mechanistic description into the realm of fundamental theoretical principles. A detailed description of the functional form of how gas pedal depression relates to acceleration is still just an ad hoc, proximate explanation without general validity.

    Thus, it is not only for the quantitative modeling of our observations that we cancer biologists need the help of physicists. Instead, a close collaboration aimed at developing a theory of cancer is what we should seek. Biologists no longer master the art of erecting a theory. It is because we have lost this capability that we should welcome physicists, who are at home in this domain of intellectual pursuit. While it is clear that knowing a causative molecular pathway is more useful than knowing the deeper reason for a universal behavior of cancer cells, much as knowing how to operate the gas pedal is more useful than understanding the Otto cycle of combustion engines, the utility of understanding fundamental principles in cancer should not even be a question. An unconditional, genuinely curiosity-driven quest for understanding nature is part of the imperative of science and must complement pragmatic cancer research, because the sole guide in an uncharted land is imagination and the explorative urge, not just an imagined destination on a dreamed-up map. This is the attitude that cancer biologists can learn from theoretical physicists. Thus, I find it utterly sad when cancer biologists proudly indicate that they are driven by the (noble) desire to cure patients but have no curiosity about the associated biology ( http://www.sciencemag.org/content/337/6092/282.short ). If we are to do everything we can using science to help cancer patients, then we also should do everything we can to do good science.

    The deficits in the “culture of thinking” in cancer biology presented here can be corrected by theoretical physicists, who would not only provide their specific technical expertise, say to model an optimal dosing scheme, but more urgently, should import their epistemic culture into experimental biology. But there is a danger: that the purist culture gets lost and instead, the theoretical physicist acquires the local customs of the new land she has immigrated to. Some colleagues indeed act like typical cancer biologists but turn out to have been theoretical physicists in their early life: on entering the realm of empirical cancer biology, a subset of theoretical physicists may also get used to checking their “deep thinking” at the door and adopt the culture of formulating proximate, easily fundable hypotheses. Such apparent assimilation may be due to self-selection of those physicists who prefer data-driven hand-waving, eschew reasoning about fundamental principles, and have therefore switched to biology. Yet we must now hope that there exist theoretical physicists who enter cancer biology with the conscious and resolute intention of bringing their way of thinking to cancer research. They will then enrich cancer biology by occupying a still available, but hidden, niche and become the new breed of cancer researchers who have both new ideas and data.

  2. Peter Jung
    January 23, 2014 at 9:19 am

    I can assure everyone that biologists can think (even deeply). I work with incredibly smart neuroscientists and I learn a lot. I think the problem in creating meaningful collaborations lies in part with physicists who do not want to appreciate the complexity of phenomena such as cancer and insist on explaining the world with Ising-type models. Just getting data from an MD will not make modeling meaningful or successful. Only real collaborations, where physicists actually listen to biologists and are willing to accommodate biological reality, will result in work which addresses relevant questions and provides meaningful answers.

    Why don’t you guys start talking to biologists before you condemn them as dumb? Maybe you can learn something 🙂
