Advances in Bioscience and Biotechnology
Vol. 05, No. 11 (2014), Article ID: 51076, 8 pages

Replacement of Process Scale Chromatography by Counterflow Membrane Cascades

Edwin N. Lightfoot1*, Barış Ünal2

1Department of Chemical Engineering, University of Wisconsin, Madison, WI, USA

2Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA


Copyright © 2014 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

Received 27 July 2014; revised 17 September 2014; accepted 19 October 2014


Invention and innovation, always important, become ever more so in these fast-changing and competitive times, and both depend primarily upon the dynamic behavior of the human mind. Our underlying purpose here is to examine these creative processes and to provide means of making them more effective. The effort is timely because our understanding of perception and its interpretation by the human brain is advancing very rapidly, and experimental insight into mental activity can now be obtained with increasing effectiveness. The framework of our discussion is evolutionary dynamics, and its scientific basis lies in the rapidly developing neural sciences. The bulk of our discussion, however, deals with a specific example, the replacement of process-scale chromatography by membrane-mediated steady counterflow in downstream processing, because inventive activity must rest upon intimate knowledge of the systems available.


Keywords: Invention, Innovation, Membrane, Chromatography

1. Introduction

Our underlying goal here is to seek successful innovations in large and poorly understood process developments, or other creative activities, taking place in multi-dimensional, ill-defined and typically discontinuous parameter spaces. We represent these spaces as fitness diagrams (Figure 1, as in [1]) in which the value of our system or

Figure 1. A simple fitness diagram.

process is represented by the ordinate, or vertical axis, as a function of what can be a great many characteristics in a multidimensional graph. Since we cannot draw multidimensional graphs, we are in practice limited to looking at two dimensions at a time. A very simple example is seeking the highest point in a mountain range. Under foggy conditions we may end up at a secondary peak, and may be utterly incapable of reaching the primary peak without first descending to a lower altitude. If all we have to guide us is an altimeter, we cannot do this, and we will never reach the main peak. In seeking to optimize any process, our tendency is to use the equivalent of an altimeter, and it is all too common to settle for a suboptimal endpoint of the search. From time to time, however, an investigator with better vision and/or more daring does find a higher peak. Our goal is to make such successes more common.

A more abstract example is shown in Figure 1 [2]. No shift from one major peak to the other can be obtained by simple differential optimization routines: we will always return to our initial optimum. The situation is even more difficult if the parameter space involved is multi-dimensional or discontinuous. The basic problem is that no preferred process can be identified a priori, so there is no general description to optimize by continuing small changes. This problem was articulated by the French mathematician Henri Poincaré, and a common English translation of his claim takes the form "whereas it is by logic that one proves, it is by intuition that one invents" [3].
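The trap described above can be made concrete in a few lines. The two-peak "fitness" function, step size and starting points below are illustrative choices of ours, not anything taken from the cited figures; the point is only that simple gradient ascent converges to whichever peak's basin it starts in and can never cross the valley between them.

```python
from math import exp

def fitness(x):
    # Illustrative two-peak fitness curve: a secondary peak near x = 1
    # and a higher primary peak near x = 4.
    return exp(-(x - 1.0) ** 2) + 2.0 * exp(-(x - 4.0) ** 2)

def gradient_ascent(x, step=0.01, iters=20000, h=1e-6):
    # The "altimeter" strategy: follow the local slope until it vanishes.
    for _ in range(iters):
        slope = (fitness(x + h) - fitness(x - h)) / (2 * h)
        x += step * slope
    return x

# Starting in the basin of the minor peak we stay there; starting in the
# basin of the major peak we find the true optimum.
print(round(gradient_ascent(0.5), 2))   # near 1.0, the secondary peak
print(round(gradient_ascent(3.0), 2))   # near 4.0, the primary peak
```

No amount of extra iteration changes the first result; only a jump to the other basin, the kind of move this paper argues for, can.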

However, it appears that Leslie Valiant has best articulated this problem [4]. He suggests that the optimum way to resolve the many complex and difficult decisions we must continually make is first to seek probably approximately correct solutions, and that evolutionary pressures over evolutionary time have made us surprisingly apt at doing so. Valiant goes on to develop computer-based programs for this task, but we need not deal with these or other aspects of his fine book here. He points out that it is exactly here that the human mind is very often superior to even the most powerful computers. Note that even for the highly ritualized game of chess, computers are pressed to their limits when competing against human masters of the game. At the same time, becoming a chess master is no simple problem, and it is poorly formulated problems of this type that concern us here.

In a very real sense Valiant reinforces Poincaré's comment and tells us what we want to know: we must depend upon the poorly understood powers of the human mind to find the multi-dimensional basin dominated by a potentially better optimum. This is in essence Valiant's probably approximately correct solution. Once there, we can revert to our present practices of differential improvement. This combination is the essence of true professionalism. How, then, do we get there?

We start by briefly reviewing what is known about the behavior of the mind as summarized by Burton a few years ago [5]. He explains this via the chart reproduced here as Figure 2 [5]. Any input to the brain is broken up for storage into what can be very many simple elements, each of which is processed through what is called a hidden layer. The resulting elements can then be recombined, again through a hidden layer, to produce an output, which is our primary interest here. The processing is done by systems of neurons, each of which acts in effect as an on/off switch; these systems, however, can be quite complex. This is not so different from computer programs built from series of ones and zeroes. The hidden layers in turn are in part the result of prior individual experience and in part of genetic, or ancestral, experience.
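The picture of Figure 2, inputs decomposed through a hidden layer of on/off units and then recombined into an output, can be sketched in a few lines. The exclusive-or function is a convenient toy because no single layer of such switches can compute it; the weights and thresholds below are hand-set illustrative values, not a model of any real neural circuit.

```python
def step(x):
    # An idealized neuron: an on/off switch that fires when its weighted
    # input crosses a threshold.
    return 1 if x > 0 else 0

def network(x1, x2):
    # Hidden layer: two threshold units decompose the input pattern.
    h_or  = step(x1 + x2 - 0.5)   # fires if either input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Output layer: recombine the hidden activations into a decision.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", network(a, b))   # exclusive-or of the inputs
```

Change the hidden-layer weights and the same inputs yield a different output, which is the formal analog of "your red is not my red" below.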

Thus every audio-visual input is decomposed and stored or expressed in a variety of partial memories in ways

Figure 2. Neural networks.

strongly dependent on the past history of the receiver. It is precisely here that we depend on our past history, different for each of us. In Burton's words, "your red is not my red", and your response to a problem statement may well not be mine. The overall process, however, tends to be conservative for each of us, the result of many millennia of facing common enemies and other recurring problems. Humans are highly social animals, and major innovators are always rare. It is our underlying purpose to increase their number.

The work of scholars like Burton and others [6] is extremely exciting and offers much for the future of both education and innovation. This area is receiving a great deal of attention, much of it described by Loewenstein [7]. Two aspects of Loewenstein's analyses are of particular interest:

1) Only a very small portion of potential information impinging on sense organs such as the eyes and ears actually reaches the decision making centers of the brain;

2) A significant portion of this information can be understood only via quantum mechanics.

The filtering of information has evolved to focus on recognized needs of organism survival over the very long period of prior evolution, as well as on individual experience: only the very small fraction of incident information found critical for these purposes by Darwinian survival gets through. Quantum aspects have tended to be ignored by the biochemists and physiologists responsible for much of the available literature. Yet the brain is a very efficient quantum computer, and much of its computation takes advantage of parallel processing; our color vision, for example, can only be understood in quantum terms [7].

A more recent and detailed review is provided by Kaku [8]. We must become better acquainted with these developments in the future, but for the present we must depend upon the large amount of descriptive material already available. We do a bit of this below.

We must now ask what personal traits might permit us to generalize the above procedures and how such traits might be developed. Here we offer a short summary of the literature that we have to date been able to read and organize in our own minds. We do this in the hope that others will join in this humble beginning and combine these empirical findings with what we expect will be a rapidly increasing understanding of the brain and mind.

Personal Qualities and Situations

We begin by asking what personal qualities are most conducive to achieving our goals and how they most often depend upon the experience of the investigator, in some cases reaching back to childhood. We recall for the reader the risks taken in the first paper of this series, none of which could be proven acceptable in advance, but all of which were successful. Here we will never be far from Albert Einstein's recommendation of reading fairy tales for children hoping to become scientists, advice equally important for other creative endeavors. Our specific recommendation, in addition to studying further examples, is to study the history of successful innovators. But let the reader beware: much of the following text is in defense of woolgatherers and daydreamers!

It is generally agreed (see for example [9]) that drive, or grit, and curiosity are the two most important traits of successful innovators, and Hargittai backs up this assertion with short biographies of successful individuals, primarily Nobel laureates. We believe, however, that independence of, or even ignorance of, existing practice is also important. The latter is of course extremely dangerous. It does appear, for example, that the determination of DNA structure by Crick and Watson was helped enormously by a peek at Rosalind Franklin's data (see page 40 of [9]), and it is of course important to be as familiar as possible with the technical background of any process being developed. Finally, and we have yet to see these extremely important points expressed elsewhere, successful inventors must have both the courage to state their views and a suitable audience to receive them.

Another characteristic that has received little attention in the above sources is the simplified umwelt: the perceived characteristics of the environment [10]. Barrett shows how very small-brained animals can make surprisingly effective decisions by developing properly simplified umwelts. The concept of Occam's razor is an important example; the classic absent-minded professor may be another.

Extremely important are the past experience and ambiance of the investigator, and this is our major concern here. As Julie Andrews sang in The Sound of Music, "nothing comes from nothing, nothing ever could". This lighthearted truism is backed up in great depth and detail by Hofstadter and Sander [11], who argue through 578 large pages of small type their deceptively simple premise: analogy is the core of all thinking. Analogy can certainly be surprisingly powerful and useful, and it explains both Albert Einstein's respect for fairy tales and Friedrich Kekulé's discovery of the structure of benzene.

Past experience, including the accumulated experience of colleagues and associates, can at the same time be quite dangerous, thanks to the nature of decision making. This is explained very neatly by Burton in terms of both neural organization and our social nature. In essence, decisions are often weighted results of our past experiences, both within our brains and in our interactions with like-minded colleagues. Dreaming can greatly weaken these constraints and thus broaden the range of possibilities considered.

Most basically we should pay attention to the last sentence of Hargittai (see page 300 in [9]): in order to accomplish the most, whether in science or elsewhere, the best one can do is to do what one is best at doing. The subject of our previous paper, downstream processing of high-value pharmaceuticals, is presently led by the domestic industry, which is currently doing it best, a lead much envied in large parts of the developing world. It behooves these present leaders to continue using the best procedures possible, and the message of that paper is clear: present procedures are not the best possible, even at the level of current knowledge.

The next section is devoted to downstream processing, since all inventive activity relies on the specifics of a given application, but all creative individuals share the basic traits summarized above. This important aspect of creativity, in art as well as in science and technology, is treated very perceptively by Nobel laureate Eric R. Kandel in his review of the late stages of the Austro-Hungarian Empire [12]. A somewhat less elegant aspect of scientific activity is described by Michael Brooks [13]. Life in the "ivory tower" is every bit as competitive as in the worlds of business, fine arts or sports!

In the majority of situations described in the accessible literature, attention has been confined to the limiting case of two well-defined governing parameters, justifiable in detail, which typically leads to a smooth continuum description and a single peak of fitness in continuous three-dimensional space. This approach is strongly favored by most directors of research and development organizations, by granting agencies and by most investigators themselves. It is itself highly intuitive but oversimplified, and quite often it leads to ultimate market failures: radically new revolutions in basic science, technology, medical procedures, business ventures, formal education or art forms are found by others, all at the ultimate expense of the over-cautious "innovator". Such timidity, or at least prudence, is probably a legacy of our ancestors leaving the relative safety of trees for open savannas filled with large predators too dangerous to face individually. In any event, the basis of this caution probably lies in the structure of the brain, specifically in the hidden layer between input and decision, where a "democratically" determined weighting among inputs decides the final choice. The collegial environment of the investigator has a similarly ambiguous effect. The large number of recent collapses of major industries and programs should be a warning to the overly timid, just as the failure of overly daring innovations should be a warning to the rash. We shall be seeking here a golden mean between small but provable steps and fast and powerful but unproven approximations.

2. Innovations for Designing of Downstream Processing

We now examine the recovery of pharmaceuticals from dilute feed solutions. Our use of fitness diagrams again consists essentially of choosing parameters that are not well defined in any provable sense but which we feel are suitable for our purposes. If well chosen, they decrease, often very markedly, the extent of parameter space that we must examine. That is, one must learn to make good guesses and to incorporate them into a promising strategy. That is the basic aim of this paper.

2.1. Fitness Diagrams

We again base our discussion on the concept of evolutionary fitness diagrams introduced above [1]. These may be visualized as plots with a single ordinate, the probability of success in an evolutionary environment, and multiple abscissas representing parameters, whether of biological species, industrial processes, commercial businesses or artistic endeavors. Unfortunately, the actual number of these parameters can be very large, so fitness diagrams can seldom be constructed with total confidence: they are only conceptual guides. We must make the approach manageable by judicious approximations suggested by our general knowledge and adapted to the problem under investigation. The ability to do this successfully is to our minds the very definition of a true professional.

Many of the above ideas were used in the development of our previous paper [14], beginning with the classification of tasks into three stages, concentration, fractionation and final purification, with focus on the first two (Figure 3). These initial simplifications have been used to good effect for many years, and it seems to us that they have successfully stood the test of time. This simplification, or one much like it, is necessary for effective use of development time. It is most unlikely to be challenged, but it has of course yet to be proven; that can only be accomplished by finding a superior process.

We are more concerned here with our more specific assumptions:

1) Validity of the Sherwood plot relating ultimate cost to initial solute concentration;

2) Formation of a true constant form front in the concentration step;

3) Effectiveness of the membrane guided counterflow in the final major step.

We now look at these assumptions in detail.

2.2. Origin and Justification of the Sherwood Plot

This plot originated as something very close to daydreaming by the late Professor Thomas K. Sherwood of MIT, and one of us (ENL) was fortunate to follow that development from the beginning: Tom Sherwood was a consultant to the Charles Pfizer Company while ENL was an employee just starting his professional career, and the two became friends. Those were still the heroic days of our profession, and by far the majority of transport relationships were empiricisms expressed as binary relations on log-log graphs. Sherwood had plotted cost data against feed concentrations on a log-log graph, as shown in Figure 4, for a variety of systems [15], just for fun, and lo and behold he obtained a straight line to "engineering accuracy". He joked about this to his friends and did not take it very seriously. The earliest written evidence we can find is Figure 5, which appeared in the introduction to the mass transfer text he co-authored with Bob Pigford and Charles Wilke [16]. It was given only as a sample of commodity prices and to make the general observation that concentration was a substantial source of expense. As additional data accumulated, however, the correlation looked better and better, and more extensive data were in essential agreement with his initial guess.

No longer a joke, this early guess was followed up by many other investigators (e.g. ENL and MCMC) and widely accepted at the order-of-magnitude level. It is only a very minor step to realize that this capture step is quite probably by far the most expensive of the recovery process, which justifies the attention we gave it; it needs no further attention here. The feed to the next step of the process begins at a correspondingly higher solute concentration.
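Sherwood's straight line on log-log axes is simply a power law, price ≈ a·(concentration)^b. A short sketch recovers the slope and intercept by least squares in log space; the concentration and price pairs below are rough illustrative magnitudes of our own, spanning the commodity-to-biological range, and are not the data behind Figure 4 or Figure 5.

```python
import math

# Hypothetical (feed mass fraction, price $/kg) pairs -- illustrative only.
data = [(0.5, 1.0), (0.1, 8.0), (0.01, 60.0), (1e-4, 5e3), (1e-6, 4e5)]

logx = [math.log10(c) for c, _ in data]
logy = [math.log10(p) for _, p in data]

# Least-squares line in log space: log10(price) = a + b * log10(conc).
n = len(data)
mx, my = sum(logx) / n, sum(logy) / n
b = sum((x - mx) * (y - my) for x, y in zip(logx, logy)) / \
    sum((x - mx) ** 2 for x in logx)
a = my - b * mx

print(f"price ~ {10**a:.2f} * conc^{b:.2f}")  # slope near -1, the Sherwood trend
```

A slope near -1 is exactly the "engineering accuracy" claim: halving the feed concentration roughly doubles the ultimate cost, which is why the capture step dominates.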

Figure 3. Schematic of downstream processing: 1) concentration; 2) fractionation; 3) purification [14].

Figure 4. Cost of various commodities [15] .

Figure 5. Cost data vs. feed concentration for various pure products [16].

2.3. The Concept of a Constant Form Front

The assumption of a constant-form front depends upon a wide and deep familiarity with the dynamics of solute adsorption in packed columns. The development of constant-form solute fronts in this first recovery step is backed by so much data and mathematical analysis that it is impossible to choose a particular source. One works with a feed concentration high enough essentially to saturate the adsorbent being used, which ensures that the equilibrium ratio of product solute concentration in solution to that adsorbed on the adsorbent particles increases with fluid-phase solute concentration: under these conditions the intra-column solute profile approaches a constant-form front (Figure 6). Acceptable column operations in laboratory-scale chromatographic studies have typically been long enough to recover all entering product solute and to approach this asymptote very quickly. It only remains to produce this asymptotic condition in a truly steady-state operation. All one needs for this is a change of coordinate system that makes the front stationary, as shown in Figure 7.
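The coordinate change behind Figure 7 is quantified by a simple shock mass balance: a constant-form front travels at a fraction of the fluid velocity set by how strongly the adsorbent loads, and subtracting that speed makes the front stationary. A minimal sketch follows; the Langmuir isotherm parameters and bed properties are purely illustrative assumptions, and the chord q(c_feed)/c_feed is a standard simplification for the effective dq/dc across a shock.

```python
# Front velocity of a constant-form adsorption front:
#   u_front = v / (1 + ((1 - eps)/eps) * dq/dc)
# All parameter values below are illustrative assumptions.

def langmuir(c, q_max=100.0, K=50.0):
    # Favorable isotherm: the distribution ratio q/c falls as c rises,
    # the condition for a self-sharpening, constant-form front.
    return q_max * K * c / (1.0 + K * c)

v = 1.0e-3        # interstitial fluid velocity, m/s
eps = 0.4         # bed void fraction
c_feed = 1.0      # feed concentration, mol/m^3

chord = langmuir(c_feed) / c_feed            # effective dq/dc across the front
u_front = v / (1.0 + (1 - eps) / eps * chord)

print(f"front speed = {u_front:.3e} m/s")
# In a frame translating at u_front the front is stationary; a physical
# counterflow of solids at this speed achieves the same steady state.
```

The strongly loading adsorbent makes u_front orders of magnitude smaller than v, which is why only a gentle solids counterflow is needed to hold the front in place.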

However, it remains to choose a physical system for doing this. At the moment simulated moving beds, or SMBs, can be utilized with no significant modifications, but they are expensive both to buy and to operate. The

Figure 6. A schematic of a batch adsorption column [14] .

Figure 7. The results of changing coordinates.

high pressure drops in commercial SMBs still make this an expensive operation in terms of both capital and operating costs. Once again a separation process is limited by fluid-mechanic factors.

It is our opinion that the membrane cascades suggested in Figure 8 and Figure 9 are a far better long-term solution. The orders of magnitude, shown in Figure 10, are all quite favorable, and this application could go a long way toward furthering the use of slurry counterflows. The combination of counterflow conditions plus crossflow stages is a direct analog of many existing large distillation columns.

Note that the membranes are primarily used to control the macroscopic motion of adsorbent particles parallel to their surfaces, while solvent flows across them provide the counterflow needed for effectiveness.
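The distillation analogy can be pushed one step further with the classical stage-to-stage balance for an ideal counterflow cascade: with a constant stage absorption factor, the uncaptured fraction falls off geometrically with stage count (the Kremser relation). The factor A = 2 and the stage counts below are illustrative assumptions, not design values for the cascades of Figure 8 and Figure 9.

```python
def kremser_recovery(A, N):
    # Fraction of entering solute captured by an ideal N-stage counterflow
    # cascade with constant absorption factor A (Kremser equation):
    # uncaptured fraction = (A - 1) / (A**(N+1) - 1).
    if A == 1.0:
        return N / (N + 1.0)
    return (A ** (N + 1) - A) / (A ** (N + 1) - 1.0)

# Illustrative: with A = 2, each added stage roughly halves the loss.
for N in (1, 2, 4, 8):
    print(N, round(kremser_recovery(2.0, N), 4))
```

This geometric gain with stage count is what makes a modest cascade of cheap membrane stages competitive with a tall, high-pressure packed column.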

We have also shown that similar cascades, in which selective membranes served as the primary separating mechanism, performed quite well at laboratory scale [17]. However, fouling is always a problem in process-scale separations based on selective membranes, and as a result such systems have not been successful. In our system the membrane "pores" need only be small compared to the adsorbent particles. The particles are the source of selectivity, and they must be identical to those used in the laboratory-scale system except for particle diameter.

Here, then, we were able to develop a novel system providing acceptable characteristics at lower manufacturing cost, using only familiar components. The key was, as suggested above, to find the basin of a higher peak in an appropriate fitness diagram.

3. Looking Ahead

Space limitations have forced us to discuss just one specific example of the very large basic transport literature. We showed there that macroscopic information can provide a useful "jump" to more promising regions of parameter space. Much of this macroscopic information in turn is accessible in terms of dimensionless measures of velocity, temperature and species composition: friction factors (f), scaled heat transfer coefficients (Nusselt (Nu) or Biot numbers) and scaled mass transfer coefficients (Sherwood numbers (Sh)), expressed in terms of dimensionless groups of independent variables. These are defined and used to solve simple problems in many readily accessible sources, such as any edition of Transport Phenomena [18] as well as the simplified edition in press (Bird, Stewart, Lightfoot and Klingenberg). The dimensionless groups upon which these quantities depend may also be written as ratios of time constants. Thus, for diffusion to parallel walls a distance W apart, maintained at zero solute concentration, in low-Reynolds-number flow over a length L, the solute exit concentration depends only upon the ratio given by Equation (1).

Figure 8. A simple square cascade [14] .

Figure 9. Packed columns vs. ideal counterflow slurry cascades [14] .

Figure 10. Orders of magnitude for designing membrane cascades.


(L/⟨v⟩)/(W²/D) (1)

For those without access to transport books, Wikipedia [19] does an excellent job of introducing these principles.
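To illustrate, the ratio in Equation (1) can be read as a competition between two time constants: the time L/⟨v⟩ a solute element spends in the channel and the time W²/D it needs to diffuse to a wall. A small sketch follows; all numerical values are illustrative assumptions of ours for a protein-sized solute in an aqueous channel.

```python
# Ratio of time constants for diffusion to parallel walls a distance W
# apart in slow flow over a length L: convection time (L / v) over
# diffusion time (W^2 / D).  Values below are illustrative assumptions.

L_chan = 0.10      # channel length, m
W = 1.0e-4         # wall spacing, m
v = 1.0e-3         # mean fluid velocity, m/s
D = 1.0e-10        # solute diffusivity, m^2/s

t_conv = L_chan / v        # time a solute element spends in the channel
t_diff = W ** 2 / D        # time to diffuse across the gap

ratio = t_conv / t_diff
print(f"ratio = {ratio:.2f}")
# ratio >> 1: solute reaches the walls well before leaving, so the exit
# concentration approaches zero; ratio << 1: most solute escapes.
```

Such order-of-magnitude ratios are exactly the "macroscopic information" that lets one jump to a promising region of parameter space before any detailed design.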

We stop here for lack of time and space, with the hope of returning to deeper and more useful discussions.

4. Future Impact

Our primary goal in writing this and the first paper on this topic was to suggest how quickly and economically useful novel processes can be produced by those willing to use their imagination. More generally, the work is inspired by the great strengths of modern biology, both the power of evolutionary techniques and the great store of detailed knowledge upon which it draws. When one of us (ENL) and a very few colleagues first ventured into biotechnology (in 1947, in ENL's case), it was with a sense of superiority. Now, however, it is Mother Nature who is our teacher, and she is more impressive with each passing year.


  1. Johnson, N. (2008) Sewall Wright and the Development of Shifting Balance Theory. Nature Education, 1, 52.
  2. Loewe, L. (2009) A Framework for Evolutionary Systems Biology. BMC Systems Biology, 3, 27-57.
  3. Gray, J. (2012) Henri Poincaré: A Scientific Biography. Princeton University Press, Princeton, 608.
  4. Valiant, L. (2013) Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World. Basic Books, New York, 208.
  5. Burton, R. (2008) On Being Certain: Believing You Are Right Even When You’re Not. St. Martin’s Griffin, New York, 272.
  6. Kellogg, R.T. (2013) The Making of the Mind: The Neuroscience of Human Nature. Prometheus Books, New York, 340.
  7. Loewenstein, W. (2013) Physics in Mind: A Quantum View of the Brain. Basic Books, New York, 352.
  8. Kaku, M. (2014) The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind. Doubleday, New York, 400.
  9. Hargittai, I. and Djerassi, C. (2011) Drive and Curiosity: What Fuels the Passion for Science. Prometheus Books, New York, 338.
  10. Barrett, L. (2011) Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton University Press, Princeton, 304.
  11. Hofstadter, D. and Sander, E. (2013) Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Basic Books, New York, 592.
  12. Kandel, E. (2012) The Age of Insight: The Quest to Understand the Unconscious in Art, Mind, and Brain, from Vienna 1900 to the Present. Random House, New York, 636.
  13. Brooks, M. (2013) Free Radicals: The Secret Anarchy of Science. Overlook TP, New York, 320.
  14. Lightfoot, E.N. and Cockrem, M.C.M. (2013) Complex Fitness Diagrams: Downstream Processing of Biologicals. Separation Science and Technology, 48, 1753-1757.
  15. Nystrom, J.M. (1984) Product Purification and Downstream Processing. 5th Biennial Executive Forum, A. D. Little, Inc., Boston.
  16. Sherwood, T., Pigford, R. and Wilke, C. (1975) Mass Transfer. McGraw-Hill Inc., New York, 512.
  17. Gunderson, S.S., Brower, W.S., O’Dell, J.L. and Lightfoot, E.N. (2007) Design of Membrane Cascades. Separation Science and Technology, 42, 2121-2142.
  18. Bird, R.B., Stewart, W.E. and Lightfoot, E.N. (1960) Transport Phenomena. 1st Edition, John Wiley & Sons, Hoboken, 808.
  19. Transport Phenomena. Available from:


*Corresponding author.