Notes

  1. Although we have not performed a true meta-analysis, there are four recently published studies that seem to converge on this estimate that roughly one company in ten succeeds at sustaining growth. Chris Zook and James Allen found in their 2001 study Profit from the Core (Boston: Harvard Business School Press) that only 13 percent of their sample of 1,854 companies were able to grow consistently over a ten-year period. Richard Foster and Sarah Kaplan published a study that same year, Creative Destruction (New York: Currency/Doubleday), in which they followed 1,008 companies from 1962 to 1998. They learned that only 160, or about 16 percent of these firms, were able merely to survive this time frame, and concluded that the perennially outperforming company is a chimera, something that has never existed at all. Jim Collins also published his Good to Great (New York: HarperBusiness) in 2001, in which he examined a universe of 1,435 companies over thirty years (1965–1995). Collins found only 126, or about 9 percent, that had managed to outperform equity market averages for a decade or more. The Corporate Strategy Board’s findings in Stall Points (Washington, DC: Corporate Strategy Board, 1998), which are summarized in detail in the text, show that 5 percent of companies in the Fortune 50 successfully maintained their growth, and another 4 percent were able to reignite some degree of growth after they had stalled. The studies all support our assertion that a 10 percent probability of succeeding in a quest for sustained growth is, if anything, a generous estimate.
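    A rough side-by-side tabulation of the success rates quoted above (our own illustrative arithmetic, not part of any of the cited studies, and not a true meta-analysis since the studies measure somewhat different outcomes):
    ```python
    # Success rates as reported in the four studies cited in this note.
    studies = {
        "Zook & Allen, Profit from the Core (2001)":     (1854, 0.13),  # grew consistently for ten years
        "Foster & Kaplan, Creative Destruction (2001)":  (1008, 0.16),  # merely survived 1962-1998
        "Collins, Good to Great (2001)":                 (1435, 0.09),  # beat market averages for a decade or more
        "Corporate Strategy Board, Stall Points (1998)": (50,   0.09),  # 5% sustained + 4% reignited growth
    }

    for name, (sample_size, rate) in studies.items():
        print(f"{name}: {rate:.0%} of {sample_size} companies")

    # Unweighted average across the four studies -- a rough check that the
    # estimates cluster around one company in ten.
    average = sum(rate for _, rate in studies.values()) / len(studies)
    print(f"Unweighted average: {average:.0%}")
    ```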

  2. Because all of these transactions included stock, “true” measures of the value of the different deals are ambiguous. Although when a deal actually closes, a definitive value can be fixed, the implied value of the transaction at the time a deal is announced can be useful: It signals what the relevant parties were willing to pay and accept at a point in time. Stock price changes subsequent to the deal’s announcement are often a function of other, exogenous events having little to do with the deal itself. Where possible, we have used the value of the deals at announcement, rather than upon closing. Sources of data on these various transactions include the following:
    NCR
    “Fatal Attraction (AT&T’s Failed Merger with NCR),” The Economist, 23 March 1996.
    “NCR Spinoff Completes AT&T Restructure Plan,” Bloomberg Business News, 1 January 1997.
    McCaw and AT&T Wireless Sale
    The Wall Street Journal, 21 September 1994.
    “AT&T Splits Off AT&T Wireless,” AT&T news release, 9 July 2001.
    AT&T, TCI, and MediaOne
    “AT&T Plans Mailing to Sell TCI Customers Phone, Web Services,” The Wall Street Journal, 10 March 1999.
    “The AT&T-Mediaone Deal: What the FCC Missed,” Business Week, 19 June 2000.
    “AT&T Broadband to Merge with Comcast Corporation in $72 Billion Transaction,” AT&T news release, 19 December 2001.
    “Consumer Groups Still Questioning Comcast-AT&T Cable Merger,” Associated Press Newswires, 21 October 2002.

  3. Cabot’s stock price outperformed the market between 1991 and 1995 as it refocused on its core business, for two reasons. On one side of the equation, demand for carbon black increased in Asia and North America as car sales surged, thereby increasing the demand for tires. On the supply side, two other American-based producers of carbon black exited the industry because they were unwilling to make the requisite investment in environmental controls, thereby increasing Cabot’s pricing power. Increased demand and reduced supply translated into a tremendous increase in the profitability of Cabot’s traditional carbon black operations, which was reflected in the company’s stock price. Between 1996 and 2000, however, its stock price deteriorated again, reflecting the dearth of growth prospects.

  4. An important study of companies’ tendency to make investments that fail to create growth was done by Professor Michael C. Jensen: “The Modern Industrial Revolution, Exit, and the Failure of Internal Control Systems,” Journal of Finance (July 1993): 831–880. Professor Jensen also delivered this paper as his presidential address to the American Finance Association. Interestingly, many of the firms that Jensen cites as having productively reaped growth from their investments were disruptive innovators—a key concept in this book.
    Our unit of analysis in this book, as in Jensen’s work, is the individual firm, not the larger system of growth creation made manifest in a free market, capitalist economy. Works such as Joseph Schumpeter’s The Theory of Economic Development (Cambridge, MA: Harvard University Press, 1934) and Capitalism, Socialism, and Democracy (New York: Harper & Brothers, 1942) are seminal, landmark works that address the environment in which firms function. Our assertion here is that whatever the track record of free market economies in generating growth at the macro level, the track record of individual firms is quite poor. It is the performance of firms within a competitive market to which we hope to contribute.

  5. This simple story is complicated somewhat by the market’s apparent incorporation of an expected “fade” in any company’s growth rate. Empirical analysis suggests that the market does not expect any company to grow, or even survive, forever. It therefore seems to incorporate into current prices a foreseen decline in growth rates from current levels and the eventual dissolution of the firm. This is the reason for the importance of terminal values in most valuation models. This fade period is estimated using regression analysis, and estimates vary widely. So, strictly speaking, if a company is expected to grow at 5 percent with a fade period of forty years, and five years into that forty-year period it is still growing at 5 percent, the stock price would rise at rates that generated economic returns for shareholders, because the forty-year fade period would start over. However, because this qualification applies to companies growing at 5 percent as well as those growing at 25 percent, it does not change the point we wish to make; that is, that the market is a harsh taskmaster, and merely meeting expectations does not generate meaningful reward.
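    The arithmetic behind this point can be sketched with a simple constant-growth valuation (our own illustration with hypothetical numbers; it is not the fade-adjusted model described above):
    ```python
    # A minimal sketch, using the Gordon growth model with hypothetical numbers, of why
    # a company that merely delivers the growth the market already expects earns its
    # shareholders only the discount rate rather than an above-market return.
    discount_rate = 0.10     # assumed cost of equity
    expected_growth = 0.05   # growth rate already impounded in today's price
    cash_flow_next_year = 100.0

    price_today = cash_flow_next_year / (discount_rate - expected_growth)        # 2000.0
    price_in_one_year = (cash_flow_next_year * (1 + expected_growth)
                         / (discount_rate - expected_growth))                    # 2100.0

    total_return = (price_in_one_year + cash_flow_next_year) / price_today - 1
    print(f"Return from merely meeting expectations: {total_return:.1%}")        # 10.0%
    ```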

  6. On average over their long histories, of course, faster-growing firms yield higher returns. However, the faster-growing firm will have produced higher returns than the slower-growing firm only for investors in the past. If markets discount efficiently, then the investors who reap above-average returns are those who were fortunate enough to have bought shares in the past when the future growth rate had not been fully discounted into the price of the stock. Those who bought when the future growth potential already had been discounted into the share price would not receive an above-market return. An excellent reference for this argument can be found in Alfred Rappaport and Michael J. Mauboussin, Expectations Investing: Reading Stock Prices for Better Returns (Boston: Harvard Business School Press, 2001). Rappaport and Mauboussin guide investors in methods to detect when a market’s expectations for a company’s growth might be incorrect.

  7. These were the closing market prices for these companies’ common shares on August 21, 2002. There is no significance to that particular date: It is simply the time when the analysis was done. HOLT Associates, a unit of Credit Suisse First Boston (CSFB), performed these calculations using proprietary methodology applied to publicly available financial data. The percent future is a measure of how much of a company’s current stock price can be attributed to current cash flows and how much is due to investors’ expectations of future growth and performance. As CSFB/HOLT defines it,
    The percent future is the percentage of the total market value that the market assigns to the company’s expected future investment. Percent future begins with the total market value (debt plus equity) less that portion attributed to the present value of existing assets and investments and divides this by the total market value of debt and equity.
    CSFB/HOLT calculates the present value of existing assets as the present value of the cash flows associated with the assets’ wind down and the release of the associated nondepreciating working capital. The HOLT CFROI valuation methodology includes a forty-year fade of returns equal to the total market’s average returns.
    Percent Future = [Total Debt and Equity (market) – Present Value Existing Assets]/[Total Debt and Equity (market)]
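    Expressed as code, the ratio defined above looks like the following (a minimal sketch with hypothetical inputs; HOLT’s actual wind-down and fade calculations are proprietary):
    ```python
    def percent_future(total_debt_and_equity_market: float,
                       pv_existing_assets: float) -> float:
        """Share of total market value that the market assigns to expected future investment."""
        return (total_debt_and_equity_market - pv_existing_assets) / total_debt_and_equity_market

    # Hypothetical example: $50 billion total market value of debt plus equity,
    # of which $15 billion is the present value of existing assets.
    print(f"Percent future: {percent_future(50e9, 15e9):.0%}")  # -> 70%
    ```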
    The companies listed in table 1-1 are not a sequential ranking of Fortune 500 companies, because some of the data required to perform these calculations were not available for some companies. The companies listed in this table were chosen only for illustrative purposes, and were not chosen in any way to suggest that any company’s share price is likely to increase or decline. For more information on the methodology that HOLT used, see <http://www.holtvalue.com>.

  8. See Stall Points (Washington, DC: Corporate Strategy Board, 1998).

  9. In the text we have focused only on the pressure that equity markets impose on companies to grow, but there are many other sources of intense pressure. We’ll mention just a couple here. First, when a company is growing, there are increased opportunities for employees to be promoted into new management positions that are opening up above them. Hence, the potential for growth in managerial responsibility and capability is much greater in a growing firm than in a stagnant one. When growth slows, managers sense that their possibilities for advancement will be constrained not by their personal talent and performance, but rather by how many years must pass before the more senior managers above them will retire. When this happens, many of the most capable employees tend to leave the company, weakening its ability to regenerate growth.
    Investment in new technologies also becomes difficult. When a growing firm runs out of capacity and must build a new plant or store, it is easy to employ the latest technology. When a company has stopped growing and has excess manufacturing capacity, proposals to invest in new technology typically do not fare well, since the full capital cost and the average manufacturing cost of producing with the new technology are compared against the marginal cost of producing in a fully depreciated plant. As a result, growing firms typically have a technology edge over slow-growth competitors. But that advantage is not rooted so much in the visionary wisdom of the managers as it is in the difference in the circumstances of growth versus no growth.
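    The cost comparison described above can be made concrete with a stylized example (hypothetical numbers, our own illustration):
    ```python
    # Why a new-technology proposal loses in a no-growth firm: its full cost (capital
    # charge plus operating cost) is compared against only the marginal cost of a
    # fully depreciated old plant.
    units_per_year = 10_000_000
    annualized_capital_charge = 20_000_000   # new plant's capital cost, per year
    new_operating_cost_per_unit = 3.00       # new technology
    old_marginal_cost_per_unit = 4.00        # fully depreciated old plant

    new_full_cost_per_unit = annualized_capital_charge / units_per_year + new_operating_cost_per_unit

    print(f"New technology, full cost per unit: ${new_full_cost_per_unit:.2f}")     # $5.00
    print(f"Old plant, marginal cost per unit:  ${old_marginal_cost_per_unit:.2f}") # $4.00
    # In a no-growth firm the $5.00 full cost loses to the $4.00 marginal cost, so the
    # proposal dies -- even though the new technology's operating cost ($3.00) is lower.
    # A growing firm that must add capacity anyway compares full cost against full cost,
    # so adopting the newer technology is easy to justify.
    ```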

  10. Detailed support for this estimate is provided in note 1.

  11. For example, see James Brian Quinn, Strategies for Change: Logical Incrementalism (Homewood, IL: R.D. Irwin, 1980). Quinn suggests that the first step that corporate executives need to take in building new businesses is to “let a thousand flowers bloom,” then tend the most promising and let the rest wither. In this view, the key to successful innovation lies in choosing the right flowers to tend—and that decision must rely on complex intuitive feelings, calibrated by experience.
    More recent work by Tom Peters (Thriving on Chaos: Handbook for a Management Revolution [New York: Knopf/Random House, 1987]) urges innovating managers to “fail fast”—to pursue new business ideas on a small scale and in a way that generates quick feedback about whether an idea is viable. Advocates of this approach urge corporate executives not to punish failures because it is only through repeated attempts that successful new businesses will emerge.
    Others draw on analogies with biological evolution, where mutations arise in what appear to be random ways. Evolutionary theory posits that whether a mutant organism thrives or dies depends on its fit with the “selection environment”—the conditions within which it must compete against other organisms for the resources required to thrive. Hence, believing that good and bad innovations pop up randomly, these researchers advise corporate executives to focus on creating a “selection environment” in which viable new business ideas are culled from the bad as quickly as possible. Gary Hamel, for example, advocates creating “Silicon Valley inside”—an environment in which existing structures are constantly dismantled, recombined in novel ways, and tested, in order to stumble over something that actually works. (See Gary Hamel, Leading the Revolution [Boston: Harvard Business School Press, 2001].)
    We are not critical of these books. They can be very helpful, given the present state of understanding, because if the processes that create innovations were indeed random, then a context within which managers could accelerate the creation and testing of ideas would indeed help. But if the process is not intrinsically random, as we assert, then addressing only the context is treating the symptom, not the source of the problem.
    To see why, consider the studies of 3M’s celebrated ability to create a stream of growth-generating innovations. A persistent highlight of these studies is 3M’s “15 percent rule”: At 3M, many employees are given 15 percent of their time to devote to developing their own ideas for new-growth businesses. This “slack” in how people spend their time is supported by a broadly dispersed capital budget that employees can tap in order to fund their would-be growth engines on a trial basis.
    But what guidance does this policy give to a bench engineer at 3M? She is given 15 percent “slack” time to dedicate to creating new-growth businesses. She is also told that whatever she comes up with will be subject first to internal market selection pressures, then external market selection pressures. All this is helpful information. But none of it helps that engineer create a new idea, or decide which of the several ideas she might create are worth pursuing further. This plight generalizes to managers and executives at all levels in an organization. From bench engineer to middle manager to business unit head to CEO, it is not enough to occupy oneself only with creating a context for innovation that sorts the fruits of that context. Ultimately, every manager must create something of substance, and the success of that creation lies in the decisions managers must make.
    All of these approaches create an “infinite regress.” By bringing the market “inside,” we have simply backed up the problem: How can managers decide which ideas will be developed to the point at which they can be subjected to the selection pressures of their internal market? Bringing the market still deeper inside simply creates the same conundrum. Ultimately, innovators must judge what they will work on and how they will do it—and what they should consider when making those decisions is precisely what remains inside the black box. The acceptance of randomness in innovation, then, is not a stepping-stone on the way to greater understanding; it is a barrier.
    Dr. Gary Hamel was one of the first scholars of this problem to raise with Professor Christensen the possibility that the management of innovation actually has the potential to yield predictable results. We express our thanks to him for his helpful thoughts.

  12. The scholars who introduced us to these forces are Professor Joseph Bower of the Harvard Business School and Professor Robert Burgelman of the Stanford Business School. We owe a deep intellectual debt to them. See Joseph L. Bower, Managing the Resource Allocation Process (Homewood, IL: Richard D. Irwin, 1970); Robert Burgelman and Leonard Sayles, Inside Corporate Innovation (New York: Free Press, 1986); and Robert Burgelman, Strategy Is Destiny (New York: Free Press, 2002).

  13. Clayton M. Christensen and Scott D. Anthony, “What’s the BIG Idea?” Case 9-602-105 (Boston: Harvard Business School, 2001).

  14. We have consciously chosen phrases such as “increase the probability of success” because business building is unlikely ever to become perfectly predictable, for at least three reasons. The first lies in the nature of competitive marketplaces. Companies whose actions were perfectly predictable would be relatively easy to defeat. Every company therefore has an interest in behaving in deeply unpredictable ways. A second reason is the computational challenge associated with any system with a large number of possible outcomes. Chess, for example, is a fully determined game: In principle, its outcome under best play is fixed before the first move. But the number of possible games is so great, and the computational challenge so overwhelming, that the outcomes of games even between supercomputers remain unpredictable. A third reason is suggested by complexity theory, which holds that even fully determined systems that do not outstrip our computational abilities can still generate deeply random outcomes. Assessing the extent to which the outcomes of innovation can be predicted, and the significance of any residual uncertainty or unpredictability, remains a profound theoretical challenge with important practical implications.
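    A standard illustration of the third reason is the logistic map (our example, not the authors’): a fully determined rule whose trajectories from nearly identical starting points quickly become indistinguishable from noise.
    ```python
    # Logistic map x' = r * x * (1 - x) in its chaotic regime (r = 4): two starting
    # points differing by one part in a billion diverge completely within a few
    # dozen iterations, even though every step is fully determined.
    r = 4.0
    x_a, x_b = 0.200000000, 0.200000001

    for step in range(1, 51):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if step % 10 == 0:
            print(f"step {step:2d}: x_a = {x_a:.6f}, x_b = {x_b:.6f}, gap = {abs(x_a - x_b):.6f}")
    ```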

  15. The challenge of improving predictability has been addressed somewhat successfully in certain of the natural sciences. Many fields of science appear today to be cut and dried: predictable and governed by clear laws of cause and effect. But it was not always so: Many happenings in the natural world seemed very random and unfathomably complex to the ancients and to early scientists. Research that adhered carefully to the scientific method brought the predictability upon which so much progress has been built. Even where our most advanced theories have convinced scientists that the world is not deterministic, the phenomena are at least predictably random.
    Infectious diseases, for example, at one point just seemed to strike at random. People didn’t understand what caused them. Who survived and who did not seemed unpredictable. Although the outcome seemed random, however, the process that led to the results was not random—it just was not sufficiently understood. With many cancers today, as in the venture capitalists’ world, patients’ probabilities for survival can be articulated only as percentages. This is not because the outcomes are inherently unpredictable, however; we just do not yet understand the process well enough.

  16. Peter Senge calls theories mental models (see Peter Senge, The Fifth Discipline [New York: Bantam Doubleday Dell, 1990]). We considered using the term model in this book, but opted instead to use the term theory. We have done this deliberately to be provocative, hoping to inspire practitioners to value something that is genuinely valuable.

  17. A full description of the process of theory building and of the ways in which business writers and academics ignore and violate the fundamental principles of this process is available in a paper that is presently under review, “The Process of Theory Building,” by Clayton Christensen, Paul Carlile, and David Sundahl. Paper or electronic copies are available from Professor Christensen’s office, cchristensen@hbs.edu. The scholars we have relied upon in synthesizing the model of theory building presented in this paper (and only very briefly summarized in this book) are, in alphabetical order, E. H. Carr, What Is History? (New York: Vintage Books, 1961); K. M. Eisenhardt, “Building Theories from Case Study Research,” Academy of Management Review 14, no. 4 (1989): 532–550; B. Glaser and A. Strauss, The Discovery of Grounded Theory: Strategies of Qualitative Research (London: Weidenfeld and Nicolson, 1967); A. Kaplan, The Conduct of Inquiry: Methodology for Behavioral Research (Scranton, PA: Chandler, 1964); R. Kaplan, “The Role for Empirical Research in Management Accounting,” Accounting, Organizations and Society 4, no. 5 (1986): 429–452; T. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962); M. Poole and A. Van de Ven, “Using Paradox to Build Management and Organization Theories,” Academy of Management Review 14, no. 4 (1989): 562–578; K. Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959); F. Roethlisberger, The Elusive Phenomena (Boston: Harvard Business School Division of Research, 1977); Arthur Stinchcombe, “The Logic of Scientific Inference,” chapter 2 in Constructing Social Theories (New York: Harcourt, Brace & World, 1968); Andrew Van de Ven, “Professional Science for a Professional School,” in Breaking the Code of Change, eds. Michael Beer and Nitin Nohria (Boston: Harvard Business School Press, 2000); Karl E. Weick, “Theory Construction as Disciplined Imagination,” Academy of Management Review 14, no. 4 (1989): 516–531; and R. Yin, Case Study Research (Beverly Hills, CA: Sage Publications, 1984).

  18. What we are saying is that the success of a theory should be measured by the accuracy with which it can predict outcomes across the entire range of situations in which managers find themselves. Consequently, we are not seeking “truth” in any absolute, Platonic sense; our standard is practicality and usefulness. If we enable managers to achieve the results they seek, then we will have been successful. Measuring the success of theories based on their usefulness is a respected tradition in the philosophy of science, articulated most fully in the school of logical positivism. For example, see R. Carnap, Empiricism, Semantics, and Ontology (Chicago: University of Chicago Press, 1956); W. V. O. Quine, Two Dogmas of Empiricism (Cambridge, MA: Harvard University Press, 1961); and W. V. O. Quine, Epistemology Naturalized (New York: Columbia University Press, 1969).

  19. This is a serious deficiency of much management research. Econometricians call this practice “sampling on the dependent variable.” Many writers, and many who think of themselves as serious academics, are so eager to prove the worth of their theories that they studiously avoid the discovery of anomalies. In case study research, this is done by carefully selecting examples that support the theory. In more formal academic research, it is done by calling points of data that don’t fit the model “outliers” and finding a justification for excluding them from the statistical analysis. Both practices seriously limit the usefulness of what is written. It is actually the discovery of phenomena that the existing theory cannot explain that enables researchers to build better theory upon a better classification scheme. We need to do anomaly-seeking research, not anomaly-avoiding research.
    We have urged doctoral students who are seeking potentially productive research questions for their thesis research to simply ask when a “fad” theory won’t work—for example, “When is process reengineering a bad idea?” Or, “Might you ever want to outsource something that is your core competence, and do internally something that is not your core competence?”
    Asking questions like this almost always improves the validity of the original theory. This opportunity to improve our understanding often exists even for very well done, highly regarded pieces of research. For example, an important conclusion in Jim Collins’s extraordinary book Good to Great (New York: HarperBusiness, 2001) is that the executives of the successful companies he studied weren’t charismatic, flashy men and women. They were humble people who respected the opinions of others. A good opportunity to extend the validity of Collins’s research is to ask a question such as, “Are there circumstances in which you actually don’t want a humble, noncharismatic CEO?” We suspect that there are—and defining the different circumstances in which charisma and humility are virtues and vices could do a great service to boards of directors.
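    A small simulation makes the “sampling on the dependent variable” problem described at the start of this note concrete (our own illustration; the trait and all numbers are hypothetical): a trait that has no effect on success still shows up in most of the winners, so studying winners alone cannot establish that the trait matters.
    ```python
    import random

    random.seed(0)
    # A trait (say, a humble CEO) present in 60% of firms, and success that is
    # statistically independent of it.
    firms = [{"humble_ceo": random.random() < 0.60,
              "succeeded": random.random() < 0.10}
             for _ in range(10_000)]

    winners = [f for f in firms if f["succeeded"]]
    trait_among_winners = sum(f["humble_ceo"] for f in winners) / len(winners)
    trait_among_all = sum(f["humble_ceo"] for f in firms) / len(firms)

    print(f"Trait among successful firms only: {trait_among_winners:.0%}")  # ~60% -- looks 'explanatory'
    print(f"Trait among all firms:             {trait_among_all:.0%}")      # ~60% -- no real association
    ```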

  20. We thank Matthew Christensen of the Boston Consulting Group for suggesting this illustration from the world of aviation as a way of explaining how getting the categories right is the foundation for bringing predictability to an endeavor. Note how important it was for researchers to discover the circumstances in which the mechanisms of lift and stabilization did not result in successful flight. It was the very search for failures that made success consistently possible. Unfortunately, many of those engaged in management research seem anxious not to spotlight instances their theory did not accurately predict. They engage in anomaly-avoiding, rather than anomaly-seeking, research and as a result contribute to the perpetuation of unpredictability. Hence, we lay much responsibility for the perceived unpredictability of business building at the feet of the very people whose business it is to study and write about these problems. We may, on occasion, succumb to the same problem. We can state that in developing and refining the theories summarized in this book, we have truly sought to discover exceptions or anomalies that the theory would not have predicted; in so doing, we have improved the theories considerably. But anomalies remain. Where we are aware of these, we have tried to note them in the text or notes of this book. If any of our readers are familiar with anomalies that these theories cannot yet explain, we invite them to teach us about them, so that together we can work to improve the predictability of business building further.

  21. In studies of how companies deal with technological change, for example, early researchers suggested attribute-based categories such as incremental versus radical change and product versus process change. Each categorization supported a theory, based on correlation, about how entrant and established companies were likely to be affected by the change, and each represented an improvement in predictive power over earlier categorization schemes. At this stage of the process there rarely is a best-by-consensus theory, because there are so many attributes of the phenomena. Scholars of this process have broadly observed that this confusion is an important but unavoidable stage in building theory. See Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962). Kuhn chronicles at length the energies expended by advocates of various competing theories at this stage, prior to the advent of a paradigm.
    In addition, one of the most influential handbooks for management and social science research was written by Barney G. Glaser and Anselm L. Strauss (The Discovery of Grounded Theory: Strategies of Qualitative Research [London: Weidenfeld and Nicolson, 1967]). Although they name their key concept “grounded theory,” the book really is about categorization, because that process is so central to the building of valid theory. Their term “substantive theory” is similar to our term “attribute-based categories.” They describe how a knowledge-building community of researchers ultimately succeeds in transforming their understanding into “formal theory,” which we term “circumstance-based categories.”

  22. Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Boston: Harvard Business School Press, 1997).

  23. Managers need to know if a theory applies in their situation, if they are to trust it. A very useful book on this topic is Robert K. Yin’s Case Study Research: Design and Methods (Beverly Hills, CA: Sage Publications, 1984). Building on Yin’s concept, we would say that the breadth of applicability of a theory, which Yin calls its external validity, is established by the soundness of its categorization scheme. There is no other way to gauge where theory applies and where it does not. To see why, consider the disruptive innovation model that emerged from the study of the disk drive industry in the early chapters of The Innovator’s Dilemma. The concern that readers of the disk drive study raised, of course, was whether the theory applied to other industries as well. The Innovator’s Dilemma tried to address these concerns by showing how the same theory that explained who succeeded and failed in disk drives also explained what happened in mechanical excavators, steel, retailing, motorcycles, accounting software, motor controls, diabetes care, and computers. The variety was chosen to establish the breadth of the theory’s applicability. But this didn’t put concerns to rest. Readers continued to ask whether the theory applied to chemicals, to database software, and so on.
    Applying any theory to industry after industry cannot prove its applicability because it will always leave managers wondering if there is something different about their current circumstances that renders the theory untrustworthy. A theory can confidently be employed in prediction only when the categories that define its contingencies are clear. Some academic researchers, in a well-intentioned effort not to overstep the validity of what they can defensibly claim and not claim, go to great pains to articulate the “boundary conditions” within which their findings can be trusted. This is all well and good. But unless they concern themselves with defining what the other circumstances are that lie beyond the “boundary conditions” of their own study, they circumscribe what they can contribute to a body of useful theory.

  24. An illustration of how important it is to get the categories right can be seen in the fascinating juxtaposition of two recent, solidly researched books by very smart students of management and competition that make compelling cases for diametrically opposite solutions to a problem. Each team of researchers addresses the same underlying problem—the challenge of delivering persistent, profitable growth. In Creative Destruction (New York: Currency/Doubleday, 2001), Richard Foster and Sarah Kaplan argue that if firms hope to create wealth sustainably and at a rate comparable to the broader market, they must be willing to explore radically new business models and visit upon themselves the tumult that characterizes the capital markets. At the same time, another well-executed study, Profit from the Core (Boston: Harvard Business School Press, 2001), by Bain consultants Chris Zook and James Allen, drew upon the same phenomenological evidence—that only a tiny minority of companies are able to sustain above-market returns for a significant time. But their book encourages companies to focus on and improve their established businesses rather than attempt to anticipate or even respond to the vagaries of equity investors by seeking to create new growth in less-related markets. Whereas Foster and Kaplan motivate their findings in terms of the historical suitability of incrementalism in a context of competitive continuity and argue for more radical change in light of today’s exigencies, Zook and Allen hold that focus is timeless and remains the key to success. Their prescriptions are mutually exclusive. Whose advice should we follow? At present, managers grappling with their own growth problems have no choice but to pick a camp based on the reputations of the authors and the endorsements on the dust jacket. The answer is that there is a great opportunity for circumstance-focused researchers to build on the valuable groundwork that both sets of authors have established. The question that now needs answering is: What are the circumstances in which focusing on or near the core will yield sustained profit and growth, and what are the circumstances in which broader, Fosteresque creative destruction is the approach that will succeed?