7. Phase Transitions, II: The Magic Number 150





Edwin Land and the Moses Trap

When leaders anoint the holy loonshot

Imagine this scene: A cavernous warehouse filled with the faithful followers of a wildly popular consumer technology company. The company’s charismatic CEO walks onto the stage, holding the secret new product it has been hinting at for over a year. The crowd quiets as the CEO lifts the product in the air. Behind the stage, assistants who spent weeks preparing for this moment hold their breath. The CEO presses a button. The demonstration works; the crowd goes wild. The product and the CEO make the covers of gushing news magazines: Time declares the product “a stunning technological achievement”; Fortune says it is “one of the most remarkable accomplishments in industrial history.” The CEO promises that the product will transform the industry and become a spectacular hit: “You just can’t stop using it once you start!”

That must be Steve Jobs introducing the iPhone, right? It’s not. It’s Edwin Land, introducing the Polaroid SX-70—the company’s iconic, pyramid-shaped, collapsible, instant-color-print camera—35 years earlier, in 1972. For 30 years, Polaroid scientists produced one Nobel-caliber breakthrough after another. They created new molecules, unlike anything seen before, that achieved the impossible—instant color prints. They invented a new theory of color vision that changed our understanding of the brain. They solved the century-old problem of separating light into its components, technology used in every smartphone display and computer monitor. The company was the glamour stock of its day, reaching new highs every year as rabid fans bought and bought.

And then something changed. The magic faded. Polaroid declined, descended into debt, and eventually filed for bankruptcy.

Juan Trippe began with a small air taxi service and built a large airline empire. Edwin Land began with a hidden property of light and built an empire famous for something completely different. Both empires followed similar cycles to similar ends. Loonshots fed a growing franchise, which in turn fed more loonshots.

But as recently declassified documents show, Land led another life. That life sheds new light on the trap at the end of the cycle—and how to escape it.


A beam of light has three familiar properties: direction, intensity, and color. It also has a hidden fourth property, called polarization. Imagine a drone flying level to the ground. The drone can have wings parallel to the ground, rotated 90 degrees, or at any angle in between. Polarization of light acts like the wings on the drone. A light beam traveling parallel to the floor can be polarized horizontally, vertically, or at any angle in between. Our eyes can’t detect polarization, so we don’t see it.

Although the name of his company eventually became synonymous with something else, Edwin Land built the Polaroid Corporation by inventing remarkable uses for this hidden property of light.

If you are a Star Wars fan, you might remember the asteroid scene in The Empire Strikes Back (1980). TIE fighters are chasing the Millennium Falcon, piloted by Han Solo with Chewbacca and Leia at his side. Han steers into an asteroid field (“Never tell me the odds!”), plunges deep into a big cave on an asteroid, and lands the ship, waiting for the TIE fighters to pass by. The three step out to look around. They quickly realize the “cave” is not quite what they thought. They race back to the Falcon, fire it up, and fly at full speed toward the rapidly closing, heavily fanged jaws of the giant worm (technically, an exogorth), in whose mouth they’d parked. The Falcon is horizontal. The worm’s teeth are vertical. At the last second, Han flips the ship 90 degrees and escapes through the narrow slits between the teeth. The jaws snap shut behind him.

Polarizing filters function like the worm’s teeth: a vertical filter lets through only vertically polarized light. The vertically flipped Falcon passes through; the horizontal Falcon does not.

Land had wanted to make his own polarizer since age 13, when as a summer camp counselor he’d used a block of Icelandic crystal (a natural polarizer) to make the glare from a tabletop disappear. For a century, people had attempted to create a practical polarizer to unlock the mysteries of light, but no one had succeeded. Years later, Land became known for a saying: “Do not undertake a program unless the goal is manifestly important and its achievement nearly impossible.” He began that summer. He slept with a book called Physical Optics underneath his pillow. He read the book “nightly in the way that our forefathers read the Bible.”

At age 17, Land enrolled at Harvard. A few months later he left, bored of being surrounded by wealthy kids with no ambition. Land moved to New York City and convinced his skeptical father to continue his college allowance while he pursued his dream (as part of the bargain, he agreed to enroll for a semester at New York University). He rented a room just outside Times Square, set up a small lab in the basement, and began working round the clock on his idea. Years later, Land said, “There’s a rule they don’t teach you at Harvard Business School: if anything is worth doing, it’s worth doing to excess.” He persisted, but had no luck with his polarizer idea.

In the face of impossible challenges, where do you go? As we saw in the last chapter, the 42nd Street branch of the New York Public Library. There, Land pored over every book on optics he could find, frequently with a young research assistant he had hired named Helen (Terre) Maislen. Just like Trippe, Land found a clue in the back of an old book.

Sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those microscopic crystals, called herapathite, turned out to be the highest-quality polarizers ever discovered. Scientists had tried for decades, starting in the mid-nineteenth century, to grow the crystals and make useful polarizers out of them. But they failed—the tiny crystals are impossibly fragile—and the field eventually gave up. The discovery had been written out of physics textbooks and the Encyclopedia Britannica. Webster’s dictionary listed “herapathite” under “obsolete words.” The graveyard of unexplained experiments, as Land would soon show, is a great place to find a False Fail.

Land came up with a crazy idea: embed millions of those tiny crystals into some kind of goo (he used a nitrocellulose lacquer) and find a way to get them to line up. After a handful of failures, Land decided to try using a magnetic field to line them up, the way a magnet aligns small iron filings. He knew of a high-powered magnet at a physics lab at Columbia University. Since he wasn’t a student and had no privileges at the university, Land snuck into the building, climbed out onto a sixth-floor ledge, and entered the lab through a window. Land had placed a thin layer of his dark crystal-goo mix inside a plastic cell the size of a quarter. As soon as he placed that cell near the magnet, the dark cell turned transparent. The magnet had done the trick—it aligned the miniature crystals, allowing light to shine through: polarized light. Millions of miniature Millennium Falcons streaked toward the plastic cell, but only vertically angled ones could slip through.

It was, he said later, “the most exciting single event in my life.” He had created the first man-made polarizer. He was 19 years old.

The following year, Land returned to Harvard. Two months later, he married Terre. He now had access to a lab—but Terre didn’t; women at that time were not allowed in labs. So Land would sneak Terre into the physics lab to help him with his experiments. Once again, after a short stint, Land grew restless. Within two years, he abandoned the academic world to start what would soon be known as the Polaroid Corporation.


Land’s first big idea was to use his new technology to cut glare from headlights in cars. Headlight glare, at the time, was blamed for thousands of highway fatalities every year. Land realized that coating every car headlight and windshield with a 45-degree filter would allow drivers to see light from their own headlights but not from those of oncoming drivers. To understand why, imagine a child running forward, pretending to be a plane, left arm-wing pointing down to the ground at 45 degrees, right arm-wing pointing to the sky at the same angle. A second child running toward the first, arms tilted the same way, holds them exactly perpendicular to the first child’s (the four arms form an “X”). The cross-polarized light from an oncoming car can’t pass through a driver’s windshield for the same reason that a horizontal ship won’t pass through a vertical slit. Although Land pleaded with automobile manufacturers for two decades, he could never convince them to adopt his idea.

In the meantime, Land discovered a surprising benefit of polarized lenses. Sunlight reflecting off horizontal surfaces—a still lake or a field of snow, for example—tends to be horizontally polarized. Lenses coated with a vertical-slit film block those reflections far more effectively than ordinary, tinted lenses. The results can be dramatic.

In July 1934, while auto manufacturers were debating and declining his headlight idea, Land arranged a meeting with American Optical, a manufacturer of glasses, at the Copley Hotel in Boston. Land arrived early. A guest would have observed a sharply dressed young man with a piercing stare—one early employee described meeting Land for the first time and feeling that Land “could see into my head. It was really a kind of interesting sensation of having your head briefly searched for content.” With his bright eyes, firm jaw, and dark hair slicked and parted, he looked like a movie star. Imagine a young Cary Grant playing the role of an obsessed genius—that’s Edwin Land.

Land arrived at the Copley Hotel carrying a goldfish bowl. He asked the desk clerk for a room with western exposure, facing the setting sun. A journalist described what happened next:

After the bellboy had left, he [Land] placed the bowl on the window sill where it would catch the sun, stood back, inspected it, then moved it so that the reflected glare became more intense. Then he paced nervously and waited for a knock on the door.
As soon as his visitor, an official of the American Optical Company, arrived, he led him to the window and asked him to look into the bowl.
“Do you see any fish?” he said.
The man squinted and shook his head. The reflection from the water was too dazzling.
“Look again,” said the young man, holding before the bowl what appeared to be a sheet of smoky cellophane.
The glare was gone as if by magic, and every detail of the idling fish could be clearly seen. The visitor … was familiar with every kind of sunglass on the market, but he had never seen anything like this.

Land had his first deal. Sailors, pilots, skiers, and other outdoorsmen soon snapped up the new “polarized” sunglasses, Polaroid’s first big hit.

Then the military discovered that eliminating glare from the sun improved gunners’ ability to sight aircraft, tanks, and surfaced submarines. The Army and Navy ordered millions of polarized goggles. During World War II, General Patton appeared on the cover of Newsweek wearing Polaroid goggles. A Life magazine story noted that “every second man in combat” wore them.

The seeds of franchise were growing.

Land soon realized that putting two polarizing filters together produces some striking, and useful, effects. Coat the front of a pair of goggles with a vertically polarizing film; on the back put a polarizer that can rotate inside the frame of the goggles. A tiny handle is attached to that back polarizer, which pokes out of the frame of the goggles at the twelve-o’clock position. When the handle is at twelve o’clock, the two filters line up, and all the light coming in the front goes through the back. But as you rotate the back polarizer, by sliding the handle through ninety degrees toward the three-o’clock position, less and less light makes it through. At exactly ninety degrees—when the front filter is vertical and the back filter is horizontal—no light gets through. Adjustable-shade goggles, which allowed pilots to quickly adjust from low-light to bright-light conditions, were another big Polaroid hit.
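The dimming of those adjustable goggles follows Malus’s law from standard optics (the chapter doesn’t name it): the fraction of polarized light that passes a second filter rotated by an angle θ from the first is cos²θ. A minimal sketch:

```python
import math

def transmitted_fraction(offset_deg):
    """Malus's law: fraction of polarized light that passes a second
    polarizer rotated offset_deg from the first (standard optics,
    not stated in the text)."""
    return math.cos(math.radians(offset_deg)) ** 2

# Sliding the handle from twelve o'clock (0 deg) to three o'clock (90 deg):
for handle in (0, 30, 45, 60, 90):
    print(handle, round(transmitted_fraction(handle), 3))
# 0 deg -> 1.0 (filters aligned), 45 deg -> 0.5, 90 deg -> 0.0 (crossed)
```

At twelve o’clock the filters align and all the light passes; at three o’clock they are crossed and none does, matching the goggles described above.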

Today, if you use a laptop or smartphone or watch something on an LCD screen, you are using a variation of this trick, with a twist, all made possible by Edwin Land’s invention.


Think of a barn with sliding doors on opposite-facing sides. The back doors slide down from the roof and up from the ground, meeting in the middle, and are closed to a horizontal slit. The front doors slide from the left and from the right and are closed to a vertical slit in the middle. A drone flies through the back opening with its wings horizontal, rotates ninety degrees inside the barn, then flies out through the front opening with its wings vertical.

Now suppose the barn came with a switch. Turning on the switch jams any electronics. Drones can’t rotate while inside the barn. Any drone flying through the horizontal slit in the back will stay horizontal and crash into the front door. No drone can get through.

LCD pixels work just like those barns.

The back of a pixel on an LCD display screen has a horizontal filter. The front has a vertical filter. Unlike the drone, light traveling through empty space cannot rotate its polarization on its own. It needs help. So pixels are filled with a special kind of goo called a liquid crystal, made of billions of microscopic rods, like tiny toothpicks—just like Land’s original polarizer. But in this case, the goo is sandwiched between the pixel’s horizontal-filter back door and vertical-filter front door. The toothpicks automatically line up horizontally next to the back and vertically next to the front. In between, they form a kind of twisted, quarter-turn spiral staircase, which connects the back and front. The spiral staircase does the work of rotating the light. Light enters through the horizontal opening in the back, travels through the staircase, its polarization rotating by a quarter turn, then flies out the vertical opening in the front and into your eyes. Just like the drone streaking through the barn.

Each pixel, however, comes with a tiny digital switch. Turning the switch on fires up a tiny electric field that scrambles the toothpicks and crashes the spiral staircase. No light can get through. The pixel goes dark. Turning the switch off restores the spiral staircase. The pixel lights up. And there you have it: a digitally controlled on/off light pixel.
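The on/off pixel described above can be caricatured in a few lines. This is a toy binary model under the stated assumptions (horizontal back filter, vertical front filter, a quarter-turn twist when the field is off); real pixels also produce intermediate gray levels by partially scrambling the staircase.

```python
def lcd_pixel(switch_on: bool) -> float:
    """Toy model of the twisted-nematic pixel described above.
    Back filter is horizontal (0 deg); front filter is vertical (90 deg)."""
    polarization = 0.0            # light enters horizontally polarized
    if not switch_on:
        polarization += 90.0      # intact spiral staircase rotates it a quarter turn
    # Light exits only if its polarization matches the vertical front filter.
    return 1.0 if polarization == 90.0 else 0.0

assert lcd_pixel(switch_on=False) == 1.0   # staircase intact: pixel lit
assert lcd_pixel(switch_on=True) == 0.0    # field scrambles the goo: pixel dark
```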

The original iPhone screen squeezed in 320 of these digital pixels across and 480 pixels down. Today’s smartphone screens and high-definition TVs are made with more than two million pixels.
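The arithmetic behind those counts: 320 by 480 for the original iPhone is given in the text; 1920 by 1080 (standard Full HD) is my assumption for what “more than two million pixels” refers to.

```python
# Original iPhone screen (dimensions given in the text):
iphone_1 = 320 * 480      # 153,600 pixels
# A 1080p Full HD panel (standard resolution, assumed, not from the text):
full_hd = 1920 * 1080     # 2,073,600 pixels -> "more than two million"
print(iphone_1, full_hd, full_hd // iphone_1)
```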

I mentioned at the start of the chapter that our eyes can’t detect polarization. It turns out that many people can train their eyes to pick up one subtle signal. If you look at a white area on an LCD monitor and rotate your head, you may see a small, faint yellow hourglass shape appear and then fade. That image, a weird optical effect known as Haidinger’s brush, comes from a tiny sensitivity in the back of our eyes to polarized light.

LCDs use polarized light and two filters to create on/off light pixels

Land’s polarizing filters gave rise not only to smart displays and strange tricks, but also to a technology that excited, oddly, both artists and the military. That discovery steered Land toward Polaroid’s most famous invention, as well as a 30-year journey that would become the ultimate example of the Moses Trap.


In the 1920s and 1930s, Clarence Kennedy, an art history professor at Smith College, an all-women’s school in western Massachusetts, produced haunting photographs of sculptures, especially Italian masterpieces. Some described the pictures as more beautiful than the originals. Kennedy cataloged famous collections and advised museums in New York, Boston, and San Francisco. Cities in Italy hired him to restore old monuments (when the Allies began their invasion of Italy in World War II, the US bomber command turned to Kennedy for a list of monuments to avoid). He was a perfectionist, according to a colleague, “but not one of those that irritate.”

In the 1930s, Kennedy became obsessed with improving the technology of sculpture photography. Could a two-dimensional image capture the beauty and depth of a three-dimensional form? He spoke with scientists at Eastman Kodak, the dominant photography company of the day. They directed him to a young inventor in Boston whose reputation, based on a new polarizing filter he had just invented, was rapidly growing.

Land quickly realized that his polarizing filters offered a surprising solution to Kennedy’s problem, a solution inspired by a childhood toy. As a boy, Land had played with stereoscopes. Peering into the small, binocular-like devices transported you into a magical world of three-dimensional boats and bridges and caves, where you would “hear the dripping water, smell the dampness, fear the darkness as you sat with your legs crossed under you on the chair in the dear old library.”

Stereoscopes create those worlds by presenting each eye with a slightly different image. Our brains use the differences between the images in each eye to reconstruct depth: the three-dimensional shape of a sculpture, for example. We perceive ordinary photographs as flat because both of our eyes see exactly the same image. Land realized that to “see” Kennedy’s sculpture photography in three dimensions, he would just need to provide each eye with a snapshot taken from a slightly different angle. And he could do that by using his favorite hidden property of light.
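The depth reconstruction our brains perform can be quantified with the standard pinhole-stereo relation Z = f·B/d from textbook stereo vision (the chapter doesn’t state the formula); the numbers below are hypothetical.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Standard pinhole-stereo relation: depth Z = f * B / d,
    where B is the separation between viewpoints, f the focal length
    in pixels, and d the shift (disparity) between the two images."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: viewpoints 6.5 cm apart (about eye spacing),
# focal length 1000 px. A point that shifts 20 px between the views:
z = depth_from_disparity(0.065, 1000, 20)   # 3.25 m away
```

Nearby points shift more between the two views than distant ones, which is exactly the difference the brain, or a stereoscope viewer, turns into depth.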

First, Land invented a method for fusing two polarized images—one vertically polarized, one horizontally polarized—onto one print. He then made inexpensive glasses with vertical polarizers on one lens, horizontal polarizers on the other. The left eye would see the first image; the right eye, the second. Demonstrating the technique at an optical society meeting held not long afterward in the middle of a presidential campaign, Land projected a fuzzy image on the screen. He asked the audience to put on the special Polaroid glasses, and then he asked Democrats to close their left eye and Republicans to close their right eye. Each group saw their candidate.

Next, Land asked Kennedy for a sculpture to photograph. Land took one picture, moved the camera over a few inches, and then took another. The shift in camera angle captured the difference between what our eyes would see. Land made one image vertically polarized, the other horizontal, and then fused them into one print. When a viewer put on his special polarized glasses, the flat print would burst off the page into glorious three-dimensional form. Land called his new system the vectograph.

In Washington, DC, shortly after his first meeting with FDR, Vannevar Bush heard about Land’s vectograph. Within a year, the Army and Navy were using 3D terrain maps to prepare for battles in Europe. Planes would fly over fields and landing beaches and take pictures a quarter mile apart. With the fused prints, soldiers could see trees or ditches they could use for cover, the contours of hills they would need to climb, and even the fake shadows painted as camouflage on enemy factories.

Audience for the 3D movie Bwana Devil (1952)

The technology was likely the first, and possibly the only, example of an art history project weaponized for military use.

Land’s 3D still images were soon converted for use in film, which turned into a craze. (At its peak, in 1953, Polaroid was making six million pairs of 3D glasses per week.) Although the novelty of early, low-quality 3D movies wore off, today’s 3D films use the same core science Land developed in 1940.

Kennedy’s influence on Land and Polaroid continued after 3D photography. He helped grow Land’s interest in the art world. Kennedy introduced Land to Ansel Adams, who became a close Polaroid advisor and friend to Land’s family, as well as Andy Warhol, Robert Mapplethorpe, Chuck Close, and many others. The art world endorsements added a dash of glamour to the technology, much like Lindbergh and the color spreads of celebrities flying jet planes did for Juan Trippe and Pan Am.

Kennedy also contributed one more unusual idea: recruiting Smith College art history majors. Few companies hired women for technical positions in the 1940s and 1950s. Fewer still recruited art history majors and trained them. Kennedy encouraged Land to break both taboos, which became a great advantage for the company; decades before the idea became popular, both Kennedy and Land understood that diversity enhanced creativity. One of Polaroid’s most critical technology breakthroughs came from a harpsichord-playing art history graduate from Smith named Meroë Morse, who rose to lead a major research lab for Land. (Morse and Land grew close. A biographer wrote that when Morse died, unmarried, after 20 years at the company working closely with Land, Land “lost a soul mate, a work mate, and a protector. His most severe quarrels with the technical and non-technical sides of his company sprang up after she was gone.”)

Meroë Morse

But Kennedy’s singular contribution to the history of business and technology, aside from inspiring Land’s interest in 3D images and workplace diversity, was to turn Land’s attention to photography.


In December 1943, on a family vacation in Santa Fe, Land went for a stroll with his three-year-old daughter Jennifer. After he snapped some photos of her, she asked him, “Why can’t I see them now?” Startled by the question, Land sent Jennifer to her mother. He continued his walk alone, thinking through the problem, turning over the question in his mind, applying insights he had learned from developing 3D photography. Thirty years later, he recalled the history of his invention to an audience of scientists and engineers: “Strangely, by the end of that walk, the solution to the problem [of instant photography] had been pretty well formulated. I would say that everything had been, except those few details that took from 1943 to 1972.”

Land and daughter

In traditional film photography, particles of light, called photons, land on film, leaving microscopic residues—a chemical memory. Think of small asteroids striking the surface of the moon, leaving tiny craters. Soaking the film in a developer enhances those residues a billionfold until the familiar negative emerges. It’s a negative image because the residues, where the light fell, are dark. To reverse the image and create the usual positive print, you shine light through the film onto white paper; a dark spot becomes white and a white spot becomes dark. Land’s insight was to combine those two steps, by developing the negative and the positive at the same time, inside the camera, using an ingenious chemical trick.
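The negative-to-positive reversal Land collapsed into one step is, numerically, just inversion: every tone maps to its complement. A minimal grayscale sketch (the sample values are made up):

```python
def to_positive(negative_row):
    """Invert an 8-bit grayscale negative: where light fell (dark on
    the negative) becomes bright on the print, and vice versa."""
    return [255 - v for v in negative_row]

negative = [0, 64, 128, 255]        # dark ... light
print(to_positive(negative))        # [255, 191, 127, 0]
```

Inverting twice returns the original values, which is why printing a negative of a negative recovers the scene.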

In Polaroid’s instant photography, negative and positive print layers are joined together, like a sandwich, inside the camera, separated by less than one-hundredth of an inch. Attached to the bottom of the sandwich is a small, sealed sac of developing fluid, called a pod. Exiting the camera, the pod passes through a roller, which breaks the sac. Fluid spreads evenly in the thin space between the two layers. The chemistry of that fluid is such that unexposed molecules on the negative, which are light, are suctioned across the thin gap and become dark. The exposed molecules on the negative stay put. Within 60 seconds, the two layers can be pulled apart—presto, an instant print. Jennifer has her photograph.

That “presto,” of course, required inventing dozens of technologies and conducting thousands of experiments, the vast majority of which failed—dozens of False Fails and Three Deaths. Land’s instructions to take on only those problems that are manifestly important and nearly impossible were his version of “It’s not a good drug unless it’s been killed three times.”

The first to be assigned experiments was Doxie Muller, one of Clarence Kennedy’s art history recruits. Land called her every morning at 6:30 a.m. to go over projects for the day. He would review her reports every night. Predawn calls from Land were common. “I had an idea about that problem we’ve been working on,” he would say. “Would you come in and meet me at five?” Another art-history-major-turned-chemist installed a separate phone line in her kitchen: “When the red phone rang, I’d look around to see if my children were killing themselves, and if not, I’d pick it up.”

Two years later, in early 1946, results looked promising, but Land felt experiments were moving too slowly. He announced to his team that Polaroid would demonstrate a working camera to the press and industry at the February 21, 1947, Optical Society meeting in New York. His horrified senior team objected; hundreds of technical hurdles remained. Land dismissed the objections. They would present a finished camera in February. The team found another gear.

Land’s deadline was about more than injecting urgency into a project team. After Land decided to withdraw from military contract work at the end of the war, sales plummeted from $17 million in 1945 to less than $5 million in 1946, and looked to be less than half that in 1947, threatening the survival of the company. One senior executive recalled, “There was very little income and lots more outgo.” Land had bet the company on instant photography.

On February 20, the day before the Optical Society meeting, snow started falling in New York at 4:30 p.m. By the morning it had grown into the largest blizzard in six years. Most of the city had shut down; events all along the East Coast had been canceled. Land and his team waited anxiously to see if the truck from Boston with the camera would make it through in time. It did, barely.

Edwin Land unveils the first instant-print picture

The team quickly assembled the camera for the afternoon presentation. After Land made a brief introduction, he asked the president of the society to come up to the stage. Land aimed, pressed the trigger, peeled apart the layers, and revealed the instant print.

“Everyone went wild,” one observer recalled. Scientific American described the technology as “one of the greatest advances in the history of photography.” The New York Times ran a long feature together with an accompanying editorial announcing that all prior photographic inventions were “crude compared with what Mr. Land has done.”

That same day, at a special session for the press, Land snapped a self-portrait from the neck up with the new camera. He peeled the print and held it next to his face. At eight by ten inches, it was nearly life size. The Times story led with a two-column photo, the inventor staring into the distance, unsmiling, jaw set. His disembodied head stares sadly out of the page at you, the reader. The haunting image was reprinted endlessly.

William Wegman and Andy Warhol Polaroid photography

Life: The SX-70

Polaroid sales grew from just under $1.5 million in 1948 to $1.4 billion in 1978. For 30 years, Polaroid dominated instant print as Pan Am dominated international travel: by delivering spectacular breakthroughs, year after year, which delighted customers. In both cases, a master P-type innovator at the top fueled those loonshots, which grew the franchise, which, in turn, fueled more loonshots. The wheel kept on turning. The dangerous virtuous cycle spun faster and faster.

Polaroid followed the first sepia prints in 1947 with black and white (1950); automatic exposure (1960); instant color (1963); non-peel-apart film (1971); the SX-70 all-in-one, foldable camera (1972); sonar auto-focus (1978); and countless other advances in between. For anyone interested in technology, the stories of these inventions are fascinating. To achieve instant color printing, for example, Land and his team invented a new molecule. As a side project, stimulated by a chance observation in the lab, Land invented a new theory of color vision, now called color constancy, which explains why we will see red apples as red even as the color of the light they reflect changes. Land seemed to produce one or two discoveries per year that others would be thrilled to see in a lifetime. One admiring scientist wrote, “Nobel Prizes have been given for less.”

With the improving technology came respectability. Instant-print photography, initially considered a toy by serious artists, grew into a new art form. Ansel Adams’s 1974 exhibit at the Metropolitan Museum of Art included twenty Polaroid prints. He used Polaroid for both the first presidential photographic commission (Jimmy Carter) and his El Capitan masterpiece. William Wegman’s dogs, Andy Warhol’s pop, Chuck Close’s faces: all were Polaroids.

The technology created not only new art but also new markets. Couples realized their prints would not be seen by technicians at developer labs. And so was born what Polaroid delicately called “intimacy” pictures. Polaroid’s growth was helped by a surge in demand for those intimacy pictures, just as years later the internet’s rapid growth would be fueled by pornography.

Regardless of the source of the demand, investors rewarded the growing revenues. Wall Street analysts routinely announced Polaroid’s shares were overvalued. But the price just kept going up. Fans kept buying and believing.

And then, like Juan Trippe and the 747, the master innovator at the top, creating and anointing loonshots, turned the wheel one too many times.


In 1888, Thomas Edison wrote, “I am experimenting upon an instrument which does for the Eye what the phonograph does for the Ear.” A few years later he used this motion-picture instrument to produce the first American short films. (The shorts included cats in a boxing ring, establishing an enduring principle of human nature: cat videos are always funny.) For the next hundred or so years, movie film was developed more or less like photographic film. A 35-millimeter movie camera captures 24 frames per second onto a film reel negative. The negative is processed in a lab. The biggest difference is that movie film is converted into transparencies through which we project light rather than the familiar solid prints we hold in our hands.

In the mid-1960s, Land began thinking about extending his instant-print technology to film. Instead of one image, over a thousand images would need to be processed, instantaneously, without error, for every 60 seconds of film. The process would require reinventing the chemistry of color development and film transparency. A manufacturing plant with entirely new equipment would need to be built on a massive, commercial scale. Several years earlier Land had said, “Do not undertake a program unless the goal is manifestly important and its achievement is nearly impossible.” The science and technology behind polarizing filters, instant print, and instant color had all seemed nearly impossible when Land dived in. This new challenge was exactly the kind of P-type loonshot worthy of his mind and energy. And so Land launched what became a ten-year, half-billion-dollar project to create instant-print movies.

At the 1977 Polaroid shareholders’ meeting in Needham, Massachusetts, surrounded by mimes and dancers, in a performance that a Wall Street Journal reporter wrote deserved an Academy Award, Land introduced the world to Polavision, announcing “the first public demonstration of a new science, a new art, and a new industry … a second revolution in photography.” A long-haired dancer in a white sailor suit, with red hat and scarf, emerged on stage and gradually began dancing. Land grabbed a small, elegant movie camera—24 ounces, about the size of a hardcover book—by its angled grip and began to film. After about a minute, he popped out a cassette and inserted it into a rectangular box with a 12-inch screen at one end, called the Polavision player. The player simultaneously rewound and developed the film. Ninety seconds later, the dancer appeared on the screen.

You have to pause to appreciate this, even today in the twenty-first century: processing an entire film negative, thousands of images, inside a consumer tabletop device, without error, while it rewinds, in 90 seconds.

Technology magazines raved: “The company that seems to specialize in turning impossible concepts into hardware has done it again,” wrote Popular Science. “The screen suddenly lit up and—to my astonishment—I saw the film I had just made,” wrote Popular Mechanics. “No Hollywood ‘rushes’ had ever reached the projection room faster. And no motion picture had ever before been shown without first going to a developing lab.” The Washington Post wrote, “For the remarkable Land … Polavision may well be the highlight of his career.”

The new plant produced over 200,000 Polavision machines. The film assembly line began cranking out cassettes. Andy Warhol shot Polavision shorts at his celebrity parties. John Lennon and Yoko Ono made a Polavision home movie with their son Sean. National marketing began in the spring of 1978.

So why haven’t you heard of Polavision? Because within a year, the product was dead. Customers weren’t buying it. The resolution and quality were superior to magnetic videotape, and the camera, like the SX-70, was a beautiful, lustworthy machine. But customers didn’t need that extra resolution for home movies. The elegant design could not overcome the convenience of alternatives. Videotapes and Super 8 film were cheaper, easier, and—in the case of videotape—erasable. With tape, you could record over last week’s scene of a cat coughing up a hairball. With instant-print film, you owned a glorious rendering of that scene, in beautiful artistic detail, and could watch it, in real time or surreal slow motion, over and over. But afterward, you had to buy more film, which was expensive. The Polavision camera would set you back close to $2,500 in 2018 dollars, and each three-minute cassette would cost an additional $30. It was too much.

One Wall Street analyst summarized, “This is a product that has much more scientific and aesthetic appeal than commercial significance.”

In 1979, Polaroid’s accounting firm insisted all unsold Polavision inventory be written off as a loss. It was the public-company equivalent of raising the white flag. Land objected ferociously. The auditors’ statement that “the marvelous result of scientific research embedded in Polavision had no utility,” Land said, “was accounting jargon, a cruel misuse of language.” The board of directors, of course, followed the accountants’ recommendation.

Shortly afterward, Polaroid shut down Polavision production permanently. The total project cost for that final year came to just over $200 million. At the urging of the board a few months later, Land resigned as CEO, although he stayed on as head of research. Two uncomfortable years later, he resigned that position as well. He sold his shares and cut all ties to the company he’d founded.

As at Pan Am, a revolving door of subsequent CEOs tried to restore the company’s image and edge, to catch up to loonshots nurtured by others. In the case of Pan Am, those were S-type loonshots: new strategies to lower costs or increase revenue per seat. In the case of Polaroid, they were P-type loonshots: video camcorders, home inkjet printers, and, of course, digital photography. As with Pan Am, it was too late.


Traditional photography exploits a chemical reaction. When enough photons (light particles) strike the silver molecules in film, the molecules change form. That creates a chemical memory of where photons landed. Under certain special conditions, however, when a photon lands, it can pop an electron out of an atom (the photoelectric effect). The loose electron can be trapped right where the photon landed, like catching a firefly in a jar. Trapped electrons signal their presence with voltage. The voltage forms an electrical memory of where photons landed.

In 1969, a small team at Bell Labs created a grid of pixels with just the right conditions to trap electrons popped out of atoms by photons. It was a microscopic grid of jars catching fireflies. They called it a CCD chip. The chips turned out to be up to a hundred times more sensitive than film. Within a few years, astronomers were using CCD chips to image distant stars. The first commercial cameras using CCDs, for businesses and professionals, appeared in the 1970s. The first consumer digital cameras using CCDs appeared in the mid-1980s.
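The “grid of jars catching fireflies” can be sketched as a toy simulation. This is purely illustrative (the grid size, photon count, and trap probability are invented numbers, not Bell Labs’ actual device physics): each photon lands on a random pixel well, may knock an electron loose, and the trapped charge forms an electrical memory of where the light fell.

```python
import random

def expose(width, height, photons, trap_probability=0.7):
    """Toy CCD exposure: a grid of 'jars' (pixel wells), each
    trapping electrons knocked loose by incoming photons."""
    wells = [[0] * width for _ in range(height)]
    for _ in range(photons):
        # Each photon strikes one pixel at random.
        x, y = random.randrange(width), random.randrange(height)
        # Photoelectric effect: the photon may pop an electron out
        # of an atom; the electron is trapped where the photon landed.
        if random.random() < trap_probability:
            wells[y][x] += 1
    # The pattern of trapped charge is an electrical memory of the light.
    return wells

image = expose(width=4, height=4, photons=1000)
total = sum(sum(row) for row in image)
print(total)  # roughly 700: most, but not all, photons leave a trapped electron
```

Reading out those per-pixel charge counts, row by row, is essentially what turns the trapped electrons into a digital picture.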

Polaroid eventually introduced a digital camera in 1996, a decade after similar cameras appeared from Sony, Canon, Nikon, Kodak, Fuji, Casio, and others. It was too late. In 2001, Polaroid filed for bankruptcy.

On the surface—a very public surface—it seems like a brilliant but aging entrepreneur was blindsided by a loonshot: digital photography.

But that’s not quite what happened.

From 2011 to 2015, the US National Reconnaissance Office declassified a trove of documents related to spy satellites. The documents reveal a top-secret, high-stakes drama centered on imaging technology. Well before the first astronomers began using CCDs, before the first commercial CCD cameras, and before Sony and Kodak even began thinking about a consumer market, one person convinced the president of the United States, over the unanimous opposition of his senior military and political advisors, to invest in digital spy satellites. That person was Edwin H. Land.

It was the threat of nuclear war that drew Land into government service. In 1949, the Soviet Union detonated its first atomic bomb. One year later the Cold War turned hot when North Korean forces, backed by the Soviets, invaded South Korea, which was supported by the US. Fears of a nuclear World War III escalated. Shortly after taking office in 1953, President Eisenhower assembled an expert panel, led by the president of MIT, James Killian, to study the possibility of a surprise Soviet attack with nuclear missiles. The panel quickly concluded that the country lacked hard data on Soviet capabilities—missiles, bases, troop movements—and urgently needed new means to acquire that data. The panel needed someone who could not only advise on state-of-the-art imaging science but also anticipate or even design technologies that did not yet exist. Someone who had the strength of personality to challenge generals.

Strength of personality was not a problem for Land. He was quickly selected.

In 1954, Land proposed to Eisenhower the idea of a one-man aircraft with a powerful camera that would fly high and fast. He helped select the camera technology (Itek, Kodak) and the aircraft (Lockheed) for what became the world’s first spy plane—the U-2. The plane would play a critical role throughout the Cold War. It was U-2 pictures, for example, that identified Russian missiles in Cuba in 1962.

In 1957, Land and Killian proposed a new idea to Eisenhower. They were concerned about the risks of flying manned aircraft over hostile territory. (The concern was prophetic. In 1960, the Soviets shot down a U-2 plane flying over Russia and captured its pilot.) Land and Killian proposed that instead of flying manned aircraft over enemy land, the country should develop and deploy satellites carrying cameras with giant telescopic lenses pointed at the earth.

Snapping photos in space sounds like a good idea—but how would the photos get back to earth? Land and Killian recommended a system in which the satellites would eject the exposed film in canisters attached to a parachute. Air Force pilots would then fly planes with hooks that fished those canisters out of the sky.

Eisenhower green-lit the program. The president also accepted Land and Killian’s recommendation for a new agency, the National Reconnaissance Office, to be jointly managed by the Air Force and CIA.

As the Cold War and Soviet expansion continued, a limitation of the satellite program became increasingly clear. On August 20, 1968, the Soviet Union invaded Czechoslovakia. Satellite film clearly showed a large buildup of Soviet tanks and aircraft on the border before the invasion. But it was old news by the time the Air Force had retrieved the film: the invasion was already over.

Richard Nixon, elected in November, made it clear to his staff that he wanted real-time, not weeks-old, imaging, and he wanted it available by “his second term in office.” The scramble rapidly devolved into a bitter battle.

On one side: nearly the entire military leadership and most cabinet members, including the secretary of defense (Melvin Laird), his deputy (David Packard), the head of defense engineering (John Foster), the secretary of the Air Force (Robert Seamans), as well as a future secretary of defense (James Schlesinger) and secretary of state (George Shultz). The military advocated for an incremental solution: add scanners, like fax machines, to existing film satellites. Cameras would take pictures using ordinary film. Those photos would be scanned onboard and transmitted down to stations on earth. To them, the idea of filmless digital photography, using a CCD chip, was too far-fetched, too uncertain. Too much of a loonshot.

On the other side: Edwin Land.

Not surprisingly, the military was winning. By the spring of 1971, a $2 billion program for scanning films inside satellites was gaining steam. At a meeting of the president’s intelligence advisory board in April 1971, Land addressed the president directly. He told Nixon that the film-scanner idea was a “cautious step” and that digital technology was a “quantum jump which would give the US an unquestioned technological lead in this field.” Land said that bureaucrats “were unwilling to assume large financial risks without strong presidential backing.” He explained why digital would work, why the risks were manageable, and why the program was superior to the generals’ proposal.

Let’s pause: this was the spring of 1971. The first papers describing CCDs had been published only months earlier. Sony (which produced the first commercial digital camera), Canon, and Nikon had not even begun to work on digital photography. Land was advocating for digital before all of them.

In September, Nixon’s national security advisor, Henry Kissinger, informed all parties that the president had decided to proceed with Land’s quantum-jump solution. The military’s $2 billion program would be terminated. (A National Reconnaissance Office historian attributed Land’s eventual success to his “complete understanding of President Nixon’s desire to be remembered as a more forceful, incisive, and astute decision maker than his immediate predecessors.”)

On December 19, 1976, during what would have been the final days of Nixon’s second term (Nixon resigned in August 1974), the Air Force launched the first digital satellite, the KH-11. At 3:15 p.m. on January 21, the day after Jimmy Carter’s inauguration, the acting director of the CIA, Hank Knoche, met with Carter and his national security advisor, Zbigniew Brzezinski, in the Map Room of the White House. Knoche spread a handful of black-and-white photos across the table. They were the first live photos taken from space. They showed the president’s inauguration ceremony. Better than any words in a briefing document, the pictures explained Land’s quantum jump. The US could now look at events around the world “from right up close, virtually as they happened, the way an angel would.”

The availability of real-time visual intelligence changed how the US could respond to crises, direct national security operations, and verify arms-control treaties. The much higher sensitivity of CCDs, compared to film, provided images far exceeding anything possible with film-based satellites: they could record the license plate of a truck rather than the outline of a city. By many accounts, the three-hundred-plus digital imaging satellites launched by the NRO have proven to be the most valuable source of intelligence collected by the US over the past 60 years.

Land was not surprised by digital photography. He’d argued for that loonshot in front of the president of the United States. He did so before anyone else was even in the game. In a 1988 ceremony honoring Land, the director of the CIA, William Webster, declared, “The contributions Dr. Land has made to national security are innumerable, and the influence he has had on our present intelligence capabilities is unequaled.”

So what happened with Polaroid? Why didn’t Land jump on digital for his own company, exploit the head start from his national intelligence connections, and use those advantages to beat Sony, Canon, and Nikon to the punch?



Moses Trap: When ideas advance only at the pleasure of a holy leader, who acts for love of loonshots rather than strength of strategy

Toward the end of the wild Polavision launch at the Polaroid shareholders’ meeting, after Land had finished his presentation and thanked the red-hatted dancer, ushers directed audience members to 20 specially designed “film stations,” each equipped with mimes, dancers, and jugglers. Reporters and investors examined the machine, created three-minute instant-print films, and then returned to their seats for the question-and-answer session. Surrounded by dozens of happy performers, Land asked for questions. After some routine comments, an analyst asked, “What about the bottom line?”

Land answered with what became one of the most famous lines of his life: “‘The only thing that matters is the bottom line’? What a presumptuous thing to say. The bottom line’s in heaven.”

The familiar story of the decline of industry Goliaths begins with decades of success, after which the proud old company grows stale. It loses its hunger. A young upstart, a small David, comes along and slays the lumbering giant with an unexpected weapon. It’s a new idea or technology that everyone else overlooked. Some kind of loonshot.

The Goliaths built by Edwin Land, Juan Trippe, and—as we will see in the next chapter—Steve Jobs 1.0 don’t fit this picture. Land, Trippe, and Jobs were all master P-type innovators who never lost their hunger, their taste for bold, risky projects. Their Goliaths disappeared (or nearly disappeared, in the case of Jobs) because all three followed the same pattern into the same trap.

Each of those visionary leaders created a brilliant loonshot nursery; they achieved Bush-Vail rule #1: phase separation. But they remained judge and jury of new ideas. Unlike Bush and Vail, who saw their role as gardeners tending to the touch and balance between loonshots and franchises, encouraging transfer and exchange, those three master P-type innovators saw themselves as Moses, raising their staffs, anointing the chosen loonshot. In other words, they failed on Bush-Vail rule #2: dynamic equilibrium.

Let’s see what we’ve learned about the Moses Trap and how it seduces even the best of the best.

First: The dangerous, virtuous cycle builds momentum

P-type loonshots feed a growing franchise, which in turn feeds more P-type loonshots. New engines helped Trippe fly farther, faster, with more passengers. Which generated more income, which fed the design of bigger, faster engines. Instant black-and-white prints became instant color prints, which created massive popular demand, which funded the SX-70, with faster pictures, encouraging more photos, which fueled even greater expansion. Faster, better, more.

Second: The franchise blinders harden

Only those P-type loonshots that continue to spin the wheel matter. Trippe saw the new ways of doing business, the S-type loonshots from Bob Crandall and other large carriers, or from local discounters like Pacific Southwest Airlines—but he ignored them. Edwin Land not only saw digital, he dove deep on digital. But he ignored it, for his company, in favor of Polavision. Instant film continued to spin the instant-print wheel. Digital did not.

The P-type loonshot of digital photography took down Polaroid. But there was more to it. That new technology came with hidden S-type loonshots. Land, as we just saw, fully understood the technology. He jumped on its potential and defended the value of digital photography to the highest-ranking generals and political leaders before nearly anyone else in industry had heard of it.

Land and his management team dismissed digital because for 30 years they had made money from selling film: their cameras generated much less income than their instant-print cartridges. With digital, there was no film. “There’s no way that can make any money,” they said. Land dismissed the new technology because he didn’t look for the hidden S-type loonshots: all the ways digital could enable new streams of income. In other words, just like Juan Trippe, he leaned on his strong side—P-type loonshots—and didn’t watch his weak side: S-type loonshots.

Third: Moses grows all-powerful and anoints loonshots by decree

Bush and Vail managed the transfer rather than the technology. They cared for the touch and balance between loonshots and franchises. Land, on the other hand, was the “principal cheerleader and spokesperson” for the Polavision project.

One of Land’s admirers, who led various research groups at Polaroid during 20 years at the company, wrote about Land:

He was boss not only in the corporate sense, but also in the research area, and I suppose that became clearer as time went by. He was not only Chairman and CEO, but also held the title of Director of Research … which indicated where his true interest lay. His research decisions would always be governing, never mine.

Not long after the Polavision launch had failed and the product was terminated, Land brought a freelance lighting designer to visit a warehouse full of the instant-movie cameras. The designer asked why Land had brought him to see “this sad landscape.”

Land answered: “I wanted you to see what hubris looks like.”

Land and his Polavision machines

In chapter 1, we used the diagram on the next page to illustrate what Bush and Vail accomplished. They brought aging organizations, with proud franchises rapidly growing stale, to the top-right quadrant. Equally strong research and franchise groups (phase separation) continuously exchanged projects and ideas, with neither side overwhelming the other (dynamic equilibrium).

Land and Trippe succeeded in moving out of the bottom left, but only as far as the bottom right, straight into the Moses Trap.

Land walled off his loonshot nursery from the rest of the company. He banned Bill McCune, the company’s head of engineering, as well as anyone else not directly involved in his research, from his private ninth-floor lab. His loonshot nursery produced Nobel-caliber breakthroughs. The franchise group sold millions of cameras. But it was Land who completely controlled which loonshots would emerge, at what time, and under what conditions.

Shifting to the bottom-right quadrant postpones, but does not prevent, the phase transition and decline. A Moses can point to a loonshot and will it to life. But that magic only lasts so long before the wheel stops turning.

The Austro-Germanic school of fatalism (Spengler, Schumpeter) says that decline is inevitable. Empires will always ossify, a David will always rise to slay Goliath, and so it goes. Is that cycle of creative destruction truly inevitable? What’s an empire to do?

Bush and Vail understood that the doomsday cycle is not inevitable, and that the best chance for sustainable, renewable creativity and growth comes from bringing an organization to the top-right quadrant: separate phases connected by a balanced, dynamic equilibrium.

But how do we get there?



Escaping the Moses Trap

Buzz and Woody rescue a 747, invent the iPhone, and explain system mindset

“Steven P. Jobs is back,” declared the New York Times on October 13, 1988, describing the first product launch for Jobs’s new company, NeXT Inc. Three years earlier Jobs had parted ways in an ugly divorce with the company he had cofounded: Apple Computer.

Three thousand people gathered at Davies Symphony Hall in San Francisco to see Jobs unveil NeXT’s computer.

“I think together we’re going to experience one of those times that occurs only once or twice a decade in computing,” Jobs announces, opening the event. Jobs is wearing a boxy dark suit, skinny tie, and shaggy hair—a hyper-caffeinated fifth Beatle. “This is a revolution,” he says.

The article continues:

Mr. Jobs is known for his dramatic product introductions and he and his company took advantage of intense interest in the computer community about both him and his new machine.
He stood alone on a dark stage with just the computer and a vase of flowers, a huge screen behind him, and took the new machine through its paces. He demonstrated how it could record and send voice messages, play music with the quality of a compact disk and instantly retrieve quotations from the complete works of Shakespeare stored on its optical disk.

Wrapping up the two-hour demonstration of processors and ports and object-oriented programming, Jobs brings his long fingers together into Zen greeting position and pauses.

“One of my heroes has always been Dr. Edwin Land, the founder of Polaroid,” Jobs declares. “He said that he wanted Polaroid to stand at the intersection of art and science. We feel the same thing about NeXT. And of all the things that we’ve experienced together here today I think the one that strikes closest to the soul is the music.”

With that, Jobs introduces Dan Kobialka, a principal violinist of the San Francisco Symphony. Kobialka approaches the NeXT computer, playfully taps it with his bow, and begins a thundering five-minute duet. Machine joins man for Bach’s Violin Concerto in A Minor. When Kobialka finishes and looks up, a third spotlight rises on Jobs, holding a red rose. The crowd erupts in a standing ovation.

Replace the violinist with a red-hatted dancer, and it’s the Polavision launch.


The popular press gushed. A Newsweek cover declared that Jobs “put the ‘wow’ back in computers.” The Chicago Tribune noted that the launch event was “to product demonstrations what Vatican II was to church meetings.” Another headline simply read: “Eight megabytes of sexual satisfaction!” The launch also inspired competitive trash-talking. When asked if Microsoft would create software for the new machine, Bill Gates answered: “Develop for it? I’ll piss on it.” He dismissed the technology (“anybody can write Sony a check”) and the sleek, all-black design (“if you want black, I’ll get you a can of paint”).

Five months after the launch, NeXT announced a partnership with the largest computer retailer in the country, Businessland. David Norman, the president of the retailer, projected $150 million in sales in the first twelve months, an unprecedented figure. At a gathering of top Businessland staff, Jobs pumped them up to “kick the shit out of some people!” One attendee described a scene with Businessland sales managers soon after: “Picture grown, smart adults, standing on their chairs, screaming, they were so excited.”

To build the machines, Jobs insisted on a state-of-the-art, fully automated factory with art gallery walls and lighting, designer bathroom fixtures, and high-end leather furniture. One journalist described the factory as ready for a cover of Architectural Digest.

IBM and Apple sold millions of personal computers a year. Sun sold over 100,000 workstations a year. Jobs had designed his factory for billions of dollars in sales. Over the course of one year, Businessland sold fewer than four hundred NeXT machines.

Like Polavision or the Boeing 747, the NeXT Cube was a beautiful, technologically remarkable, wildly expensive machine—with no customers. The new optical drives had many times the memory of magnetic drives or floppy disks. But competitors offered more convenience, more useful applications, and lower costs. The summary of Polavision—“a product that has much more scientific and aesthetic appeal than commercial significance”—applied equally well to the NeXT computer.

“We saw some new technology and we made a decision to risk our company,” Jobs had announced at the launch event, speaking of the optical drives. Scott McNealy was the CEO at Sun, one of NeXT’s chief competitors. McNealy recognized that $10,000 machines were not impulse buys influenced by glitzy marketing events and sleek designs. The large customers that could afford them wanted practical machines, with swappable parts, using reliable hardware.

Jobs spoke of love of loonshots. McNealy acted on strength of strategy. Sun grew to over $3 billion in sales. Two years after the launch, NeXT’s retail partner, Businessland, went out of business. Its big bet on NeXT was not the only body blow, but it contributed.

By April 1991 two of Jobs’s cofounders at NeXT had resigned. In June, Ross Perot, NeXT’s largest individual investor, resigned from its board of directors, stating, “I shouldn’t have let you guys have all that money. Biggest mistake I made.” Over the next few months, the company borrowed from banks to make payroll. With NeXT on the edge of bankruptcy, Jobs went to its partner and largest investor, the Japanese company Canon, which manufactured both the computer’s optical drive and its printer. Canon wrote a check, and did so again two more times over the next year, before finally drawing a line. By early 1993, nearly all the vice presidents at the company, including all five of Jobs’s original cofounders, had left.

A Forbes article stated, “There are very few miracle workers in the business world, and it is now clear that Steve Jobs is not one of them.”


The facts of Jobs’s forced exit from Apple in 1985, and his path to the mess at NeXT, have been well laid out. In 1975, Steve Wozniak combined a microprocessor, keyboard, and screen into one of the earliest personal computers. Jobs convinced Wozniak to quit his job and start a company. After some initial success with their Apple I and II, however, competitors quickly passed Apple by. In 1980, Atari and Radio Shack (TRS-80) sold roughly seven times as many computers as Apple. By 1983, Commodore dominated the market, with the IBM PC, launched only two years earlier, a close second. Apple’s share had dropped to less than 10 percent and was shrinking rapidly.

Apple’s attempts to win back the spotlight with the Apple III and the Lisa, projects led by Jobs until he lost interest (in one case) or was kicked off (in the other), flopped. The legendary Super Bowl ad in early 1984 for a new Apple product, called the Macintosh, created tremendous publicity and an initial burst of sales. But the computer was painfully slow, had no hard drive, and frequently overheated (Jobs had insisted on no fan, to keep it quiet). In a year in which IBM and Commodore each sold over two million computers, Macintosh sales dwindled to less than ten thousand per month.

Even more dangerous to the company’s future than its string of failures, however, was its string of exits.

A stream of departing employees signals serious dysfunction. As mentioned earlier, after founding what became Bell Labs, Theodore Vail said that no group “can be either ignored or favored at the expense of the others without unbalancing the whole.” Vannevar Bush, during the Second World War, took every chance he could to emphasize his respect for the military, even as he spent nearly all his time with scientists like himself. Loving your loonshot and franchise groups equally, however, requires overcoming natural preferences. Artists tend to favor artists. Soldiers tend to favor soldiers.

Jobs proudly and publicly referred to his team, working on the Macintosh, as artists. He referred to the rest of the company, developing the Apple II franchise, as bozos. Apple II engineers took to wearing buttons with a circle and line running through an image of Bozo the Clown. Wozniak, an engineer with the demeanor of a teddy bear, was widely beloved at the company and in the industry. He resigned, openly complaining about the demoralizing attacks. Departures in the Apple II group became so common that one joke ran, “If your boss calls, be sure to get his name.” The toxicity spread. Key designers on the Macintosh side soon began leaving as well.

It didn’t take long for the Apple Board of Directors and its recently hired CEO, John Sculley, to conclude the dysfunction was not sustainable. Jobs was stripped of operating responsibility in the spring. Jobs discussed with them the idea of staying and creating a small unit to develop new technologies he’d heard about. Touchscreens. Flat-panel displays. A superpowerful graphics computer from a group of quirky engineers in Marin County, just north of San Francisco. In the end, however, Jobs decided to go. He resigned officially, to start NeXT, in September 1985.

The idea of a superpowerful graphics computer, however, stayed with him.

After Jobs left Apple, the remaining team, led by John Sculley, fixed the most glaring Macintosh flaws. They restored the fan, added a hard drive, and increased the memory (which improved speed). Sales turned around, and the product became a hit. Jobs was soon hailed, retroactively, as a master product innovator. He had created the Apple II and the Macintosh. He had brought personal computing, the graphical user interface, and the mouse to the masses. Playboy and Rolling Stone interviewed him. He made the covers of Time, Newsweek, and Fortune. One business magazine, Inc., named him Entrepreneur of the Decade.

As NeXT began to struggle, even as Jobs’s star was rising, several employees at NeXT, as well as executives from Compaq and Dell, approached Jobs with an idea: get out of hardware. NeXT’s software was excellent. Its graphical interface and programming tools were more elegant and powerful than Microsoft’s DOS and early Windows. Jobs could offer PC makers an alternative to Microsoft, which they desperately wanted. In return, the PC makers could offer NeXT something it desperately needed: a future.

The idea of switching from hardware to software was a classic S-type loonshot. Jobs had risen to fame selling hardware. Bigger, faster, more, every year. The stars of the day—IBM, DEC, Compaq, Dell—sold shiny machines stamped with their famous logos. Everyone knew there was no money to be made in software; the money was in hardware.

And dozens of stories hailed Jobs as the master P-type innovator of his generation. Just like Edwin Land and Juan Trippe before him.

Abandon hardware? Not this Moses.

In fact, Jobs had already doubled down. Not long after he left Apple, Jobs got back in touch with the team of engineers in Marin County developing a graphics computer. Why bet on just one bigger, faster machine if you could have two? He bought their business and left them alone to build an even more powerful computer than NeXT.

Jobs had no idea that those engineers held the key to rescuing him from the Moses Trap. And it would have nothing to do with their machine.


Stories of great breakthroughs tend to coalesce around one person, one genius, and often one moment. Those stories are fun to tell and easy to digest. Occasionally they are true. More often, they contain a kernel of truth, but omit a much richer and more interesting picture.

Isaac Newton, for example, is often celebrated for discovering universal gravity, explaining the motion of the planets, and inventing calculus. But well before Newton’s Principia, it was Johannes Kepler who first suggested the idea of a force from the sun driving the motion of the planets, Robert Hooke who first suggested a principle of universal gravity, Christiaan Huygens who showed that circular motion generates a centrifugal force, many who used Huygens’s law to derive the now-familiar form of gravity, Giovanni Borelli who explained the elliptical motion of Jupiter’s moons using gravitational forces, John Wallis and others who created the differential mathematics Newton used, and Gottfried Leibniz who invented calculus in the form we use today. That story is harder to tell than the apple falling on Newton’s head.

Hooke suggested to Newton how gravity could explain planetary motion. Hooke’s suggestions launched Newton on the path to his masterpiece, Principia. Although Hooke supplied some of the initial ideas, he did not have the skills to create a complete system. Newton did. Newton was a great synthesizer, just as Jobs was a great synthesizer.

Isaac Newton had Robert Hooke. Steve Jobs had Jef Raskin. Robert Hooke, in his spare time, designed bat-like flying wings, developed sprung shoes to bounce around London in twelve-foot-high leaps, and investigated the uses of marijuana (“the Patient understands not, nor remembereth any Thing that he seeth … yet is he very merry”). Jef Raskin, in his spare time, designed and built remote-controlled plane kits, taught harpsichord, conducted an opera company, and filed patents on packaging design. Like Hooke, Raskin was a bit of a dabbler.

In 1967, Raskin, then a 24-year-old engineer, submitted a PhD thesis arguing that computers should have graphical interfaces and that their usability was more important than their efficiency. Both were radical ideas at the time, when monolithic mainframe computers dominated. In the early 1970s, Raskin ended up as a visiting researcher at Stanford and Xerox PARC. At PARC, he saw scientists create the first graphics-enabled personal computer, the Alto, with a bitmapped screen, a graphical interface, icons, and a mouse. (PARC failed to commercialize any of those technologies. For more on PARC as an example of how not to escape the Moses Trap, see the summary section at the end of this chapter.)

Raskin joined Apple in 1978, one year after Jobs and Wozniak started the company. Not long afterward, he launched a project to create an easy-to-use, inexpensive, graphics-enabled, small-footprint computer based on the Alto. He called it the Macintosh project. Jobs and others at Apple tried to terminate the project, so Raskin encouraged them to visit Xerox PARC and see for themselves. They did and were converted. Eventually Jobs shoved Raskin aside and took over the project.

Raskin launched the original Macintosh project and suggested some of its core ideas to Jobs. But he did not have the skills to develop those ideas into a complete system. Jobs did. Jobs was a great synthesizer.

Newton and Jobs also treated their precursors in a similar fashion. Newton tried to crush Hooke and bury his contributions (including, allegedly, losing the only known portrait of him). Newton described Hooke, in language that stuck for three centuries, as “a man of strange, unsociable temper.” Jobs described Raskin as “a shithead who sucks.”

In an interview after Jobs’s death, Bill Gates said, “Steve and I will always get more credit than we deserve, because otherwise the story’s too complicated.” He added, “But the difference between him and the next thousand isn’t like, you know, God was born and he came down from the hill with the tablet.” I believe Gates may be mixing Jesus and Moses metaphors. But his point was clear.

The richer stories do more than just correct cartoonish summaries—Newton discovered gravity; Jobs created the Mac—or humanize deities. The richer stories help us understand how the forces of genius and serendipity come together to produce great breakthroughs. The true histories, rather than the revisionist histories, contain the clues from which we learn how to make the forces of genius and serendipity work for us rather than against us.

“And he came down from the hill with the tablet”

The first of those clues, in the case of Steve Jobs, appeared 36 minutes into a 1976 film starring Peter Fonda and Blythe Danner.


Scene: Spaceship cockpit. Décor: 1970s. Lots of computers with blinking lights. Scientist 1, in white lab coat, strides into frame. Monotone computer voiceover: “Hyaline and synovial readouts recorded.”
Scientist 1: Status?
Scientist 2, seated, examining monitor: We’re completing the gross body series. We’ll start molecular studies in one hour.
1: All right. Did you alter their food?
2: Yes, sir, we should have 4 to 6 hours.
1: I want all thermal x-ray and electrochemical studies finished by tonight.
2: That’s not much time.
1: It’ll have to do. Our Mr. Browning is getting much too curious.
2: I have a holograph in my screen. Restructuring.
A translucent white three-dimensional image of a left hand appears, fingers extended upward, and slowly rotates. The three leftmost fingers curl down. Then the wrist curls down, the thumb tucks in, and the hand rotates until the forefinger points directly out of the screen, at you, the viewer.

In 2011, the Library of Congress selected this clip as one of 25 to be added to the National Film Registry. Not the film itself, Futureworld—which somehow brought together, in one movie, sex robots, scenes of medieval jousting, and Yul Brynner dressed like a gay cowboy—but the three-dimensional hand. The rotating hand was the first 3D computer-generated image to appear in film. It was made by a physics major turned computer graphics programmer at the University of Utah named Ed Catmull.

Academic disciplines tend to flower on different campuses at different times, like flash mobs. In the 1970s, a mob of young computer graphics pioneers flashed on the campus of the University of Utah: Jim Clark, who would go on to create Silicon Graphics; Nolan Bushnell, who would start Atari; John Warnock, who would create Adobe; and Alan Kay, who would help create the first graphics-enabled personal computer, the Alto, at Xerox. Joining them was Catmull, a mild-mannered Mormon graduate student who would cofound the greatest animated film company of his time.

At Utah, Catmull created the 3D hand for a class project. He and his thesis advisor, a graphics pioneer named Ivan Sutherland, took it to Disney. Walt Disney had been a boyhood idol for Catmull, who had dreamed of becoming a Disney animator. Catmull approached the animation building like visiting a shrine. Disney, however, passed on his technology. And there would be no animation job offer for Catmull.

Over the next decade, Disney, an empire built on animation, would dismiss a remarkable string of animation technologies invented by the Utah graphics alumni; just as Xerox, an empire built on office productivity, would dismiss a remarkable stream of loonshots that transformed office productivity, invented by its subsidiary Xerox PARC.

Meanwhile, Catmull had finished his PhD and needed a job. He had invented an important mathematical tool for mapping images and textures onto objects: it could project a picture of Mickey Mouse, say, onto the surface of a tennis ball. He had created the first 3D animated image to appear in film. But no one, it seems, was interested. Catmull was 29, married, and had a two-year-old son. He ended up in Boston at a computer software company.

Until a man called about a tuba.


In the 1960s, Alex Schure, a fast-talking, eccentric millionaire, acquired a handful of mansions near the northern coast of Long Island, New York, and turned them into the campus of a private trade school he called the New York Institute of Technology. The school was intended, initially, for people who couldn’t enroll anywhere else. Since many of his students needed remedial help in math, Schure hired a comic-book artist to draw their math lessons. That went well, so he hired animators to convert those cartoons into a film. The film won a gold medal at the New York International TV film festival. As eccentric millionaires with one success are inclined to do, Schure concluded he was an expert, a proven filmmaker. He would write, direct, and produce his next project. He called it Tubby the Tuba.

Schure hired a hundred animators to begin work on Tubby, but he soon realized that drawing each frame by hand was a tedious, painstaking process. A search for better technologies for Tubby led him to Utah, which led him to Catmull, and a phone call. Would Catmull accept a large amount of money to set up an independent research lab, hire a team, buy whatever equipment they needed, no strings attached, just develop great animation technology? Catmull quit his job and joined Schure in Long Island.

One of Catmull’s first hires was Alvy Smith, a big, long-haired Texan with a PhD in computer science. Smith had taught at New York University for five years, then decided to leave academia and move to Berkeley, California, with no plans. He eventually found his way to Xerox PARC, where he worked on color displays and graphics software (the first computer painting tool was also developed at PARC). Less than a year later, however, Xerox terminated the project and let him go. His supervisor explained, “Color is not a part of the office of the future.”

By then, Smith was hooked on the potential of computer graphics. He was desperate to find a way back in. He soon heard about “a madman in Long Island” building a lab. Smith spent the last money he had on a plane ticket, visited NYIT, and was immediately hired. The Utah Mormon and the Texas hippie settled into a garage—the converted two-story, four-car garage of the former Vanderbilt-Whitney estate—and began building the most advanced computer graphics lab in the country. It marked the beginning of a computer graphics dynasty, “a marriage of the house of Xerox and the house of Utah,” Smith wrote.

In the spring of 1977 at a private theater in Manhattan, Alex Schure proudly unveiled his finished film to his team. At the end of the screening, one of the film’s animators quietly said, “Oh God, I’ve wasted two years of my life.” Catmull described the film as a train wreck. The production was amateurish. Catmull and Smith saw that Schure had no instinct for story or character. They recognized that Schure would be no Walt Disney, and that computer-generated film to rival live action, their dream, would never emerge from the Vanderbilt-Whitney garage.

Tubby the Tuba

Fortunately, not long afterward, another mogul looking for better technologies for his movies called. His film had premiered one week after Tubby, and he was already working on a highly anticipated sequel. But drawing light sabers by hand, frame by frame, duel by duel, was taking too damn long.

A giant swooshing sound could be heard along the coast of Long Island as the graphics group abandoned the maker of Tubby for the idol of geeks everywhere, the maker of Star Wars. The team relocated to a nondescript office building in Marin County, California, home to George Lucas’s film production operation. Over the next five years, the Lucasfilm Computer Division, as they were soon known, originated much of the software and hardware that has transformed filmmaking over the past forty years: 3D rendering, digital editing, optical scanning, laser film printing, and, of course, astonishingly realistic computer-generated imagery, CGI. As a teenager, I saw the first scene they made that was used in a feature film: the genesis effect in Star Trek II: The Wrath of Khan (1982). It almost made up for the tears I shed for Spock.

The powerful graphics computer built by the Lucasfilm group to create these effects needed a name. Smith suggested “Pixer,” for pixel + laser. A colleague in the graphics group suggested something more high-tech, like radar, or astronomical, like quasar or pulsar. They converged on the Pixar Image Computer, which was soon called simply the PIC.

In 1985, while Steve Jobs was in the middle of his prolonged unhappy exit from Apple, a colleague, Alan Kay, suggested that he look at the PIC. Kay had been one of the early personal computer pioneers at Xerox PARC before joining Apple. Kay had overlapped with Catmull at Utah and Smith at Xerox. He had heard from them that Lucas, who had recently divorced and needed cash, was looking to sell the group.

Jobs had been dreaming up plans for NeXT, but suddenly there was PIC. It was big, fast, powerful, and enormously sexy (the group worked with both George Lucas and Steven Spielberg). It was also very expensive: a $100,000 machine. In the fall of 1985, in an interview with the Wall Street Journal, Lucas explained the many potential uses for the PIC: imaging in radiology, oil and gas exploration, automobile design, and so on. “The movie business turns out to be a minuscule market compared to others we now find ourselves involved with,” Lucas said.

“It’s like we designed this very sophisticated race car capable of doing all sorts of amazing and complex feats on the race track and then come to find out that a huge segment of the population wants to use it to commute to work.” It was a good story, from a legendary storyteller, looking to unload a business.

Jobs was convinced. First, he tried to interest Apple board members in acquiring PIC for the company. They rejected the idea. Later that summer, as his relationship with Apple disintegrated, Jobs proposed to Catmull and Smith that he acquire their operation and run it. Listening to Jobs, Catmull recalled, it became clear that “his goal was to build the next generation of home computers to compete with Apple.” They had no interest in that fight, and declined. Catmull, who had recently separated from his wife, picked up on the bitterness. He told Smith, “We don’t want to be the first woman after the divorce.”

Toward the end of 1985, after nearly two dozen firms (including Disney) had passed, the president of Lucas’s studio, Doug Norby, decided he would shut down the computer group at the end of the year if they couldn’t find a buyer. Fortunately, in November, Catmull and Smith convinced the Dutch electronics company Philips, which wanted the medical imaging applications, and the auto manufacturer General Motors, which wanted the computer-aided design business, to jointly purchase the graphics group. One week before signing, however, the deal fell through. Ross Perot, the head of the computer division at GM, had spearheaded the drive to acquire the graphics computer. Right around the time of the deal, GM announced the acquisition of Hughes Aircraft for $5.2 billion. Perot was furious and insulted the board, both privately and publicly: “How could GM justify spending billions on a communications satellite operation when it couldn’t even build a reliable car?” In return, the GM board members withdrew their support for his computer deal (the following year they got rid of Perot).

Jobs heard about the buyout falling through. He called Norby and said he was still interested. Jobs needed to convince not only Lucas’s studio chief to sell the group to him at a reasonable price, but also Catmull and Smith to continue the project, working for him. By then, Jobs had started NeXT. He told Catmull and Smith that they could run their own show. They could stay in Marin County, a few hours north of Jobs and NeXT. Catmull would be the CEO. Catmull and Smith, who were out of options at that point, agreed. Norby accepted Jobs’s lowball offer to buy the whole unit.

And so Jobs became the principal investor, and largest shareholder, of the Lucasfilm Computer Division, which was renamed Pixar, Inc.

“Look What Steve Jobs Found at the Movies!” read the BusinessWeek headline.


Jobs bought the group for its big computer. “Image computing will explode during the next few years, just as supercomputing has become a commercial reality,” he said in announcing the purchase. “This whole thing has the same flavor as the PC industry in 1978.” Jobs terminated Pixar’s one ongoing film project, directed the group to open PIC sales offices in seven cities, and added hardware sales staff, growing the company from 40 employees to 140.

After two years, fewer than two hundred machines had been sold. The promise of PIC turned out to be more fantasy than fact. Much of the CGI work could be accomplished using Pixar software on less expensive and more versatile workstations, like those made by Sun or Silicon Graphics. The PIC hardware wasn’t needed. In 1986, to highlight the potential of computer-generated animation, Pixar created their famous Luxo Jr. clip. Disney’s head of animation said that “Luxo the lamp had more emotion and humor in a five-minute short than most two-hour movies.” The clip, now part of the Pixar logo, was made using workstations rather than the PIC.

Like NeXT, like Polavision, like the Boeing 747, the PIC was a beautiful, turbo-powered, wildly expensive machine—with no customers. Once again, love of loonshots had triumphed over strength of strategy, just as it had with Juan Trippe and Edwin Land. Only Jobs, unlike the other two, had doubled down on the Moses Trap.

After two more years and over $50 million invested, Jobs finally pulled the plug on the PIC. In April 1990, Pixar sold its hardware business to a California-based technology company, Vicom Systems. Vicom went out of business soon afterward. Jobs shrank the company back down to 40 or so employees, laying off all the people he had insisted the company hire.

Pixar was crumbling, NeXT was floundering, and Jobs was finally running low on cash. Jobs tried to shut down the animation group at Pixar—only five employees—but Catmull and his team resisted. Jobs tried to sell Pixar, but he couldn’t find a buyer at acceptable terms. Later he described that time as being in “ankle-deep shit.” He stayed home rather than go to work.

Years ago, when I was feeling down about bad news my company and I had just released, an advisor, who had retired from several decades of running a large public company, put his arm around my shoulder and said, “Some days you’re the dog. Some days you’re the fire hydrant.” For Jobs, these were fire-hydrant years.

In the world of biotech, struggling startups often try to buy time by selling tools and services to their much larger, richer cousins, the big pharmas. The goal is to survive long enough for the internal team to create a product—a strikingly original drug candidate.

And that’s exactly what Pixar did in the world of film. It sold tools and services to a much larger, richer cousin—Disney—and survived long enough for the internal team to create a strikingly original product.

With Pixar, however, Jobs gained not only time—and a product he never expected—but an idea. He found a different way to nurture loonshots.


On the evening before Thanksgiving 1995, at the El Capitan Theatre in Los Angeles, the lights dimmed and curtains rose on an animated toy astronaut named Buzz Lightyear and a toy cowboy named Woody. Pixar’s Toy Story, the industry’s first fully computer-generated feature film, became the highest-grossing film in the country for three weeks in a row. The film still has a 100 percent rating on Rotten Tomatoes. Reviews at the time described it as “visually astounding,” “the rebirth of an art form,” “the dawn of a new era.” Conceived and directed by John Lasseter, the same artist behind the Luxo Jr. clip a decade earlier, the film began Catmull and Lasseter’s reign as the greatest animators since Walt Disney and made Jobs, who had previously tried to get rid of the animation unit, suddenly very interested in this new art form. The success came with another side effect: it made Jobs a billionaire.

The film was the culmination of a ten-year relationship with Disney. While still at Lucasfilm, Catmull and Smith had convinced Disney to purchase a handful of PICs to automate animation. Disney saw Pixar’s short clips win standing ovations at graphics conventions and concluded the team could make a feature film. In 1991, after failing to hire Lasseter away from the group, Disney signed a three-picture deal with Pixar. Toy Story was the first of the three.

In the months leading up to the premiere, Jobs worked with bankers to prepare Pixar for an initial public offering of stock. IPO preparation consists, among other things, of drafting a prospectus, a document distributed to investors that describes the company. The cover of Pixar’s prospectus featured a smiling Buzz Lightyear leaping out of a computer monitor. I’ve drafted many prospectuses and participated in many offerings. None had a giant image of a lovable action toy on the cover. None was as perfectly timed.

Pixar’s offering, one week after the film premiere, exploded into an investor frenzy. The stock began trading at 250 percent above the bankers’ initial estimates. By end of day the company was valued at $1.5 billion. Jobs owned 80 percent. His stake was worth $1.2 billion. Not long before, his ability to continue to support any of his ventures had been in serious doubt.

Earlier I mentioned that Newton and Jobs were great synthesizers. Newton brought together planetary astronomy, laws of motion, differential mathematics—ideas developed by others—and synthesized them into a coherent whole the world hadn’t seen. Jobs brought together design, marketing, and technology into a coherent whole, as few others could do. But he was missing a key ingredient. Like Land before him, who brought similar skills together, Jobs had led only as a Moses.

Which is why the most valuable gift that Jobs received—from the perspective of Apple product lovers today—was not the financial reward of his Pixar investment. It was seeing the Bush-Vail rules in action. He learned a different model for leading, for how to nurture loonshots and grow franchises while balancing the tensions between the two.

That missing ingredient became the key to his third act, when he returned to hardware and revived his previous company—along with the entire American consumer electronics industry.


Pixar’s story has a great plot: a small, struggling company, dismissed by nearly all the major players in the industry, is saved by a partnership. The partnership produces an industry-transforming hit. The hit launches a wildly successful public offering. The offering finances a staggering run of new hits: Monsters, Inc., Finding Nemo, The Incredibles, Cars, Ratatouille, Wall-E, Up, Inside Out, and others.

The Pixar story is a marvelous remake. Fifteen years earlier, in 1978, a tiny, profitless company called Genentech, developing an unproven new technology called genetic engineering, which was dismissed by nearly all the incumbent players in the industry, signed a partnership with a large pharma company. Pixar’s technology automated a manual process and allowed animators to create a new kind of film. Genentech’s technology automated a manual process and allowed scientists to create a new kind of drug.

Genentech’s public offering was perfectly timed and beautifully marketed, just like Pixar’s. The wildly oversubscribed offering closed on October 14, 1980. The stock began trading at 200 percent above bankers’ initial estimates. Pixar’s IPO marked the birth of a new art form. Genentech’s IPO marked the birth of a new industry—the biotechnology industry. The successful offering financed a staggering run of hits: Herceptin (for breast cancer), Avastin (for colon, lung, and brain cancers), Rituxan (for blood cancers).

Both Genentech and Pixar—like any good drug-discovery company or film studio—learned how to balance both loonshots and franchises because they had to. There are no other kinds of products in movies and drugs.

In the biotech world, probably no company did it better than Genentech. In 2009, when it was sold to Roche, the company was valued at just over $100 billion. In the film world, probably no studio did it better than Pixar. From 1995 through 2016, Pixar released 17 feature-length films. The films averaged over half a billion dollars in gross sales each. Their median Rotten Tomatoes score is an astounding 96 percent.


Ed Catmull, from Pixar, refers to early-stage ideas for films—loonshots—as “Ugly Babies.” The language is new, but the idea goes back centuries. In 1597, the philosopher Sir Francis Bacon wrote, “As the births of living creatures are at first ill-shapen, so are all Innovations, which are the births of time.” Here is Catmull describing the need to maintain the balance between loonshots and franchises—“the Beast”—in film:

Originality is fragile. And, in its first moments, it’s often far from pretty. This is why I call early mock-ups of our films “Ugly Babies.” They are not beautiful, miniature versions of the adults they will grow up to be. They are truly ugly: awkward and unformed, vulnerable and incomplete. They need nurturing—in the form of time and patience—in order to grow. What this means is that they have a hard time coexisting with the Beast.…
When I talk about the Beast and the Baby, it can seem very black and white—that the Beast is all bad and the Baby is all good. The truth is, reality lies somewhere in between. The Beast is a glutton but also a valuable motivator. The Baby is so pure and unsullied, so full of potential, but it’s also needy and unpredictable and can keep you up at night. The key is for your Beast and your Babies to coexist peacefully, and that requires that you keep various forces in balance.

Keeping the forces in balance is so difficult because loonshots and franchises follow such different paths. Surviving those journeys requires passionate, intensely committed people—with very different skills and values. Artists and soldiers.

The many rejections of the first James Bond movie, Dr. No, for example, are typical for original films, just as the convoluted history of the first statin is typical for breakthrough drugs. Bond was too British for American studios; Fleming’s novels were “not even good enough for television.” Fleming gave up trying to sell the film rights to studios after a decade or so—nine novels into his series—and granted the rights to a pair of dubious producers. One had just bankrupted his production company. The other had limited film experience. The partnership did not start well. The first writers changed the villain of Dr. No to a high-IQ monkey, perched on a henchman’s shoulders. A later writer was so pessimistic about the final script (sans monkey) he insisted his name be removed. Half a dozen stars rejected the lead before the producers settled on a barely known 32-year-old named Sean Connery (prior movies: Tarzan’s Greatest Adventure, Darby O’Gill and the Little People). Connery had driven milk trucks before acting. The distributor doubted they could sell the picture in major US cities because there was “a Limey truck driver playing the lead,” so it opened the film at drive-in theaters in Oklahoma and Texas.

Bond battles an evil monkey

Developing the twenty-sixth Bond movie or the tenth statin is an entirely different experience. Actors compete for roles in Bond #26; studios line up for marketing rights; cash pours in. We now understand that a British spy wearing Brioni can sell tickets, just as we know that the next statin can lower cholesterol. There can be bumps in the road—Baycol (the sixth statin) was withdrawn due to unexpectedly high toxicities; Timothy Dalton (the fourth Bond) happened—but the directions are clear. Bond needs a bad guy, a fast car, a smooth drink, a double-crossing damsel, and a few double entendres. Follow-on drugs need to clear a known list of safety and efficacy hurdles. Franchise projects are easier to understand than loonshots, easier to quantify, and easier to sell up the chain of command in large companies. The challenge for these sequels and follow-ons is not in making it through the long dark tunnel of skepticism and uncertainty. Their challenge is in exceeding what came before.

Bond and the statins, of course, survived those challenges just fine. The Bond films became the most successful film franchise in history. The statins became the most successful drug franchise in history.

Inventors or creatives championing loonshots are often tempted to ridicule franchises—as Steve Jobs 1.0 did with the “bozos” developing Apple II follow-ons. But both sides need each other. Without the certainties of franchises, the high failure rates of loonshots would bankrupt companies and industries. Without fresh loonshots, franchise developers would shrivel and die. If we want more Junos and Slumdog Millionaires, we need the next Avengers and Transformers. If we want better drugs for cancer and Alzheimer’s, we need the next statin.

Pixar, as Catmull and others have described, created an environment well known for what we would call phase separation and dynamic equilibrium, for nurturing loonshots while maintaining a balance between loonshots and franchises. But perhaps the most interesting lesson that was readily visible at Pixar, key to escaping the Moses Trap, was the difference between two ways of leading, which I’ll call system mindset and outcome mindset.

For the clearest explanation of this difference, let’s turn to a board game.


Garry Kasparov reigned as world chess champion for fifteen years, the longest reign in the history of the game. He ranks, on many lists, as the greatest chess player of all time. The difference between system and outcome mindset is a principle I adapted from his book How Life Imitates Chess. Kasparov describes this principle as key to his success.

We can think of analyzing why a move is bad—why pawn-takes-bishop, for example, lost the game—as level 1 strategy, or outcome mindset. After a bad move costs him a game, however, Kasparov analyzes not just why the move was bad, but how he should change the decision process behind the move. In other words, how he decided on that move, in that moment, in the context of that opponent, and what that means for how he should change his decision-making and game-preparation routine in the future. Analyzing the decision process behind a move I’ll call level 2 strategy, or system mindset.

Garry Kasparov

The principle applies broadly. You can analyze why an investment went south. The company’s balance sheet was too weak, for example. That’s outcome mindset. But you will gain much more from analyzing the process by which you arrived at the decision to invest. What’s on your diligence list? How do you go through that list? Did something distract you or cause you to overlook or ignore that item on the list? What should you change about what’s on your list or how you conduct your analyses or how you draw your conclusions—the process behind the decision to invest—to ensure that mistake won’t happen again? That’s system mindset.

You can analyze why you argued with your spouse. It was, let’s say, your comment about your spouse’s driving. But you may improve marital relations even more if you understand the process by which you decided it was a good idea to offer that comment. What state were you in and what were you thinking before you said it? Are there some different things you might do when you are in that state and think those thoughts? How good would it feel to sleep in your own bed?

Let’s apply the same principle to organizations. The weakest teams don’t analyze failures at all. They just keep going. That’s zero strategy.

Teams with an outcome mindset, level 1, analyze why a project or strategy failed. The storyline was too predictable. The product did not stand out enough from competitors’ products. The drug candidate’s data package was too weak. Those teams commit to working harder on storyline or unique product features or a better data package in the future.

Teams with a system mindset, level 2, probe the decision-making process behind a failure. How did we arrive at that decision? Should a different mix of people be involved, or involved in a different way? Should we change how we analyze opportunities before making similar decisions in the future? How do the incentives we have in place affect our decision-making? Should those be changed?

System mindset means carefully examining the quality of decisions, not just the quality of outcomes. A failed outcome, for example, does not necessarily mean the decision or decision process behind it was bad. There are good decisions with bad outcomes. Those are intelligent risks, well taken, that didn’t play out. For example, if a lottery is paying out at 100 to 1, but only three tickets are sold, one of which will win, then yes, purchasing one of those three tickets is a good decision. Even if you end up holding one of the two that did not win. Under those same conditions, you should always make that same decision.
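The lottery arithmetic can be made concrete. A brief sketch (the $1 ticket price is my illustrative assumption; “100 to 1” is taken to mean a payout of 100 times the ticket price):

```python
# Expected profit of buying one ticket in the lottery described above:
# three tickets sold, exactly one wins, payout of 100-to-1 on the price.
ticket_price = 1.0           # assumed illustrative price
payout = 100.0               # 100-to-1 payout on the ticket price
tickets_sold = 3             # only three tickets exist
p_win = 1 / tickets_sold     # one of the three tickets wins

expected_profit = p_win * payout - ticket_price
print(f"Expected profit per ticket: ${expected_profit:.2f}")  # prints $32.33
```

With an expected profit of roughly $32 on a $1 ticket, buying is a good decision every time the conditions hold, even on the two occasions out of three when the ticket loses.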

Evaluating decisions and outcomes separately is equally important in the opposite case: bad decisions may occasionally result in good outcomes. You may have a flawed strategy, but your opponent made an unforced error, so you won anyway. You kicked the ball weakly toward the goalkeeper, but he slipped on some mud, and you scored. Which is why probing wins is as important as probing losses, if not more so. Failing to analyze wins can reinforce a bad process or strategy. You may not be lucky next time. You don’t want to be the person who makes a poor investment, gets lucky because of a bubble, concludes he is an investment genius, bets his fortune, and then loses it all next time around.

At Pixar, Catmull probed both systems and processes, after both wins and stumbles. How should the feedback process, for example, be adjusted so a director is given the most valuable possible input, in a form most likely to be well received? Artists tend to hate feedback from suits or marketers or anyone outside their species, but they welcome it from thoughtful peers. So at Pixar, every director receives private feedback on their project from an advisory group of other directors—and, in turn, serves on similar groups for other directors. And more: How might a director’s incentives distort their decision-making process on budgets, timelines, and quality? How should those distortions be countered? What filmmaking habits are in place, for outdated reasons, that might be unnecessary or counterproductive today?

Like Vannevar Bush, who insisted, as described in chapter 1, that he “made no technical contribution whatever to the war effort,” Catmull saw his job as minding the system rather than managing the projects.

That message got through to Jobs. Jobs had a role in the system—he was a brilliant deal-maker and financier. It was Jobs, for example, who insisted on timing the Pixar IPO with the Toy Story release, and Jobs who negotiated the Pixar deals with Disney. But he was asked to stay out of the early feedback loop on films. The gravity of his presence could crush the delicate candor needed to nurture early-stage, fragile projects. On those occasions when he was invited to help with near-finished films, Jobs would preface his remarks: “I’m not a filmmaker. You can ignore everything I say.” Jobs had learned to mind the system, not manage the project.

Relinquishing control of a creative project and trusting in the inventor or artist or any other loonshot champion is not the same as relinquishing attention to detail. The chief executive at Genentech for fourteen years, Art Levinson, was famous—and feared—for his insistence on scientific precision. A few years ago, at the largest annual biotech meeting, Levinson strode onto the stage to give his keynote presentation, pointed at the conference organizer’s logo behind him—a giant image of a DNA helix—and said, “This is a left-handed helix. It doesn’t exist in nature.” (DNA molecules are right-handed helices.) The crowd roared. We are a major industry, he explained, we should get our DNA right. He sent a message that inspired every scientist in the room. Science matters. Precision matters.

I often heard stories from scientist and manager friends at Genentech about Levinson. How he would call a junior technician in the lab, for example, and grill him on his data. Levinson and the early founders of Genentech understood, like Bush and Vail, and Catmull decades later, the need to tailor the tools to the phase. Ferocious attention to scientific detail—or artistic vision or engineering design—is one tool, tailored to the phase, that motivates excellence among scientists, artists, or any type of creative.

Left-handed vs. right-handed DNA

Genentech achieved the highest levels of respect from the scientific community. It ranked behind only MIT in the number of citations per paper. It did so without sacrificing excellence in franchise. It not only developed four of the most important cancer drugs of the past twenty years but also overcame the nearly impossible manufacturing challenges of growing them from live organisms, in a lab, to deliver to millions of patients around the world. The company translated that scientific and manufacturing expertise into products generating over $10 billion in annual sales. It did so, in large part, by balancing loonshots and franchises extraordinarily well.

In April 2000, three years after Steve Jobs returned to Apple, he invited Art Levinson to join his new board of directors. After Jobs passed away in 2011, Levinson replaced him as chairman of Apple.


The well-told story of Jobs’s return to Apple and its subsequent rise to the most valuable company in the world is a remarkable example of nurturing loonshots, in a race against time, to rescue a franchise in crisis. But it should be, by now, a familiar example.

Vannevar Bush arrived in Washington to rescue a franchise lagging far behind in a technology race, just months before the start of a world war. Bush’s system helped create not only the dominant military force in the world (as we will see in chapter 8) but also the dominant national economy. Theodore Vail returned to AT&T to rescue a franchise in crisis after its telephone patent had expired and competitors were clawing at its lead. Vail’s system not only transformed AT&T into the country’s most successful business but also produced Nobel Prize–winning discoveries that launched the electronics age.

In Apple’s case, the rescue operation began in December 1996, when Apple announced the acquisition of NeXT and Jobs’s appointment as an advisor. It was the company’s last-gasp attempt to save itself. Apple’s operating system and machines were outdated. Three prior operating system overhauls, intended to compete with Microsoft Windows, had failed. Market share had plunged below 4 percent. Massive financial losses and heavy debts had pushed Apple to the edge of bankruptcy. The board tried, and failed, half a dozen times to find a buyer for the company. Elevating Jobs first to interim CEO in mid-1997 and then to full-time CEO in early 1998 was viewed as a Hail Mary play, and one with a particularly small chance of saving the company. The many failed promises of NeXT had reduced Jobs’s credibility as a technology leader in the eyes of industry analysts and observers.

When Jobs finally took over, gone was the dismissive attitude toward soldiers. In March 1998, he hired Tim Cook, known as the “Attila the Hun of inventory,” from Compaq to run operations.

Also gone were the blinders to S-type loonshots. For example, by 2001 music piracy on the internet was rampant. The idea of an online store selling what could easily be downloaded for free seemed absurd. And no one sold music online that customers could keep on their own computers (online music, at the time, was available only through subscription: monthly fees for streaming songs). Plus one more nutty thing: no one sold individual songs, at 99 cents each, rather than whole albums. “You’re crazy,” anyone could have told Jobs. “There’s no way that could make any money.”

The idea didn’t seem so crazy after one million songs were downloaded from the iTunes store in the first six days. There were no new technologies. Just a change in strategy that no one thought could work.

Apple’s P-type loonshots, of course, transformed their industries: the iPod, the iPhone, and the iPad. But what ultimately made them so successful, aside from excellence in design and marketing (most, although not all, of the technologies inside had been invented by others), was an underlying S-type loonshot. It was a strategy that had been rejected by nearly all others in the industry: a closed ecosystem.

Many companies had tried, and failed, to impose a closed ecosystem on customers. IBM built a personal computer with a proprietary operating system called OS/2. Both the computer and the operating system disappeared. Analysts, observers, and industry experts concluded that a closed ecosystem could never work: customers wanted choice. Apple, while Jobs was exiled to NeXT, followed the advice of the analysts and experts. It opened its system, licensing out Macintosh software and architecture. Clones proliferated, just like Windows-based PCs.

When Jobs returned to Apple, he insisted that the board agree to shut down the clones. It cost Apple over $100 million to cancel existing contracts at a time when it was desperately fighting bankruptcy. But that S-type loonshot, closing the ecosystem, drove the phenomenal rise of Apple’s products. The sex appeal of the new products lured customers in; the fence made it difficult to leave. Just like the failure of Friendster prior to Facebook, or the failure of cholesterol-lowering drugs and diets prior to Endo’s statins, or the failure of the Comets before the Boeing 707, IBM’s failure with OS/2 had been a False Fail.

In rescuing Apple, Jobs demonstrated how to escape the Moses Trap. He had learned to nurture both types of loonshots: P-type and S-type. He had separated his phases: the studio of Jony Ive, Apple’s chief product designer, who reported only to Jobs, became “as off-limits as Los Alamos during the Manhattan Project.” He had learned to love both artists and soldiers: it was Tim Cook who was groomed to succeed him as CEO. Jobs tailored the tools to the phase and balanced the tensions between new products and existing franchises in ways that have been described in many books and articles written about Apple. He had learned to be a gardener nurturing loonshots, rather than a Moses commanding them.

“The whole notion of how you build a company is fascinating,” Jobs told his biographer, Walter Isaacson. “I discovered that the best innovation is sometimes the company, the way you organize.”

Jobs arrived at the same conclusion that the military historian James Phinney Baxter did half a century earlier, reflecting on the success of Bush’s system in turning the course of World War II: “If a miracle had been accomplished anywhere along the line,” Baxter wrote, “it was in the field of organization, where conditions had been created under which success was more likely to be achieved in time.”



The example of Xerox PARC has appeared several times above. Before pulling together what we’ve learned from the five stories in the five chapters of part one, it’s worth briefly touching on what happened at PARC. It highlights another side—the inverse—of the Moses Trap.

In 1970, Xerox was a shining symbol of innovation. It was the first company, before Apple, to reach a billion dollars in sales in less than ten years from introducing a single technology—the photocopier. But by 1970, the photocopier franchise had matured, so the leaders of Xerox decided to create a separate unit, in Palo Alto, California, far away from its headquarters in New York and its manufacturing division in Texas, to explore new technologies. They called it the Palo Alto Research Center. PARC attracted the best and the brightest. Engineers at PARC would end up winning many of the most prestigious awards in computer science and founding, or joining and transforming, many of the earliest pioneering computer companies (including Apple).

During the 1970s, engineers at PARC invented the first graphics-enabled personal computer (the Alto), the first visual-based word processor, the first laser printer, the first local networking system (ethernet), the first object-oriented programming language, and a half dozen other firsts. It was an incredible run. But none of those breakthroughs was commercialized by Xerox.

“Some companies are the equivalent of an innovation landfill,” wrote one senior Apple executive, who helped lure some of PARC’s best engineers to Apple. “They are garbage dumps where great ideas go to die. At PARC, the key development people kept leaving because they never saw their products get into the market.”

One of the Alto project leaders at PARC gradually realized it was the company’s “structure, not cost estimates or technological visions,” that was driving apart the loonshot group at PARC and the franchise group in Texas, the one that made typewriters and other office machines. The Texas group “had to sandbag the Alto III, because with it they wouldn’t make their numbers and therefore wouldn’t get their bonuses. It would have been an absolutely impossible burden on them to be successful in making typewriters and also introduce the world’s first personal computer. And they should never have been asked to do it that way. So it was shot down.”

In other words, as mentioned earlier, the weak link is not the supply of ideas. It is the transfer to the field. And underlying that weak link is structure—the design of the system—rather than the people or the culture.

PARC was an example of the opposite of the Moses Trap. Phase separation succeeded brilliantly. But rather than being commanded out of the group by a Moses, loonshots were ignored or actively quashed (“Color is not part of the office of the future”) and never emerged.

PARC was far from the only example. The PARC Trap—loonshots stay parked, and never leave—is common. In 1975, for example, Steve Sasson at Kodak’s research lab developed one of the earliest digital cameras. Kodak buried it for a decade.

Well-intentioned leaders can create a high-performing, quarantined research group, as Xerox’s leaders did, with an environment well suited to creativity and invention. The organization moves out of the bottom-left “stagnation” quadrant below, to the right. But there will always be resistance, in the franchise groups, to new ideas, just as the US military resisted, at first, many of the technologies emerging from Vannevar Bush’s group.

Getting the touch and balance right requires a gentle helping hand to overcome internal barriers—the hand of a gardener rather than the staff of a Moses. If the transfer is either overforced (a thunderous commandment) or underforced (no helping hand), promising ideas and technologies will languish in the labs. The organization will lose the technologies, it will lose the race against time, and it will lose the loyalty of its inventors, who won’t stay around for long.

The stories in part one illustrate the first three Bush-Vail rules:

1. Separate the phases
• Separate your artists and soldiers
• Tailor the tools to the phase
• Watch your blind side: nurture both types of loonshots (product and strategy)
2. Create dynamic equilibrium
• Love your artists and soldiers equally
• Manage the transfer, not the technology: be a gardener, not a Moses
• Appoint, and train, project champions to bridge the divide
3. Spread a system mindset
• Keep asking why the organization made the choices that it did
• Keep asking how the decision-making process can be improved
• Identify teams with outcome mindsets, and help them adopt system mindsets

This book began with examples of wildly innovative groups that suddenly stopped innovating. Those examples included Catmull’s description of a decline at Disney: “The drought that was beginning then [following The Lion King] would last for the next sixteen years.… I felt an urgency to understand the hidden factors that were behind it.”

The Bush-Vail rules above describe how you can prevent the decline and stagnation that follows a phase transition, when good teams begin killing great ideas. But we have not yet gotten to the what and the why: what are those hidden forces and why do they appear? In other words, what causes that transition?

So now let’s turn to the what and the why. Understanding those forces will reveal a fourth category of Bush-Vail rules.

We’ll begin with a legendary detective and an equally famous political philosopher.

Both specialized in hidden forces.




While the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty.
—Sherlock Holmes, in The Sign of the Four



The Importance of Being Emergent

Why should you believe any rule or generalization about teams or companies or any group of people? All people are different. All teams are different.

Yet some rules that describe what happens when many people come together to accomplish tasks seem to work pretty well. The rules of efficient markets, invisible hands, and so on. Those have been established and tested beyond any doubt, right?

Well, sort of. This is Alan Greenspan, the economist who chaired the US Federal Reserve for nineteen years, writing in the Financial Times in 2011:

Today’s competitive markets, whether we seek to recognise it or not, are driven by an international version of Adam Smith’s “invisible hand” that is unredeemably opaque. With notably rare exceptions (2008, for example), the global “invisible hand” has created relatively stable exchange rates, interest rates, prices, and wage rates.

Here’s the problem: analyzing markets except for the “notably rare exceptions” of bubbles and crashes is like analyzing the weather except for storms and droughts. We really do want to understand storms and droughts. We’d like to know if we will need an umbrella.

Not all economists, to be fair, agreed with Greenspan. One extended his logic to the analysis of diplomacy: “With notably rare exceptions, Germany remained largely at peace with its neighbors during the 20th century.”

Greenspan’s view, however, is widespread: efficient markets and invisible hands are fundamental laws that are rarely, if ever, violated. But it’s a fallacy. That fallacy is a common cause of policy disasters (or investment opportunities, if you are a trader).

Neither efficient markets nor invisible hands are fundamental laws. They are both emergent properties.

Emergent properties are collective behaviors: dynamics of the whole that don’t depend on the details of the parts, the macro that rises above the micro. Molecules will flow at high temperatures and freeze at low temperatures regardless of the differences in their details. A water molecule has three atoms and is shaped like a triangle. Ammonia has four atoms and is shaped like a pyramid. Molecules of buckminsterfullerene have sixty atoms and are shaped like soccer balls (they’re called buckyballs for short). Yet they all exhibit the same fluid dynamics at high temperatures and solid dynamics at low temperatures.

One of the things that distinguishes an emergent property like the flow of liquids from a fundamental law—like quantum mechanics or gravity, for example—is that an emergent property can suddenly change. With a small shift in temperature, liquids suddenly change into solids. That sudden shift from one emergent behavior to another is exactly what we mean by a phase transition.

Although all people are different, and all teams are different, what makes emergent properties and the phase transitions between them so interesting is that they are so predictable. We will see why organizations will always transform above a certain size, just like water will always freeze below a certain temperature, traffic will always jam above a critical density of cars, and one burning tree in a forest will always explode into a wildfire in high winds. These are all examples of phase transitions.

Each person and team may be a puzzle. But in the aggregate, as Sherlock Holmes might say, the likelihood that any group will experience a phase transition becomes a mathematical certainty.

The terrific thing about the science of emergence is that once we understand a phase transition, we can begin to manage it. We can design stronger materials, build better highways, create safer forests—and engineer more innovative teams and companies.

So what does all of this tell us about Mr. Greenspan and the widespread belief in the almighty invisible hand? The confidence in the infallibility of the invisible hand is a consequence, to come back to our Newton-Jobs theme from the prior chapter, of false idolatry. For two hundred years we have been bowing down to the wrong seventeenth-century physicist.

To see what I mean, let’s travel back to a summer day in Britain two centuries ago.

On Sunday, July 11, 1790, as he lay dying in his home in Edinburgh, a revered Scottish philosopher, who would become famous for ideas he did not believe and a phrase he did not invent, sent for two friends. He begged them to burn his unpublished notes and manuscripts, except for one. The two had been resisting similar requests for months, hoping he would change his mind. That Sunday they yielded. They burned, in total, sixteen volumes. The scholar, relieved, joined his friends for supper. At half past nine, he rose to return to bed, announcing, “I love your company, gentlemen, but I believe I must leave you to go to another world.” Six days later he died.

Adam Smith, who knew how to make an exit, has grown into a misty icon, a hero to libertarians and free-marketeers who like their economics neat, hold the morals. (The real Adam Smith argued for restraints on markets and prized his work on ethics more than his work on economics.) The manuscript Smith asked his friends to spare had nothing to do with either ethics or economics. It was his History of Astronomy, written shortly after he finished graduate school.

In the History, Smith states that the task of the philosopher is to explain “the connecting principles of nature … the invisible chains which bind together” disjointed observations. Smith analyzes competing theories of planetary motion and ends with a deep bow to Newton, whose theory of gravity he describes as “the greatest discovery that ever was made by man.” (Newton worship was all the rage at the time. For a taste, see the wonderfully titled Sir Isaac Newton’s Philosophy Explain’d for the Use of the Ladies.)

The complete works of Isaac Newton (abridged), 1739

The idea of an underlying force that can explain complex behaviors, as gravity explained the motion of planets and tides, fascinated Smith. His Theory of Moral Sentiments (1759) proposes an underlying force that explains how humans behave. His Wealth of Nations (1776) proposes an underlying force that explains how markets behave.

Smith didn’t intend to call that underlying force in markets an invisible hand. He used the phrase only three times, across all his writings, ambiguously and inconsistently. (The first time, he used it as a sarcastic put-down of superstitious beliefs, a “mildly ironic joke.”) The hand metaphor had been used by many writers, and it was ignored, in the context of financial markets, for 170 years after Smith died, until an economics textbook in the 1950s revived the phrase, imbued it with its current meaning, and attributed that meaning retroactively to Smith.

Whatever its origin, the current meaning has become widely accepted as fact: individuals acting purely out of self-interest can create complex market behaviors. Prices will adjust to demand, resources will be allocated efficiently, and so on. Shopkeepers sell, people shop, and these collective behaviors just … emerge. The same behaviors appear whether butchers sell chicken or beef, whether bakers sell cupcakes or bread. They are dynamics of the whole that don’t depend on details of the parts.

That should sound familiar. Liquids will flow the same way whether they are made of water or ammonia. The collective behavior of markets is an emergent property like the flow of liquids, not a fundamental law like gravity. For two centuries economists have aspired to Newton-style fundamental laws (the “Gravity Model” of international trade, a “Quantum Theory Model of Economics,” “Conservation Laws” of economics—all by Nobel laureates). These economists have been inspired by Newton, who grew into the high priest of one branch of physics; call it Physics Catholicism. That branch of physics preaches a dogmatic belief in fundamental laws and the glamorous search to discover them.

Adam Smith’s work, however, was much closer to the field’s quieter, and less well-known, Protestant offshoot: the study of emergent phenomena. The high priest of that offshoot was Newton’s widely admired contemporary, Robert Boyle.

The battles between the descendants of these two branches continue to this day. One side believes the highest priority should be to search for fundamental laws and writes lines like: “We are living through a landmark period in human history in which the search for the ultimate laws of the universe will finally draw to a close.”

The other side believes that there may be no such fundamental laws. The laws of nature may be like an infinite skyscraper, with different and fascinating rules at each level, rules that are gradually revealed as you descend the stairway down to smaller and smaller distance scales. The current high priest of this branch writes lines like: “The existence of these [emergent] principles is so obvious that it is a cliché not discussed in polite company,” and of their deniers, “The safety that comes from acknowledging only the facts one likes is fundamentally incompatible with science. Sooner or later it must be swept away by the forces of history.” (The author of those latter two quotes, Bob Laughlin, is a Nobel laureate and disciple of Phil Anderson, mentioned earlier. He required all his students, of whom I was one, to read Anderson’s “More Is Different” essay.)

It may seem like the distinction between dueling branches of physics should be of little interest to a nonspecialist, just like the distinction between dueling branches of religion may be of little interest to an atheist. But the distinction can matter, a lot. Perfectly efficient markets—a Newton-style, fundamental belief—don’t have bubbles and crashes. Boyle-style emergent markets, on the other hand, with certain reasonable assumptions, almost always do.



Loonshot

Widely dismissed or ridiculed idea

• A 12-year-old patient with diabetes is treated with ground-up pancreas extract → Insulin
• An 80-pound payload is accelerated to 500 miles per hour through rocket propulsion → Long-range ballistic missiles
• A 32-year-old former milk-truck driver plays a metrosexual British spy who saves the world → James Bond
• A script titled The Adventures of Luke Starkiller is green-lit → Star Wars

Phase transition

Sudden transformation in system behavior, as one or more control parameters cross a critical threshold

• Molecules in liquids: from liquid to solid, as temperature decreases
• Cars on highways: from smooth flow to jammed flow, as car density increases
• Fires in forests: from contained to uncontrolled, as wind speed increases
• Individuals in companies: from a focus on loonshots to a focus on careers, as size of company increases

Which brings us to the importance of being emergent—or, at least, understanding emergence. It can help us capture the benefits of diversity while reducing the risk of collective disasters. We want to benefit from the wisdom of crowds while reducing the risk of market crashes. We want to benefit from a plurality of beliefs while reducing the risk of religious wars.

Over the coming chapters, we will apply Boyle-style science to help us understand the collective behaviors of individuals in companies, much as Smith applied Boyle-style science (not Newton-style) to help us understand the collective behaviors of individuals in markets.

Understanding these behaviors will help us learn what we really want to know: how to capture the benefits that large groups bring to big goals—winning wars, curing diseases, transforming industries—while reducing the risks that those groups will crush valuable and fragile loonshots.

To see how this works, let’s start with a drive along the highway.



Phase Transitions, I: Marriage, Forest Fires, and Terrorists

When gradual shifts cause sudden transformations

You’re driving home from work, on the highway, you’re anxious, maybe speeding a bit, but the traffic is flowing well. Suddenly, the highway turns into a parking lot of stopped cars. There’s no visible cause. There are no on-ramps or accidents in sight. You set aside thoughts of a cold dinner and angry spouse and wonder: where did this traffic jam come from?

Answer: You have just experienced a phase transition—a sudden change between two emergent behaviors. Those two behaviors are smooth flow and jammed flow.

Here’s how to think about it: Imagine the highway is nearly empty. The driver of the car ahead of you, hundreds of yards away, briefly taps then releases his brakes—maybe he’s seen a squirrel. You see the red brake lights briefly flash, but since the car is so far away, there is no need to slow down.

On a crowded highway, the same driver is only a few car lengths ahead. As soon as he touches his brakes, you slam on yours. The brake lights in front of you may have only flashed for two seconds. But once the driver ahead of you releases his brakes, and you release yours, it takes you longer than two seconds to accelerate back to cruising speed. It may take you four seconds. The delay grows for the driver behind you. It may take him eight seconds to get back to the original speed. For the driver behind him, sixteen seconds. The one small tap grows exponentially, until it becomes a traffic jam.
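The doubling described above can be sketched in a few lines. This toy calculation takes the two-second tap and the per-driver doubling from the story; the cutoff at ten cars is an arbitrary illustration of mine:

```python
# Toy model of the crowded-highway story: the first driver taps the
# brakes for 2 seconds, and each following driver needs twice as long
# as the driver ahead to get back up to cruising speed.
tap_seconds = 2.0
recovery = [tap_seconds * 2**n for n in range(10)]  # first ten drivers

for car, seconds in enumerate(recovery, start=1):
    print(f"car {car:2d}: {seconds:6.0f} s to return to cruising speed")

# Car 10 is delayed 1,024 seconds -- about 17 minutes. One small tap
# has grown, exponentially, into a standstill.
```

Exponential growth is the whole trick: on an empty highway the tap dies out; on a crowded one it compounds.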

In the early 1990s, a pair of physicists showed that below a critical density of cars on the highway, traffic flow is stable. Small disruptions—drivers tapping their brakes when squirrels run by—have no effect. Traffic engineers call that a smooth flow state. But above that threshold, traffic flow suddenly becomes unstable. Small disruptions grow exponentially. That’s a jammed flow state. The sudden change between smooth and jammed flow is a phase transition.

As rush hour nears, the density of cars is right on the verge of that critical threshold. A few extra cars on some stretch of the highway—a pileup, for example, behind a slow-moving truck—will push traffic flow over the edge.

The stalls that mysteriously appear with no apparent cause are called phantom jams. They have been confirmed not only by careful observation on highways but by experiment. In 2013, a group of researchers in Japan tracked cars circling inside the Nagoya Dome, an indoor baseball field. They found, as predicted, that when the density of cars exceeded a critical threshold, spontaneous jams suddenly appeared.

Over the past two decades, traffic flow researchers have introduced many variations on the basic model introduced in the 1990s: more vs. less aggressive drivers, faster vs. slower reaction times, a mix of big cars (trucks) and small cars, and so on. In all cases, they find the same phase transition. When the density exceeds a critical threshold, the system will flip from the smooth-flow to the jammed-flow state.

Testing the traffic flow phase transition at the Nagoya Dome in Japan

Phase transitions are everywhere.

To understand what phase transitions tell us about nurturing loonshots more effectively, we need to know just two things about them:

1. At the heart of every phase transition is a tug-of-war between two competing forces.
2. Phase transitions are triggered when small shifts in system properties—for example, density or temperature—cause the balance between those two forces to change.

That’s it.

To illustrate these two ideas, we’ll set aside for a moment traffic flow, which can be complicated, and start with something much simpler: marriage.


It is a truth universally acknowledged that a single man in possession of a good fortune must be in want of a wife.
—Jane Austen, Pride and Prejudice

Miss Austen suggests that two competing forces tug at single men. Those of modest fortunes, in their younger and more aggressive years, may travel widely in the pursuit of fame, wealth, and glory. Let’s call that force “entropy.”

Those of greater fortune, in their older and gentler years, want to settle down with a partner. They seek family, stability, and cable TV. Let’s call that force “binding energy.”

The physicist Richard Feynman once said, “Learn by trying to understand simple things in terms of other ideas—always honestly and directly.” His disciple Lenny Susskind, my former graduate advisor, took that advice seriously. Lenny once explained to me a complex idea in topology, the study of surfaces, by saying, “Imagine an elephant, then take its trunk and shove it up its a—. That’s your surface.”

In that vivid spirit, imagine the bottom half of a very large egg carton. To be specific, imagine a square carton, 20 by 20 egg wells, so there are 400 wells in total. Let’s seal our carton inside a glass protective cover, so we can peer inside and inspect the wells. Rather than imagine eggs, since we will be doing a lot of jiggling, and that could get messy, let’s visualize Jane Austen’s men as small marbles resting in those egg wells. Those marbles have settled down. They are happily married, raising kids.

Now imagine gently shaking the egg carton back and forth. The marbles rock within their small egg wells. But they stay put. Now gradually increase the vigor of your shaking. The marbles reach higher and higher up the sides of their wells. But still they stay put. Finally, when the vigor of your shaking crosses a certain critical threshold, so that a marble reaches the top of its well—all hell breaks loose. Marbles leave their wells and end up in their neighbor’s well; they quickly leave that one and go on to the next; they bounce into and off of other marbles; they travel everywhere, all over the place. Rather than rest quietly in an ordered pattern, the marbles randomly ricochet around the carton, creating a scattered, disordered sea of marbles.
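The egg-carton picture can even be turned into a toy simulation. The sketch below is a loose illustration, not a faithful physics model (real melting involves interactions between the marbles): each shake gives each marble a random kick whose typical size plays the role of temperature, and a marble escapes once a kick exceeds the depth of its well:

```python
import random

WELL_DEPTH = 1.0  # "binding energy": the kick needed to escape an egg well

def fraction_escaped(temperature, n_marbles=1000, n_shakes=200, seed=1):
    """Toy model: each shake delivers a random kick drawn from an
    exponential distribution with mean `temperature`. A marble that
    ever receives a kick larger than the well depth escapes for good."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_marbles):
        if any(rng.expovariate(1.0 / temperature) > WELL_DEPTH
               for _ in range(n_shakes)):
            escaped += 1
    return escaped / n_marbles

for t in (0.05, 0.1, 0.2, 0.4):
    print(t, fraction_escaped(t))  # near zero at low temperature, near one at high
```

Gentle shaking leaves essentially every marble in place; past a fairly narrow band of temperatures, essentially every marble breaks free.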

Welcome to a Manhattan singles bar.

In physics language, we triggered a marble-solid to marble-liquid phase transition.

The marble-solid to marble-liquid phase transition: when shaking energy crosses a threshold, marbles suddenly break free

The system property that we gradually change to trigger a phase transition is called a control parameter. In the traffic flow example, the density of cars on the highway is the control parameter. In this marble-solid to marble-liquid transition, the vigor of our shaking is the control parameter. Shaking vigor can be measured on a scale. We can call that scale “temperature.” The hotter the temperature, the more the entropy term dominates (the urge to roam all over). The colder the temperature, the more the binding energy dominates (the attraction to the bottom of the egg well). When the temperature crosses a threshold—a breakeven point between entropy and binding energy—the system suddenly changes behavior. That’s a phase transition.

In real solids, the binding energy arises from the forces between molecules rather than a fixed landscape of small wells. But otherwise the model gets it right. This microscopic tug-of-war between entropy and binding energy is behind every liquid-to-solid phase transition.

In the next chapter, I will show you that team size plays the same role in organizations that temperature does for liquids and solids. As team size crosses a “magic number,” the balance of incentives shifts from encouraging a focus on loonshots to a focus on careers.

The magic number is not universal, however. Teams transform at different sizes, just like solids melt at different temperatures. The reason is the key idea behind our fourth rule. It’s why we can change the magic number. Systems have more than one control parameter.

In our egg carton example, imagine making the egg wells a hundred times deeper. You need to shake a hundred times harder to knock the marbles out of their wells. The deeper well is how we can think about a solid with a stronger binding energy. For example, the binding energy in iron is nearly a hundred times stronger than the binding energy in water. That’s why iron melts at close to 2,800 degrees Fahrenheit, while ice melts at 32 degrees Fahrenheit. Binding energy is another control parameter.

Identifying those other control parameters is the key to changing when systems will snap: when solids will melt, when traffic will jam, or when teams will begin rejecting loonshots.


Let’s come back to our traffic flow example, and to a useful technique that scientists use to help think about these questions.

The two competing forces for drivers on a highway are speed and safety. A driver accelerates to reach cruising speed, but he brakes to avoid hitting the bumper in front of him. The spacing between cars—average car density—is one control parameter, as we saw above. But it’s not the only one. Your decision whether to slam on your brakes when you see brake lights flash on the car ahead depends not only on the distance to that car but on how fast you’re going. At 30 miles per hour, your stopping distance is roughly six car lengths. At 80 miles per hour, it’s closer to 30 car lengths. In deciding whether to brake, your brain intuitively estimates your stopping distance and compares it with the distance to the bumper in front of you. Both the average speed of cars and the average density of cars contribute to triggering the transition.
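The stopping distances quoted above can be roughly reproduced with the standard reaction-plus-braking estimate. The one-second reaction time, 0.7 g of deceleration, and 4.5-meter car length below are illustrative assumptions, not measured values:

```python
def stopping_distance_car_lengths(speed_mph, reaction_s=1.0, decel_g=0.7,
                                  car_length_m=4.5):
    """Reaction distance plus braking distance, measured in car lengths.
    All three default parameters are illustrative assumptions."""
    v = speed_mph * 0.44704                   # mph -> m/s
    reaction_d = v * reaction_s               # distance covered before braking starts
    braking_d = v * v / (2 * decel_g * 9.81)  # kinematic braking distance
    return (reaction_d + braking_d) / car_length_m

print(round(stopping_distance_car_lengths(30), 1))  # roughly 6 car lengths
print(round(stopping_distance_car_lengths(80), 1))  # roughly 29 car lengths
```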

A phase diagram captures these two control parameters in one graphic. In the diagram below, the average distance between cars is measured on the vertical axis, and the average car speed is measured on the horizontal axis. At low speeds or when there are few cars on the highway, above and to the left of the dashed line (#1 in the diagram below), traffic flows smoothly. When the traffic flow crosses the transition line at either higher car densities (#2) or faster car speeds (#3), small disruptions grow exponentially into a jam. The dashed transition line slopes up and to the right because braking distances get longer as cars travel faster, meaning a larger average separation between cars is required to avoid a jam.

When the average separation between cars falls below the transition line (1 → 2) or the average speed rises above it (1 → 3), smooth flow suddenly jams

Traffic engineers use these ideas to design better highways. Reducing speed limits in heavy traffic may seem counterintuitive, but it reduces the likelihood that a small disruption will cause a jam (it can shift the flow from point #3 in the diagram above to #1). Some highways use ramp metering: when the density and speed start to approach the phase-transition dashed line in the figure above, on-ramp traffic signals can temporarily reduce the flow of new cars onto the highway. That backs the highway flow away from the dashed line. A policy of banning trucks from passing other trucks (called a truck-overtaking ban) reduces the pileups behind trucks. Those pileups temporarily increase the density of cars and can push smooth traffic flow across the dashed line and into a jam. Studies on German autobahns have shown that truck-overtaking bans work. They improve the flow of passenger vehicles, although they slightly decrease the flow of trucks.

The science of phase transitions, as we can see from the traffic flow example, has expanded far beyond an academic curiosity. Identifying the control parameters of a transition helps us manage that transition. Which is exactly what we will do with teams and companies: identify what we can adjust to design organizations that are better at nurturing loonshots.

Some of the most creative ideas for adjusting control parameters, as we will see, come from the connections between systems that appear to be unrelated but turn out to share the same category of phase transition.

The solid-to-liquid transitions described above—both the marbles and real solids—fall within a category called symmetry-breaking transitions. A liquid has symmetry in the sense that, averaged over time, it looks the same from any angle. That’s called rotation symmetry. A solid does not: it “breaks” rotation symmetry. That’s because the view of a molecule looking directly down the x-axis will be very different from the view looking five or ten degrees off that axis. Over a dozen Nobel Prizes have been awarded for discoveries that were ultimately explained by this same principle of a symmetry-breaking transition.

The sudden change in traffic flow falls within a second category of phase transition called a dynamic instability. A change in control parameters transforms one kind of motion (smoothly flowing cars) into a different kind of motion (jammed flow) by making the smooth flow very sensitive to small disruptions (a driver tapping on his brakes). Fluids and gases also experience dynamic instabilities. They flow smoothly, but only at speeds below a critical threshold. Above that threshold, the flow suddenly becomes turbulent.

Imagine, for example, a boat moving slowly down a river. Water parts smoothly at the front of the boat. In the back, as water speeds up to fill in the space left behind, the flow forms a big, messy, turbulent wake. Or picture smoke rising from a cigarette in still air. In the picture below, the smoke from Bogart’s cigarette breaks apart a few inches above the tip. Initially the smoke flows in a smooth column; as the smoke particles gather speed (the hot air coming off the tip of the cigarette accelerates upward), the column will suddenly break apart into a turbulent mess. Both the flow of water around a boat and cigarette smoke rising through the air are examples of the transition to turbulence. Because turbulence is closely connected to drag forces, understanding this kind of transition helps us design better ships, planes, and even golf balls. (Golf balls are dimpled because a little turbulence near a surface layer reduces drag—which is why, if you have a great swing, you can drive a modern dimpled golf ball over four hundred yards. Smooth golf balls would travel roughly half that distance.)

Humphrey Bogart demonstrates the transition from smooth flow to turbulent flow

In 1957, a pair of British mathematicians identified a new category of phase transition. It has helped us understand the spread of forest fires, predict the formation of oil deposits, and, most recently, anticipate, and possibly prevent, terror attacks by analyzing the online behavior of would-be terrorists. Thanks to the magic of emergent behavior—of “more is different”—we now have a terror-hunting tool that can be used without violating online privacy.

It all began with a gas-mask puzzle.


In 1954, a mathematician named John Hammersley presented an unusual paper at a meeting held at the offices of the Royal Statistical Society in London. He described new statistical techniques for evaluating the likelihood that certain patterns were due purely to chance.

Hammersley presented the example of Neolithic stone circles in western Scotland. The circles, built by Druids over three thousand years ago, measured from nine to one hundred feet in diameter. An engineer named Alexander Thom had studied the circles and claimed that each of them had been built to a multiple of a certain unit length. Professional archaeologists scoffed. One audience member described it as a controversy over whether we should think of Neolithic man as a savage or a colleague. But Hammersley’s statistical methods supported Thom’s claims. The Druids were more sophisticated than anyone had previously believed. They were indeed colleagues.

In the audience that day, Simon Broadbent, a 26-year-old engineer who published poetry on the side, was intrigued. Broadbent worked for the British Coal Utilisation Association analyzing coal production. He’d been asked to look into how to design better gas masks for coal miners. Gas masks use materials filled with pores small enough and sticky enough to trap dangerous particles as air passes through. The pores in those materials are of random sizes and randomly distributed. For a gas mask to work, those random pores must create at least one connected channel that allows air to flow all the way through the mask—from one side to the other, uninterrupted, so that a miner can breathe.

During the discussion session after the paper, Broadbent asked Hammersley if his techniques for analyzing randomness in data could predict which materials with random pores would contain at least one connected channel. In other words, if told the type of material, could Hammersley predict whether a coal miner wearing a mask made of that material would suffocate?

Hammersley soon realized that no one had ever posed, or at least answered, a statistical problem of that sort. The two began collaborating. Hammersley, 34, was a kind of odd-jobs statistician at Oxford (the field had not yet developed into a major discipline). His job was to tackle whatever problems the university administration or faculty suggested. One year he was asked to teach a course in the Department of Forestry on how researchers should collect and analyze data on tree growth. It wasn’t long before Hammersley realized that Broadbent’s question applied much more broadly than just to gas-mask design. It applied to forests as well.

Imagine a forest as a random distribution of trees. Now suppose a fire is started on one side of the forest. Assume fire can spread to a neighboring tree only if that tree is close enough for a spark to jump across. Will the fire spread from one edge of the forest to the other?

Broadbent and Hammersley discovered that the answer to both the gas-mask puzzle and the forest-fire puzzle was described by a phase transition. Below a threshold density of pores in a gas mask, no air could get through. Above that critical density, a channel would always appear connecting one side to the other. For forests, below a threshold density of trees, the fire would die out. Above that critical density, the fire would engulf the whole forest.
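The forest-fire version of the discovery is easy to check with a small site-percolation simulation: scatter trees on a square grid with some probability, light the left edge, and ask whether the fire can reach the right edge. (For this kind of square grid, the critical density is known to be roughly 0.59; the grid size and trial counts below are arbitrary sketch choices.)

```python
import random
from collections import deque

def fire_spans(p_tree, size=50, rng=None):
    """Site percolation: each cell holds a tree with probability p_tree.
    Ignite the left column; return True if fire reaches the right column."""
    rng = rng or random.Random()
    grid = [[rng.random() < p_tree for _ in range(size)] for _ in range(size)]
    burning = deque((r, 0) for r in range(size) if grid[r][0])
    seen = set(burning)
    while burning:
        r, c = burning.popleft()
        if c == size - 1:
            return True
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < size and 0 <= nc < size
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                burning.append((nr, nc))
    return False

def spanning_fraction(p_tree, trials=50, seed=0):
    """Fraction of random forests in which the fire crosses the whole grid."""
    rng = random.Random(seed)
    return sum(fire_spans(p_tree, rng=rng) for _ in range(trials)) / trials

for p in (0.3, 0.55, 0.65, 0.8):
    print(p, spanning_fraction(p))  # jumps from ~0 to ~1 near the critical density
```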

But tree density is not the only control parameter. Just as with cars on a highway, the forest-fire transition has more than one. Suppose wind is blowing strongly. Realistically, sparks could spread farther than just one tree. At high wind speeds, therefore, the contagion threshold should take place at a lower density of trees. In other words, the dashed transition line in the phase diagram above should slope downward to the right.

When the density of trees exceeds the contagion threshold (1 → 2) or wind speed crosses the same threshold (1 → 3), small fires will erupt into wildfires

Air finding a channel through pores in a mask or fire finding a path through trees in a forest reminded Hammersley of water percolating through coffee grounds. If the grounds are packed too tightly, water may not find a path through. When they are loose enough: drip, drip. So Hammersley called his techniques and ideas “percolation theory.”

Like symmetry-breaking, percolation theory turns out to connect a staggering range of seemingly unrelated systems.

When do rocks break? Rocks accumulate a random collection of stresses and fractures over time. When those small fractures coalesce into one large fracture that travels from one edge of a rock to the other, the rock breaks in two. That’s the percolation threshold.

When should you drill for oil? Fissures deep in the ground form randomly, like pores in the gas mask. Below the percolation threshold for those fissures, your drill will likely hit a small, disconnected cluster of trapped oil. Bad investment. Above the percolation threshold, your drill is likely to pierce one giant, connected reservoir of oil. Good investment.

When will a small disease outbreak grow into an epidemic? Go back to the model of fire spreading from tree to tree. A high wind speed in the forest, blowing sparks quickly from tree to tree, is like a virus that is highly contagious. A high density of trees is like people living close together (in cities, for example). When the infectability and density cross a critical threshold, small outbreaks erupt into epidemics. When they fall below that threshold, small outbreaks die out quickly. That’s the epidemic phase transition.

So how did real fire researchers react to these new mathematical models?

Not very well. It took a long time for firefighters to, um, warm up to statistical physicists and for the ideas to catch. Here’s a story from a widely used fire-management textbook:

The old-timer among firefighters [often] fails to realize how much can remain unknown to the man who has never had the opportunity to observe similar events personally.
To demonstrate this lack of knowledge due to lack of experience, a middle-aged man tells a little story about himself of an event that happened more than 20 years ago. That day, the young fellow easily reached an observation point near the fire before an older ranger arrived. There before him was a rolling inferno of flames such as he had never before seen. Fascinated and frightened, he told himself that all the power of Man could never stop this fire.
The old ranger wheezed up, rolled himself a cigarette and mumbled to himself, “The head will run into that old burn in half an hour and by sundown the wind will die and we’ll cold trail her.” Then he turned slowly to a messenger and said, “Joe, go phone headquarters and tell them the fire is under control.”

These are not the kind of guys who will go gaga for differential equations.


In the 1990s, a handful of research groups finally succeeded in igniting interest in practical uses of percolation. For decades, forestry agencies had been using fire simulation models that captured the micro: the combustion properties of silvertop ash vs. ponderosa pine, the rate of fire spread as a function of slope steepness, and so on. Those models are helpful for predicting hour-by-hour local behavior of a fire. Would it head left or right, speed up or slow down? But those models can’t help with global patterns, the macro: the frequency, for example, of large fires.

To capture the interest of guys like the young man and old ranger in the story above, a research group composed of geologists, landscape ecologists, and physicists found a middle ground between micro and macro. How they did that is key to what we will do in the next chapter with teams and companies.

Early forest-fire models didn’t interest experienced firefighters because they were too macro, too simplistic. For example, they assumed that trees regrow everywhere in a forest at equal rates. They don’t. Burned areas take decades to recover. The models also assumed that burning trees always ignite their neighbors. But in real forests, many things affect the spread of fires: air moisture, ground moisture, tree species, slope of the land. A fire will spread twice as fast, for example, on a 30 percent upward slope. Small burns nearly always spin out of control when humidity falls below 25 percent. But recording all those micro details, for every forest, would make predicting macro patterns impossible.

The researchers found a middle ground by creating a model that was simple, but not simplistic. Throw away too much detail, and you explain nothing. Retain all the detail—same thing. Do we need to know the difference between the combustion properties of silvertop ash and ponderosa pine to tease out general principles for designing safer forests? No. Will we need to sift through 137 case examples and dozens of theories to tease out general principles for designing more innovative teams and companies? No. We want a model that is just simple enough so that we can extract macro insights with confidence in their micro origins.

In other words, we want a model that describes the forest but is built from the trees.

To understand macro patterns of forest fires, it turns out, you need just two key parameters. I labeled the horizontal axis in the forest fire phase diagram a few pages before “wind speed.” But a better term, which captures what really matters for the spread of fires, might be “virality.” High wind speeds, dry ground, and low humidity increase virality: they make fires more likely to spread. Low wind speeds, moist ground, and high humidity decrease virality: they make fires less likely to spread.

In 1988, a fire in Yellowstone National Park burned 800,000 acres, 36 percent of the total park area—the largest fire in the park’s history. Analyzing park policy is where percolation theory first showed what it can do. Until 1972, Yellowstone policy required rangers to put out every small fire immediately, whether it was caused by humans (a carelessly tossed cigarette) or by nature (a lightning strike). The frequency of small fires in a forest is sometimes called the sparking rate. The park managers’ policy of reducing the sparking rate, although well intentioned, had allowed the forest to grow dense with old trees. They had inadvertently pushed the forest across the dashed line in the diagram above. Their policy had made contagion—a massive outbreak like the 1988 fire—inevitable.

Today most forestry services recognize the “Yellowstone effect” of artificially low sparking rates. They allow small- or medium-sized fires to burn under watch, called a controlled-burn policy. In some cases, if the forest is getting too close to the contagion threshold (the dashed line in the phase diagram), fire managers will initiate small burns, called prescribed burns, to back the forest away from the threshold.

The idea of a controlled burn seems sensible today, almost intuitive. Percolation models helped spread that intuition by grounding the idea in science. But the most interesting success of those models—which led to a completely unexpected spin-off—came from comparing their predictions with historical records on the frequency of fires of different sizes.

The percolation models predict something you would never guess through intuition, or experience, or microsimulations with different tree types and vegetation. It is a unique prediction of the science of emergence and of phase transitions. According to these models, as a forest gets dangerously close to a phase transition, to erupting, the frequency of fires should take a specific form. The frequency should vary in inverse proportion to size: Twenty-acre fires should occur half as often as ten-acre fires. Forty-acre fires should occur one-quarter as often as ten-acre fires. Hundred-acre fires should occur one-tenth as often, and so on. That pattern, called a power law, is a surprising prediction—a mathematical clue that a forest is on the verge of erupting.
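That inverse-proportion pattern is a power law with exponent one: frequency proportional to 1/size. A short sketch makes the ratios explicit (the normalization constant is arbitrary, chosen only for illustration):

```python
def fire_frequency(acres, c=1000.0):
    """Power-law frequency with exponent one: frequency proportional to 1/size.
    c is an arbitrary normalization, for illustration only."""
    return c / acres

for acres in (10, 20, 40, 100):
    print(acres, fire_frequency(acres))  # 20-acre fires occur half as often as 10-acre fires
```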

The pattern has been seen elsewhere. As we will discuss below, the power-law pattern is seen not only in forest-fire models, but in financial markets and terrorist attacks.

It would take another decade, however, for these three seemingly unrelated systems to come together. Outside of the forest-fire world, interest in Hammersley and Broadbent’s percolation theory began to dwindle. Mathematicians explored variations on the puzzle: placing trees on the nodes of a square network (four neighbors per node), a hexagonal network (like the pattern on a soccer ball; three neighbors per node), cubic networks in 19 dimensions (38 neighbors), then trying to figure out at what density of trees a fire would erupt. After dozens of such variations had been analyzed and the big questions mostly answered, the theory gradually drifted into distinguished old age. It played quiet games of checkers with other elderly theories, rarely visited by the young.

The surprising rebirth of percolation theory began in January 1996. Four decades after Simon Broadbent asked John Hammersley an odd question about gas masks, a young Australian named Duncan Watts asked a math professor named Steven Strogatz an odd question about crickets.


In the mid-nineties, Watts, a 24-year-old, six-foot-two-inch graduate of the Australian Defence Force Academy and part-time rock-climbing instructor, was a restless graduate student studying mathematics at Cornell University, growing bored of standard graduate school fare. He had been searching for a suitable thesis advisor when he came across Strogatz, 36, who had recently joined Cornell’s applied math faculty. Strogatz specialized in quirky applications of advanced mathematical techniques (he once wrote a paper on the mathematics of Romeo and Juliet). At the time, Strogatz was working on understanding synchrony in nature: How do millions of heart cells beat in rhythm? How do thousands of fireflies flash at the same time? Watts was intrigued, and signed on as his student. After casting about for a problem to work on together, they settled on an insect puzzle: How do giant fields of crickets synchronize their chirping?

Watts and Strogatz began by collecting crickets and placing them in tiny individual soundproof boxes in a lab, each with built-in microphones and speakers. The idea was to play sounds of other crickets through the speakers. Adjusting who heard whom could test theories of synchronization.

How do crickets harmonize?

As Watts scrambled around the campus orchards collecting crickets, he wondered how connections formed between crickets in the wild, outside of his miniature cricket recording studio. Did crickets listen to their nearest neighbors? Did they listen to all neighbors closer than some distance? Was there a lead cricket conductor?

A Broadway play, Six Degrees of Separation, had recently popularized the idea that everyone was only a few friendships away from everyone else in society. Three college kids started a game called “Six Degrees of Kevin Bacon,” ranking movie actors based on the same idea: one degree if you had been in a movie with Bacon, two degrees if you had been in a movie with someone who had been in a movie with Bacon, and so on. An astonishing 1.9 million actors are linked to Bacon by three degrees or less. What would “Six Degrees of Kevin Cricket” show?

As with Simon Broadbent’s question about gas masks, Watts’s question about crickets opened the door to a much bigger question. All sorts of networks had been explored for the percolation problem, as mentioned earlier. Square networks, hexagonal networks, networks in higher dimensions. But what about a social network? Where friends (crickets or humans or otherwise) could friend others, far removed?

The earlier percolation models made sense for studying the spread of fire, or an infectious disease, between objects that don’t move, like trees in a forest. But crickets, quite famously, jump around. As do humans. You don’t stay at home interacting only with neighbors living immediately to the left, right, front, and back of you. Over the course of a day, you might stop to chat with other parents as you drop off your kids at school. In the office, you might gossip about news or sports with colleagues at desks near yours or at the water cooler. At the grocery store, coming home from work, you might run into some friends and stop to catch up. And occasionally, during the day, or maybe a few times a week, you might reach out to connect with a friend across the country. That friend travels in a very different daily circle than yours.

The pattern of many connections within one tight community, punctuated by occasional ties to distant communities, describes a vast range of systems. Neurons in the brain mostly connect within one cluster, but occasionally their axons extend far outside, to an entirely different cluster. Proteins in a cell mostly interact within one functional group, but occasionally they connect with receptors far removed. Sites on the internet mostly connect within one tight group (celebrity news sites link to other celebrity news sites; biology sites link to other biology sites), but occasionally a site will connect far outside its cluster (TMZ will link to a study on neuroscience). The Kevin Bacon game had shown that there are surprisingly few steps between any two nodes (actors) in these kinds of networks. So Watts and Strogatz called a system with mostly local connections but occasional distant ties a “small-world network.”
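A small-world network of this kind is simple to build: start with a ring lattice in which each node connects to its nearest neighbors, then rewire a small fraction of the edges to random distant nodes, in the spirit of Watts and Strogatz’s construction. The sketch below (network size and rewiring probability are illustrative choices) shows how a few shortcuts collapse the average number of steps between nodes:

```python
import random
from collections import deque

def small_world(n=200, k=4, p_rewire=0.0, seed=2):
    """Ring lattice of n nodes, each linked to its k nearest neighbors;
    each edge is rewired to a random distant node with probability p_rewire."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n
            if rng.random() < p_rewire:          # occasional long-range shortcut
                b = rng.randrange(n)
                while b == a or b in adj[a]:
                    b = rng.randrange(n)
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_path_length(adj):
    """Average shortest-path length over all reachable pairs (BFS from each node)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        frontier = deque([src])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

print(avg_path_length(small_world(p_rewire=0.0)))  # pure ring lattice: long paths
print(avg_path_length(small_world(p_rewire=0.1)))  # a few shortcuts shrink them sharply
```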

Coming back to the crickets, Watts wondered whether percolation had ever been studied on a small-world network. He assumed a question that basic must have been solved already, so he went to the library to look up the answer. No one had asked it. He brought the question to Strogatz, who also knew of no such study, and the two realized they were onto something bigger than insect musicology.

Whether a computer virus spreads widely across the internet or disappears quickly; whether a tiny neuronal misfiring is harmless or erupts into a seizure engulfing your brain; whether an idea spreads explosively throughout a population or fades away quickly—all are governed by similar dynamics: percolation on a small-world network.

Watts and Strogatz’s paper was published in June 1998. As of mid-2018, it has been cited 16,505 times. Of the 1.8 million papers published in scientific journals on the topic of networks, their small-world paper ranks #1. It has been cited more than Einstein’s papers on relativity, Dirac’s paper on the positron, or any paper in history published on “fundamental” physics.

Earlier we heard Sherlock Holmes present the axiom of emergence: while individuals remain puzzles, man in the aggregate “becomes a mathematical certainty.” Holmes was in pursuit of a burglar in that scene from The Sign of the Four, calculating the odds, explaining to Dr. Watson his theory of the criminal class.

A century after Arthur Conan Doyle wrote those words, a physicist from Oxford University began pursuing terrorists. He applied the principle of percolating clusters of fires in forests, poised to erupt, to percolating clusters of small-world networks, poised to erupt.

His strategy for tracking terrorists was based on a mathematical certainty.


You can only eat so many falafels in a week. So in the late 1980s, when he was a graduate student studying physics at Harvard, Neil Johnson would occasionally abandon the falafel truck on the street outside the Jefferson physics lab and eat lunch at the law school cafeteria next door. There he met Elvira Restrepo, a law student from Colombia. They married soon after and lived briefly in Bogotá, until Johnson was appointed a professor at Oxford in 1992 and they settled in England.

Johnson’s work on guerrilla warfare and terrorism was inspired by a strange observation. “We would drop in to Colombia to visit family,” Johnson told me, “and the news [from Colombia’s decades-long civil war] would be something like: Three dead tonight. Eight dead tonight. Two dead tonight.”

Johnson is a fair-haired Brit with an eager laugh and a populist’s light touch with science. Imagine a young Tony Blair (when he was popular) explaining calculus, and you have Neil Johnson. But when he described the newscasts, the laugh disappeared. The reports brought back memories. “I grew up in London where it was: Here’s the news from Northern Ireland. Two dead tonight. None dead tonight. Four dead tonight.”

During his time at Oxford, Johnson had specialized in using the techniques of physics to find hidden patterns in what seemed like random numbers. So when the second Iraq war began, in 2003, and daily death tolls once again made headlines, Johnson began to wonder: was there a pattern to those tragic daily numbers?

Johnson gained access to detailed data on casualties from Colombia’s ongoing civil war. The casualties, he discovered, followed a pattern seen, but never explained, in stock markets.

Textbooks on the behavior of stock markets often begin, like the Bible, with a declaration of faith. In the beginning, there were efficient markets. Markets capture all information into their prices; deviations from efficient prices are random (often called “random walks”). Bad actors can spoil the show (insider trading; manipulation), but with good behavior and proper enforcement, markets will revert to pure, perfectly efficient form. Much of modern finance theory, including estimates of risk and the pricing of stock options, is based on this belief.

Real markets, however, don’t seem to work this way. Price movements that should happen once a year instead happen daily. Stock exchanges in New York, London, Paris, and Tokyo all show the same pattern. The curve that measures the frequency of price movements is supposed to have a minuscule tail, which accounts for rare outliers. In the real world, those tails are not minuscule. When extreme outcomes happen much more frequently than you expect, the probability distribution develops what statisticians call a “fat tail.”

Physicists love fat tails. Random systems with no hidden connections, like coin tosses, have thin tails. They’re kind of boring. Fat tails signal interesting dynamics in a network. That might be a network of trees through which a fire spreads. Or it might be a network of people trading stocks, through which an idea spreads—in other words, a financial market. Physicists, including Johnson, had been studying the fat tails in financial markets for years, trying to make sense of them. Market crashes (Greenspan’s “notably rare exceptions”), hedge fund collapses, and sudden bank defaults are often caused by, or at least associated with, fat tails.

In 2003, Johnson coauthored a textbook on the physics of finance—applying techniques of statistical physics to markets. The book contained an offbeat suggestion. Most researchers tried to solve the fat tail problem by studying the behavior of individual traders. Johnson, instead, looked at clusters. He asked what would happen if we assumed traders acted in cliques: small groups whose members all behave the same way, that is, they make the same buy or sell decisions. (The evidence for groupthink in markets, from tulip mania to the internet bubble, is strong.) The clusters need not be permanent. Just like cliques in high school, members come and go, trading cliques form and dissolve, they merge with other cliques or split into two. Imagine bringing a pot of water to a boil. Just before the boiling point, bubbles of gas appear. Those bubbles grow or collapse, merge with other bubbles or fragment, all while new bubbles are forming. Johnson proposed that trading cliques act like those percolating bubbles.

By building a model that was simple but not simplistic—one that captured the essence of trading without getting lost in the details—Johnson showed that trading cliques seemed to explain the fat-tail distribution in financial markets pretty well. That fat tail took on a characteristic shape: a power law. There were 32 times fewer cliques of 40 people than cliques of 10. There were 32 times fewer cliques of 160 than cliques of 40. And so on. The number of cliques decreased with the size of the clique by an unusual power: 2.5.

The data on casualties from decades of civil war in Colombia showed a near-perfect power law as well. There were 32 times fewer attacks with 40 casualties than attacks with 10 casualties. There were 32 times fewer attacks with 160 casualties than attacks with 40 casualties. The number of recorded attacks decreased with casualty size by the same unusual power: 2.5.
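The 32-to-1 ratios above follow directly from the exponent: for a power law with exponent 2.5, quadrupling the cluster size divides the expected count by 4^2.5 = 32, no matter where on the curve you start. A minimal sketch of that arithmetic:

```python
def relative_count(size, reference_size, alpha=2.5):
    """Expected number of clusters of a given size, relative to the count
    at reference_size, for a power law n(s) ~ s**(-alpha)."""
    return (size / reference_size) ** (-alpha)

# Quadrupling the clique size cuts the count by 4**2.5 = 32,
# regardless of where you start on the curve:
print(1 / relative_count(40, 10))    # ≈ 32: 32x fewer cliques of 40 than of 10
print(1 / relative_count(160, 40))   # ≈ 32: same ratio one step up the curve
```

This scale-free property—the same ratio at every step—is what makes the shared 2.5 exponent across markets and conflicts so striking.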

The similarity between trading data and one set of guerrilla warfare data, from just one country, could be a coincidence. But it would be a strange coincidence. Such a neatly ordered power law is rare. So Johnson and his collaborators began looking at other conflicts. Remarkably, data from wars in Iraq and Afghanistan showed the same pattern: casualties from attacks followed the same power-law form, with the same 2.5 exponent. Over the next three years, they recruited help and data from a broader set of researchers around the world, eventually assembling a database of 54,679 violent events across nine wars (or “insurgent conflicts”): Senegal, Peru, Sierra Leone, Indonesia, Israel, and Northern Ireland, in addition to their original three—Iraq, Colombia, and Afghanistan. The pattern persisted: a power law with an exponent of 2.5.

Just as Johnson and his group were compiling their data, another group of researchers, based in Santa Fe, New Mexico, reported on casualties from global terror attacks, using the largest database of terror attacks available, with records of 28,445 events in more than 5,000 cities across 187 countries. The events spanned four decades, from 1968 to 2006. Whether analyzing deaths alone, or injuries plus deaths, the data showed a surprisingly strong statistical pattern: a power law with an exponent of roughly 2.5.

The common pattern was a clue, but not definitive evidence of percolating clusters: groups that form and dissolve, merge or fragment, in an endless cycle. There are many possible explanations of power laws (although very few that naturally come with an exponent of 2.5). Johnson needed stronger evidence.

In forests, you gather evidence by taking aerial pictures and tracking the progress of fires over time. Fire clusters form and burn out, merge or fragment. Aerial photography, however, can’t help you track cliques of people. And asking terrorists to fill out a questionnaire about their social habits (Please list any terror groups you joined or left recently!) did not seem like a winning research strategy. Johnson and his team were stuck with an intriguing but inconclusive hint.

Until 2014, when ISIS emerged, and Johnson decided to look online.


Tracking interest in terror activity by individual users on social media—sympathetic posts or tweets in reaction to events, for example—has proven to be a poor predictor of future attacks. Johnson’s data, however, pointed to analyzing clusters rather than individuals. So Johnson looked for signs of online clustering.

Johnson and his team quickly discovered that ISIS-interested followers were forming ad hoc groups on VKontakte, the largest Russian social network. They came together by linking to a common virtual page (the equivalent of a Facebook fan page for a brand or a business). Facebook immediately shuts down pro-ISIS pages. The Russian site, however, which had 350 million users at the time, does not. Because the groups stay open to attract new followers, Johnson and his team could track pro-ISIS pages closely. Followers used their common page to post real-time battle updates, teach practical survival skills (how to evade drone attacks), request funds (for fighters who wanted to travel to Syria but couldn’t afford it), and, of course, recruit (“This is a call to all brothers!”).

Sample content from an online terror cell

These online groups—virtual terror cells—are not fixed hubs in the familiar sense of a bus station, for example, where people gather to take a bus. Everyone knows where the bus station is. It was there yesterday and it will be there tomorrow. A bus station doesn’t suddenly materialize, grow, dissolve, merge with another bus station, or split into two smaller bus stations.

The online terror cells, on the other hand, do all of the above. Just like cliques in high school. Or traders in financial markets.

Terror cells in the offline world are extraordinarily difficult to identify and track. Johnson and his team soon realized that the virtual terror cells, by contrast, are easy to track. Simple computer algorithms can detect and record when new users link into a virtual cell, when followers unlink and leave, when cells merge, when cells split up, when cells are hunted by online agents and rapidly dissolve, when those followers reassemble into new cells, and so on.

From the early emergence of ISIS in 2014 through the end of 2015, Johnson’s team collected minute-by-minute data on the online behavior of 108,086 individual followers linked to a total of 196 of these virtual terror cells. It may be the largest publicly available forensic data set ever assembled on terrorist behavior.

The figure below shows one snapshot of the network, where the individual followers are the smaller dots, and the pages they connect through, the virtual terror cells, are the larger dots.

Analyzing the data confirmed Johnson’s guess: the virtual terror cells behaved like percolating clusters. They grew, merged, split apart, or collapsed just like fires in a forest. In forests, the two control parameters are the density of trees and the likelihood that a fire will spread from tree to tree (“virality”), as shown in the forest fire phase diagram earlier in this chapter. Below the critical threshold, small fires die out. Above that threshold, they erupt into a wildfire.

Map of an online terror network

Johnson’s team identified similar control parameters for the virtual terror cells on the Russian website. The number of clusters was like the density of trees. The rate at which one follower linking into a node inspires another follower to link into a node—the “infectability” of the cause—was the equivalent of the rate at which fire hops from tree to tree, the “virality.”
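To make the two control parameters concrete, here is a toy forest-fire percolation sketch (a minimal model of my own for illustration, not the one Johnson's team used): trees are planted on a grid with a given density, and fire hops between neighboring trees with a given virality. Below the critical threshold, the fire fizzles; above it, it sweeps the forest.

```python
import random

def burn(density, virality, size=40, seed=0):
    """Toy forest-fire sketch (illustrative only). Plant trees on a
    size-by-size grid with probability `density`, ignite near the center,
    and let fire hop to each neighboring tree with probability `virality`.
    Returns the fraction of trees burned."""
    rng = random.Random(seed)
    forest = {(x, y) for x in range(size) for y in range(size)
              if rng.random() < density}
    if not forest:
        return 0.0
    # Start from the tree closest to the center of the grid.
    start = min(forest, key=lambda t: abs(t[0] - size // 2) + abs(t[1] - size // 2))
    burned, frontier = {start}, [start]
    while frontier:
        x, y = frontier.pop()
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr in forest and nbr not in burned and rng.random() < virality:
                burned.add(nbr)
                frontier.append(nbr)
    return len(burned) / len(forest)

# Below the critical threshold, fires fizzle; above it, they erupt.
print(burn(density=0.3, virality=0.5))    # sparse, weakly viral: tiny burn
print(burn(density=0.95, virality=0.95))  # dense, highly viral: wildfire
```

In the online analogy, `density` plays the role of the number of clusters, and `virality` plays the role of how contagiously one follower's joining inspires another's.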

Extrapolating from the model of fires in a forest, Johnson and his team could then predict when those control parameters would cross a critical threshold and the network would erupt. In other words, when an attack was imminent.

To test the theory, Johnson’s team analyzed not only data from terror attacks but also, working with national authorities and using the same techniques, data from online groups for civil protests in Latin America. They found that signatures of incipient attacks and mass protests appeared weeks in advance. The figure below shows one measure of how the terror network grows exponentially before erupting, a signature that could predict the timing of an attack to within days.
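One way to picture that early-warning signature: estimate an exponential growth rate from a cluster's follower counts over time, and flag when the rate climbs well above zero. The sketch below is a hypothetical illustration—the counts are invented, and the team's actual estimators are more sophisticated:

```python
import math

def growth_rate(counts):
    """Least-squares slope of log(count) versus time: a simple estimate
    of a cluster's exponential growth rate (toy illustration only)."""
    xs = range(len(counts))
    ys = [math.log(c) for c in counts]
    n = len(counts)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical daily follower counts for two clusters:
quiet = [50, 52, 51, 53, 52, 54]          # drifting sideways
erupting = [50, 65, 85, 110, 145, 190]    # near-exponential growth

print(growth_rate(quiet))      # close to zero: no warning
print(growth_rate(erupting))   # roughly 0.27 per day: incipient eruption
```

A sustained, accelerating rate across many clusters at once is the kind of collective signal that appeared weeks before the eruptions described above.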

Applying these percolation-style models to virtual terror cells has opened the door not only to new methods of detection and prediction but to new strategies.

First, the results suggest that monitoring the behavior of a small number of cells, which may number in the tens or hundreds, is a better use of time and resources than closely tracking millions of individual online behaviors.

Predicting when conflict will erupt by measuring the growth of online terror cells

Second, recently developed mathematical techniques can identify “superspreaders”: clusters with the greatest influence. (Those are not always the ones with the greatest number of links.) The small-world networks found everywhere, described in the Watts-Strogatz paper, have an intriguing feature. They are both unusually robust and unusually fragile. They are robust against random attacks or random failures. Which is why random server outages, for example, have little effect on internet traffic. But they are especially vulnerable to attacks against the nodes with the greatest influence, as has been seen with attacks on the internet. Identifying and shutting down the online superspreaders is one strategy for fighting the spread of terror networks.
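The robust-but-fragile property shows up even in a toy network. In the hypothetical hub-and-spoke sketch below (names and structure are invented for illustration), removing a random member barely dents connectivity, while removing the single most-connected node shatters the network into fragments:

```python
from collections import deque

def largest_component(nodes, edges):
    """Size of the largest connected component, via breadth-first search.
    Edges touching removed nodes are ignored."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        comp, queue = {n}, deque([n])
        while queue:
            for m in adj[queue.popleft()]:
                if m not in comp:
                    comp.add(m)
                    queue.append(m)
        seen |= comp
        best = max(best, len(comp))
    return best

# Hypothetical network: hub node 0 links five cells of five members each.
nodes = set(range(26))
edges = [(0, c) for c in (1, 6, 11, 16, 21)]
for c in (1, 6, 11, 16, 21):
    edges += [(c, c + i) for i in range(1, 5)]  # each cell leader links 4 members

print(largest_component(nodes, edges))        # intact: all 26 nodes connected
print(largest_component(nodes - {3}, edges))  # random member removed: 25 still connected
print(largest_component(nodes - {0}, edges))  # hub removed: fragments of at most 5
```

Real superspreaders are not always the highest-degree nodes, as the text notes, but the asymmetry between random and targeted removal is the same.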

A third strategy is to increase the fragmentation rate—the rate at which clusters dissolve. The goal is to back a terror network away from the contagion transition, just as prescribed burns back a forest away from its contagion transition. (The authors writing on these topics are reluctant to discuss specifics.) Many more such strategies are being developed. And the techniques are being extended beyond ISIS to school shootings, bombings by nationalist groups, and other forms of violent conflict.

In 2007, Johnson left his job at Oxford for a faculty position at the University of Miami. This year, he will leave Miami to join George Washington University in Washington, DC, in part, he said, to work more closely with the national agencies that have expressed interest in applying these online methods.

The techniques offer promise for twenty-first-century policing: protecting populations without violating privacy. “You don’t need to know anything about the individuals,” Johnson said, to detect the patterns in their collective online behavior.

That’s the magic of emergence.



Systems snap—liquids suddenly freeze, traffic suddenly jams, forests or terror networks suddenly erupt—when the tide turns in a microscopic battle. Two forces compete, and the victory flag changes sides.

The marble is drawn to the bottom of its well in the egg carton. But shaking the carton hard enough rocks the marble out of the well. That’s binding energy vs. entropy.

A driver wants to cruise fast. But the driver brakes to avoid hitting the bumper in front of him. That’s speed vs. safety.

Fires propagate from tree to tree, but they can exhaust their fuel, or rain can wet the trees. Violent causes can spread, but the ideas can get stale, or online agents can shut down virtual terror cells. Both are examples of increasing vs. decreasing virality.

Acting on a lone atom or individual, the forces cause only gradual change. But multiplied a thousandfold or millionfold, the change becomes the sudden snap of a system: a phase transition.

Now let’s see how to apply these ideas to the behavior of teams, companies, or any kind of group with a mission.