
Those new pathways aren't invisible anymore. A cognitive scientist at Northwestern named Mark Jung-Beeman, and one at Drexel named John Kounios, have performed a series of experiments very nicely calibrated to measure heightened activity in portions of the brain when those "eureka" moments strike. In the experiments, subjects were asked to solve a series of puzzles and to report when they solved them by using a systematic strategy versus when the solution came to them by way of a sudden insight. By wiring those subjects up like Christmas trees, the researchers discovered two critical things: First, when subjects reported solving a puzzle via a sudden flash of insight, an electroencephalograph, which picks up different frequencies of electrical activity, recorded that their brains burst out with the highest of its frequencies: the one that cycles thirty times each second, or 30Hz. This was expected,4 since this is the frequency band that earlier researchers had associated with similar activities such as recognizing the definition of a word or the outline of a car. What wasn't expected was that the EEG picked up the burst of 30Hz activity three-tenths of a second before a correct "insightful" answer-and did nothing before a wrong one. Second, and even better, simultaneous with the burst of electricity, another machine, the newer-than-new fMRI (functional Magnetic Resonance Imaging) machine showed blood rushing to several sections of the brain's right, "emotional" hemisphere, with the heaviest flow to the same spot-the anterior Superior Temporal Gyrus, or aSTG.

But the discovery that resonates most strongly with James Watt's flash of insight about separating the condensing chamber from the piston is this: Most "normal" brain activity serves to inhibit the blood flow to the aSTG. The more active the brain, the more inhibitory, probably for evolutionary reasons: early Homo sapiens who spent an inordinate amount of time daydreaming about new ways to start fire were, by definition, spending less time alert to danger, which would have given an overactive aSTG a distinctly negative reproductive value. The brain is evolutionarily hard-wired to do its best daydreaming only when it senses that it is safe to do so-when, in short, it is relaxed. In Kounios's words, "The relaxation phase is crucial.5 That's why so many insights happen during warm showers." Or during Sunday afternoon walks on Glasgow Green, when the idea of a separate condenser seems to have excited the aSTG in the skull of James Watt. Eureka indeed.

IN 1930, JOSEPH ROSSMAN, who had served for decades as an examiner in the U.S. Patent Office, polled more than seven hundred patentees, producing a remarkable picture of the mind of the inventor. Some of the results were predictable;6 the three biggest motivators were "love of inventing," "desire to improve," and "financial gain," the ranking for each of which was statistically identical, and each at least twice as important as those appearing down the list, such as "desire to achieve," "prestige," or "altruism" (and certainly not the old saw, "laziness," which was named roughly one-thirtieth as frequently as "financial gain"). A century after Rocket, the world of technology had changed immensely: electric power, automobiles, telephones. But the motivations of individual inventors were indistinguishable from those inaugurated by the Industrial Revolution.

Less predictably, Rossman's results demonstrated that the motivation to invent is not typically limited to one invention or industry. Though the most famous inventors are associated in the popular imagination with a single invention-Watt and the separate condenser, Stephenson and Rocket-Watt was just as proud of the portable copying machine he invented in 1780 as he was of his steam engine; Stephenson was, in some circles, just as famous for the safety lamp he invented to prevent explosions in coal mines as for his locomotive. Inventors, in Rossman's words, are "recidivists."

In the same vein, Rossman's survey revealed that the greatest obstacle perceived by his patentee universe was not lack of knowledge, legal difficulties, lack of time, or even prejudice against the innovation under consideration. Overwhelmingly, the largest obstacle faced by early twentieth-century inventors (and, almost certainly, their ancestors in the eighteenth century) was "lack of capital."7 Inventors need investors.

Investors don't always need inventors. Rational investment decisions, as the English economist John Maynard Keynes demonstrated just a few years after Rossman completed his survey, are made by calculating the marginal efficiency of the investment, that is, how much more profit one can expect from putting money into one investment rather than another. When the internal rate of return-Keynes's term-for a given investment is higher than the rate that could be earned somewhere else, it is a smart one; when it is lower, it isn't.
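The calculation Keynes formalized is easy to sketch. Below is a minimal, illustrative Python version with invented cash flows (nothing in it comes from Keynes's own notation): the internal rate of return is the discount rate at which a project's net present value falls to zero, found here by simple bisection.

```python
# A minimal sketch, with invented cash flows: the internal rate of
# return is the discount rate at which a project's net present value
# equals zero, found here by bisection.

def npv(rate, cashflows):
    """Net present value; cashflows[0] happens now, cashflows[t] in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV crosses zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid   # still profitable at this discount rate; look higher
        else:
            hi = mid
    return (lo + hi) / 2

# An invention costing 1,000 that pays back 300 a year for five years:
project = [-1000, 300, 300, 300, 300, 300]
print(f"internal rate of return: {irr(project):.1%}")  # about 15.2%
```

In Keynes's terms, this project is a smart investment only if no alternative use of the same 1,000 can be expected to return more than about 15 percent.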

Unfortunately, while any given invention can have a positive IRR, the decision to spend one's life inventing is overwhelmingly negative. Inventors typically forgo more than one-third of their lifetime earnings. Thus, the characteristic stubbornness of inventors throughout history turns out to be fundamentally irrational. Their optimism is by any measure far greater than that found in the general population, with the result that their decision making is, to be charitable, flawed, whether as a result of the classic confirmation bias-the tendency to overvalue data that confirm one's original ideas-or the "sunk-cost" bias, which is another name for throwing good money after bad. Even after reliable colleagues urge them to quit, a third of inventors will continue to invest money, and more than half will continue to invest their time.8

A favorite explanation for the seeming contradiction is the work of the Czech émigré economist Joseph Schumpeter,* who drew a famous, though not perfectly clear, boundary between invention and innovation, with the former an economically irrelevant version of the latter. The heroes of Schumpeter's economic analysis were, in consequence, entrepreneurs, who "may9 be inventors just as they may be capitalists ... they are inventors not by nature of their function, but by coincidence...." To Schumpeter, invention preceded innovation-he characterized the process as embracing three stages: invention, commercialization, and imitation-but was otherwise insignificant. However, his concession that (a) the chances of successful commercialization were improved dramatically when the inventor was involved throughout the process, and (b) the imitation stage looks a lot like invention all over again, since all inventions are to some extent imitative, makes his dichotomy look a little like a chicken-and-egg paradox.

Another study, this one conducted in 1962,10 compared the results of psychometric tests given to inventors and noninventors (the former defined by behaviors such as application for or receipt of a patent) in similar professions, such as engineers, chemists, architects, psychologists, and science teachers. Some of the results were about what one might expect: inventors are significantly more thing-oriented than people-oriented, more detail-oriented than holistic. They are also likely to come from poorer families than noninventors in the same professions. No surprise there; the eighteenth-century Swiss mathematician Daniel Bernoulli,11 who coined the term "human capital," explained why innovation has always been a more attractive occupation to have-nots than to haves: not only do small successes seem larger, but have-nots have considerably less to lose.

More interesting, the 1962 study also revealed that independent inventors scored far lower on general intelligence tests than did research scientists, architects, or even graduate students. There's less to this than meets the eye: The intelligence test that was given to the subjects subtracted wrong answers from right answers, and though the inventors consistently got as many answers correct as did the research scientists, they answered far more questions, thereby incurring a ton of deductions. While the study's sample was too small to prove that inventors fear wrong answers less than noninventors, it suggested just that. In the words of the study's authors, "The more inventive an independent inventor is,12 the more disposed he will be-and this indeed to a marked degree-to try anything that might work."

WATT'S FLASH OF INSIGHT, like those of Newcomen and Savery before him (and thousands more after), was the result of complicated neural activity, operating on a fund of tacit knowledge, in response to both a love of inventing and a love of financial gain. But what gave him the ability to recognize and test that insight was a trained aptitude for mathematics.

The history of mechanical invention in Britain began in a distinctively British manner: with a first generation of craftsmen whose knowledge of machinery was exclusively practical and who were seldom if ever trained in the theory or science behind the levers, escapements, gears, and wheels that they manipulated. These men, however, were followed (not merely paralleled) by another generation of instrument makers, millwrights, and so on, who were.

Beginning in 1704, for example, John Harris, the Vicar of Icklesham in Sussex, published, via subscription, the first volume of the Lexicon Technicum, or an Universal Dictionary of Arts and Sciences, the prototype for Enlightenment dictionaries and encyclopedias. Unlike many of the encyclopedias that followed, Harris's work had a decidedly pragmatic bent, containing the most thorough, and most widely read, account of the air pump or Thomas Savery's steam engine. In 1713, a former surveyor and engineer named Henry Beighton, the "first scientific man to study the Newcomen engine,"13 replaced his friend John Tipper as the editor of a journal of calendars, recipes, and medicinal advice called The Ladies Diary. His decision to differentiate it from its competitors in a fairly crowded market by including mathematical games and recreations, riddles, and geographical puzzles made it an eighteenth-century version of Scientific American and, soon enough, Britain's first and most important mathematical journal. More important, it inaugurated an even more significant expansion of what might be called Britain's mathematically literate population.

Teaching more Britons the intricacies of mathematics would be a giant long-term asset to building an inventive society. Even though uneducated craftsmen had been producing remarkable efficiencies using only rule of thumb-when the great Swiss mathematician Leonhard Euler applied14 his own considerable talents to calculating the best possible orientation and size for the sails on a Dutch windmill (a brutally complicated bit of engineering, what with the sail pivoting in one plane while rotating in another), he found that carpenters and millwrights had gotten to the same point by trial and error-it took them decades, sometimes centuries, to do so. Giving them the gift of mathematics to do the same work was functionally equivalent to choosing to travel by stagecoach rather than oxcart; you got to the same place, but you got there a lot faster.

Adding experimental rigor to mathematical sophistication accelerated things still more, from stagecoach to-perhaps-Rocket. The two in combination, well documented in the work of James Watt, were hugely powerful. But the archetype of mathematical invention in the eighteenth century was not Watt, but John Smeaton, by consensus the most brilliant engineer of his era-a bit like being the most talented painter in sixteenth-century Florence.

SMEATON, UNLIKE MOST OF his generation's innovators, came from a secure middle-class family: his father was an attorney in Leeds, who invited his then sixteen-year-old son into the family firm in 1740. Luckily for the history of engineering, young John found the law less interesting than tinkering, and by 1748 he had moved to London and set up shop as a maker of scientific instruments; five years later, when James Watt arrived in the city seeking to be trained in exactly the same trade, Smeaton was a Fellow of the Royal Society, and had already built his first water mill.

In 1756, he was hired to rebuild the Eddystone Lighthouse, which had burned down the year before; the specification for the sixty-foot-tall structure* required that it be constructed on the Eddystone rocks off the Devonshire coast between high and low tide, and so demanded the invention of a cement-hydraulic lime-that would set even if submerged in water.

The Eddystone Lighthouse was completed in October 1759. That same year, evidently lacking enough occupation to keep himself interested, Smeaton published a paper entitled An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills. The Enquiry, which was rewarded with the Royal Society's oldest and most prestigious prize-the Copley Medal for "outstanding research in any branch of science"-documented Smeaton's nearly seven years' worth of research into the efficiency of different types of waterwheels, a subject that despite several millennia of practical experience with the technology was still largely a matter of anecdote or, worse, bad theory. In 1704, for example, a French scientist named Antoine Parent had calculated the theoretical benefits of a wheel operated by water flowing past its blades at the lowest point-an "undershot" wheel-against one in which the water fell into buckets just offset from the top of the "overshot" wheel-and got it wrong. Smeaton was a skilled mathematician, but the engineer in him knew that experimental comparison was the only way to answer the question, and, by the way, to demonstrate the best way to generate what was then producing nearly 70 percent of Britain's measured power. His method remains one of the most meticulous experiments of the entire eighteenth century.

Fig. 4: One of the best-designed experiments of the eighteenth century, Smeaton's waterwheel was able to measure the work produced by water flowing over, under, and past a mill. Science Museum / Science & Society Picture Library

He constructed a model waterwheel twenty inches in diameter, running in a model "river" fed by a cistern four feet above the base of the wheel. He then ran a rope through a pulley fifteen feet above the model, with one end attached to the waterwheel's axle and the other to a weight. He was so extraordinarily careful to avoid error that he set the wheel in motion with a counterweight timed so that it would rotate at precisely the same velocity as the flow of water, thus avoiding splashing as well as the confounding element of friction. With this model, Smeaton was able to measure the height to which a constant weight could be lifted by an overshot, an undershot, and even a "breastshot" wheel; and he measured more than just height. His published table of results15 recorded thirteen categories of data, including cistern height, "virtual head" (the distance water fell into buckets in an overshot wheel), weight of water, and maximum load. The resulting experiment16 not only disproved Parent's argument for the undershot wheel, but also showed that the overshot wheel was up to two times more "efficient" (though he never used the term in its modern sense).
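The quantity Smeaton's table let him compare can be sketched in a few lines of Python. The numbers below are invented stand-ins, not his recorded data, but the ratio they compute-work delivered to the load divided by work available in the falling water-is what his comparison amounted to.

```python
# A sketch of the comparison behind Smeaton's table, using invented
# numbers rather than his recorded data: a wheel's "effect" is the work
# done raising the load divided by the work the falling water could
# ideally deliver, both in foot-pounds.

def wheel_effect(load_lb, lift_ft, water_lb, fall_ft):
    work_out = load_lb * lift_ft    # constant weight raised through a height
    work_in = water_lb * fall_ft    # weight of water falling through its head
    return work_out / work_in

# Hypothetical runs echoing his conclusion that overshot wheels roughly
# doubled the performance of undershot ones:
print(f"overshot:  {wheel_effect(40, 15, 1000, 1.0):.0%}")   # ~60%
print(f"undershot: {wheel_effect(20, 15, 1000, 1.0):.0%}")   # ~30%
```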

Smeaton's gifts for engineering weren't, of course, applied only to improving waterpower; an abbreviated list of his achievements includes the Calder navigational canal, the Perth Bridge, the Forth & Clyde canal (near the Carron ironworks of John Roebuck, for whom he worked as consultant, building boring mills and furnaces), and Aberdeen harbor. He made dramatic improvements in the original Newcomen design for the steam engine, and enough of a contribution to the Watt separate condenser engine that Boulton & Watt offered him the royalties on one of their installed engines as a thank-you.

But Smeaton's greatest contribution was methodological and, in a critical sense, social. His example showed a generation of other engineers17 how to approach a problem by systematically varying parameters through experimentation and so improve a technique, or a mechanism, even if they didn't fully grasp the underlying theory. He also explicitly linked the scientific perspective of Isaac Newton with his own world of engineering: "In comparing different experiments,18 as some fall short, and others exceed the maxim ... we may, according to the laws of reasoning by induction* conclude the maxim true." More significant than his writings, however, were his readers. Smeaton was, as much as Watt, a hero to the worker bees of the Industrial Revolution. When the world's first engineering society met in London in 1771, Smeaton was sitting at the head of the table, and after his death in 1792, the Society of Civil Engineers-Smeaton's own term, by which he meant not the modern designer of public works, but engineering that was not military-renamed itself the Smeatonian Society. The widespread imitation of Smeaton's systematic technique and professional standards dramatically increased the population of Britons who were competent to evaluate one another's innovations.

The result, in Britain, was not so much a dramatic increase in the number of inventive insights; the example of Watt, and others, provided that. What Smeaton bequeathed to his nation was a process by which those inventions could be experimentally tested, and a large number of engineers who were competent to do so. Their ability to identify the best inventions, and reject the worst, might even have made creative innovation subject to the same forces that cause species to adapt over time: evolution by natural selection.

THE APPLICATION OF THE Darwinian model to everything from dating strategies to cultural history has become promiscuous enough that it is sometimes dismissed as "secondary" or "pop" Darwinism, to distinguish it from the genuine article. However, this doesn't mean that the Darwinian model is useful only in biology; consider, for example, whether the same sort of circumstances-random variation with selection pressure-preserved the "fittest" of inventions as well.

As far back as the 1960s,19 the term "blind variation and selective retention" was being used to describe creative innovation without foresight, and advocates for the BVSR model remain so entranced by the potential for mapping creative behavior onto a Darwinian framework that they refer to innovations as "ideational mutations."20 A more modest, and jargon-free, application of Darwinism simply argues that technological progress is proportional to population in the same way as evolutionary change: Unless a population is large enough, the evolutionary changes that occur are not progressive but random, the phenomenon known as genetic drift.

It is, needless to say, pretty difficult to identify "progressive change" over time for cognitive abilities like those exhibited by inventors. A brave attempt has been made by James Flynn, the intelligence researcher from New Zealand who first documented, in 1984, what is now known as the Flynn Effect: the phenomenon that the current generation in dozens of different countries scores higher on general intelligence tests than previous generations. Not a little higher: a lot. The bottom 10 percent of today's families are somehow scoring at the same level as the top 10 percent did fifty years ago. The phenomenon is datable to the Industrial Revolution, which exposed an ever larger population to stimulation of their abilities to reason abstractly and concretely simultaneously. The "self-perpetuating feedback loops"21 (Flynn's term) resulted in the exercise, and therefore the growth, of potential abilities that mattered a lot more to mechanics and artisans than to farmers, or to hunter-gatherers.

Most investigations of the relationship between evolutionary theory and industrialization seem likely to be little more than an entertaining academic parlor game for centuries to come.* One area, however, recalls that the most important inspiration for the original theory of evolution by natural selection was Charles Darwin's observation of evolution by unnatural selection: the deliberate breeding of animals to reinforce desirable traits, most vividly in Darwin's recognition of the work of pigeon fanciers. Reversing the process, a number of economists have wondered whether it is possible to "breed" inventors: to create circumstances in which more inventive activity occurs (and, by inference, to discover whether those same circumstances obtained in eighteenth-century Britain).

This was one of the many areas that attracted the attention of the Austrian American economist Fritz Machlup, who, forty years ago, approached the question in a slightly different way: Is it possible to expand the inventive work force? Can labor be diverted into the business of invention? Can an educational or training system emphasize invention?

Machlup-who first popularized the idea of a "knowledge economy"-spent decades collecting data on innovation in everything from advertising to typewriter manufacture-by one estimate, on nearly a third of the entire U.S. economy-and concluded by suggesting the counterintuitive possibility that higher rates of compensation actually lower the quality of labor. Machlup argued that the person who prefers to do something other than inventing and does so only under the seductive lure of more money is likely to be less gifted than one who doesn't. This is the "vocation" argument dressed up in econometric equations: at some point, the recruits are going to reduce22 the average quality of the inventing "army," just as doctors who practice only for money may be less successful than those with a true calling. The trick is figuring out what point. There will indeed always be amateur inventors (in the original meaning: those who invent out of love), and they may well spend as much time mastering their inventive skills as any professional. But they will also always be fairly thin on the ground compared to the population as a whole.

He also examined the behavior of inventors as an element of what economists call input-output analysis. Input-output analysis creates snapshots of entire economies by showing how the output of one economic activity is the input of another: farmers selling wheat to bakers who sell bread to blacksmiths who sell plows back to the farmers. Harvesting, baking, and forging, respectively, are "production functions": the lines on a graph that represent one person adding value and selling it to another. In Machlup's exercise,23 the supply of inventors (or inventive labor) was the key input; the production function was the transformation of such labor into a commercially useful invention; and the supply of inventions was the output. As always, the equation included a simplifying assumption, and in this case, it was a doozy: that one man's labor is worth roughly the same as another's. This particular assumption gets distorted pretty quickly even in traditional input-output analysis, but it leaps right through the looking glass when applied to the business of inventing, a fact of which Machlup was keenly aware: "a statement that five hours of Mr. Doakes' time24 [is] equivalent to one hour of Mr. Edison's or two hours of Mr. Bessemer's would sound preposterous."
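A toy version of such a snapshot makes the structure concrete. The sketch below uses the standard Leontief textbook formulation rather than Machlup's own equations, and the sectors and coefficients are invented for the farmer-baker-blacksmith loop above.

```python
# A toy input-output snapshot in the standard Leontief textbook form
# (an illustration of the general method, not Machlup's own equations).
# A[i][j] is the amount of good i consumed to produce one unit of good j;
# total output x must cover inter-industry use plus final demand:
# x = A @ x + d, hence x = (I - A)^-1 @ d.
import numpy as np

sectors = ["wheat", "bread", "plows"]     # the farmer-baker-blacksmith loop
A = np.array([
    [0.0, 0.6, 0.0],   # wheat into bread
    [0.1, 0.0, 0.2],   # bread feeding farmers and smiths
    [0.2, 0.0, 0.0],   # plows wearing out per unit of wheat
])
d = np.array([10.0, 50.0, 5.0])           # final household demand

x = np.linalg.solve(np.eye(3) - A, d)
for name, total in zip(sectors, x):
    print(f"{name}: {total:.1f} units of gross output")
```

Machlup's version treated inventive labor as the input and inventions as the output; the "doozy" of an assumption is visible here too, since a matrix like this has no way to say that one hour of Edison is worth five of Doakes.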

The invention business is no more immune to the principle of diminishing returns than any other, and in any economic system, diminishing returns result anytime a crucial input stays fixed when another one increases. In the case of inventiveness, anytime the store of scientific knowledge (and the number of problems capable of tempting an inventor) isn't increasing, more and more time and resources are required to produce a useful invention. Only the once-in-human-history event known as the Industrial Revolution, because it began the era of continuous invention, offered a temporary reprieve.

But input-output a.n.a.lysis misses the most important factor of all, which might be called the genius of the system. You only get the one hour of Mr. Edison's time during which he figures out how to make a practical incandescent lightbulb if you also get Mr. Doakes plugging away for five hours at refining the carbonized bamboo filament inside it.

The reason why is actually at the heart of the thing. Mr. Doakes didn't spend those hours because of a simple economic calculus; given the time needed to actually pursue all the variables in all possible frames, no such calculus could justify them. Watt's notebooks record months of trying every material under the sun to seal the first boiler of the separate condenser engine. The return on improving even the inventions of antiquity, given the hours, days, and months required and the other demands on the inventor's time, must have been poor indeed. Mr. Doakes spent the time playing the game because he dreamed of winning it.

Which brings us back to James Watt's famous walk on Glasgow Green. The quotation from Watt that opened this chapter appears (not always in the same words) not only in virtually every biography of Watt, but in just about every history of mechanical invention itself, including that of A. P. Usher. Only rarely noted, however, is the fact that Watt's reminiscence first appeared nearly forty years after his death-and was the recollection of two men who heard it from Watt nearly fifty years after the famous walk.

Robert and John Hart were two Glasgow engineers and merchants who regarded James Watt with the sort of awe usually reserved for pop musicians, film stars, or star athletes. Or even more: They regarded him "as the greatest and most useful man25 who ever lived." So when the elderly James Watt entered their shop, sometime in 1813, he was welcomed with adoration, and a barrage of questions about the great events of his life, rather like Michael Jordan beset by fans asking for a play-by-play account of the 1989 NBA playoffs. Watt's recollection of the Sunday stroll down Glasgow Green in 1765 comes entirely from this episode. In short, it is not the sort of memory that a skeptic would regard as completely reliable in all its details.

This is to suggest not that Watt's account is inaccurate, but rather that it says something far more significant about the nature of invention. The research emerging from the fields of information theory and neuroscience on the nature of creative insights offers intriguing ideas about what is happening in an individual inventor's brain at the moment of inspiration. Theories about the aSTG, or cerebellum, or anything else, do not, however, explain much about the notable differences between the nature of invention in the eighteenth century and in the eighth; the structure of the individual brain has not, so far as is known, changed in millennia.

On the other hand, the number of brains producing inventive insights has increased. A lot.

This is why the hero worship of the brothers Hart is more enlightening about the explosion of inventive activity that started in eighteenth-century Britain than their reminiscences. For virtually all of human history, statues had been built to honor kings, soldiers, and religious figures; the Harts lived in the first era that built them to honor builders and inventors. James Watt was an inventor inspired in every way possible, right down to the neurons in his Scottish skull; but he was also, and just as significantly, the inspiration for thousands of other inventors, during his lifetime and beyond. The inscription on the statue of Watt that stood in Westminster Abbey from 1825 until it was moved in 1960 reminded visitors that it was made "Not to perpetuate a name which must endure while the peaceful arts flourish, but to shew that mankind have learned to know those who best deserve their gratitude" (emphasis added).

A nation's heroes reveal its ideals, and the Watt memorial carries an impressive weight of symbolism. However, it must be said that the statue, sculpted by Sir Francis Chantrey in marble, might bear that weight more appropriately if it had been made out of the trademark material of the Industrial Revolution: iron.

* A member of the embarrassingly overachieving clan of Hungarian Jews that included Michael's brother, Karl, the economist and author of The Great Transformation, a history of the modern market state (one built on "an almost miraculous improvement in the tools of production," i.e., the Industrial Revolution), and Michael's son, John, the 1986 winner of the Nobel Prize in Chemistry.

* A remarkable number of discoveries about the function of brain structures have been preceded by an improbable bit of head trauma.

* In addition to his status as a cheerleader for entrepreneurism-his most famous phrase is undoubtedly the one about the "perennial gale of creative destruction"-Schumpeter was also legendarily hostile to the importance of institutions, particularly laws, and especially patent law.

* When the original finally wore out, in 1879, a replica, using many of the same granite stones (and Smeaton's innovative marble dowels and dovetails), was rebuilt in Plymouth in honor of Smeaton.

* This is an explicit reference to Newton's fourth rule of reasoning from Book III of the Principia Mathematica; Smeaton was himself something of an astronomer, and entered the Newtonian world through its calculations of celestial motions.

* The evidence that invention has a Darwinian character is easier to find using the tools of demography than of microbiology, but while the landscape of evolution is large populations, its raw materials are the tiny bits of protein-coding DNA called genes. Bruce Lahn, a geneticist at the University of Chicago, has documented an intriguing discontinuity in the evolutionary history of two genes-microcephalin and abnormal spindle-like microcephaly associated (ASPM)-which, when damaged, are complicit in some fairly onerous genetic disorders affecting intelligence (including big reductions in the size of cerebellums). That history shows substantial changes that can be dated to roughly 37,000 years ago and 5,800 years ago, which are approximately the dates of language acquisition and the discovery of agriculture. This is the first hard evidence that arguably the two biggest social changes in human history are associated with changes in brain size, and presumably function. No such changes dating from the birth of industrialization have been found, or even suspected.

CHAPTER SEVEN.

MASTER OF THEM ALL.

concerning differences among Europe's monastic brotherhoods; the unlikely contribution of the brewing of beer to the forging of iron; the geometry of crystals; and an old furnace made new

THE RUINS OF RIEVAULX Abbey sit on a plain surrounded by gently rolling moors not far from the River Rye in the northeast of England. In the years between its founding in 1132 and dissolution in 1536, the abbey's monks farmed more than five thousand acres of productive cropland. In addition to the main building, now a popular tourist stop, Rievaulx included more than seventy outbuildings, spread across a hundred square miles of Yorkshire. Some were granges: satellite farms. Others were cottage factories. And half a dozen were iron foundries, which is why Rievaulx Abbey stands squarely astride one of the half-dozen or so parallel roads that led to the steam revolution, and eventually to Rocket. The reason is the monastic brotherhood that founded Rievaulx Abbey, and not at all coincidentally, dominated ironworking (and a dozen other economic activities) in Europe and Britain throughout the medieval period: the Cistercians.

During the eleventh century, the richest and most imitated monastery in Europe was the Benedictine community at Cluny, in Burgundy. The Cluniacs, like all monastic orders, subscribed, in theory anyway, to the sixth-century Rule of Saint Benedict, an extremely detailed manual for a simple life of prayer and penance. In fact, they were "simple" in much the same way that the Vanderbilt mansions in Newport were "cottages." A Cluniac monk was far likelier to be clothed in silk vestments than in the "woolen cowl for winter1 and a thin or worn one for summer" prescribed by the Rule. More important for the monastery of Molesme, near Dijon, was the Cluniac tendency to pick and choose pieces of Benedictine doctrine, and to apply more enthusiasm and discipline to their prayers than to their labors.

This was a significant departure from the teachings of the order's de facto founder, Saint Benedict, who defined labor as one of the highest virtues-and he wasn't referring to the kind of work involved in constructing a clever logical argument. So widespread was his influence that virtually all the technological progress of the medieval period was fueled by monasticism. The monks of St. Victor's Abbey2 in Paris even included mechanica-the skills of an artisan-in their curriculum. In the twelfth century a German Benedictine and metalworker named Theophilus Presbyter wrote an encyclopedia of machinery entitled De diversis artibus; Roger Bacon, the grandfather of experimental science, was a Franciscan, a member of the order founded by Saint Francis of Assisi in part to restore the primacy of humility and hard work.

The Benedictines of Cluny, however, prospered not because of their hard work but because of direct subsidies from secular powers including the kings of France and England and numerous lesser aristocrats. And so, in 1098, the monks of Molesme cleared out, determined to live a purer life by following Benedict's call for ora et labora: prayer and (especially) work. The order, now established at "the desert of Citeaux" (the reference is obscure), whence they took the name "Cistercians," was devoted to the virtues of hard work; and not just hard, but organized. The distinction was the work3 of one of the order's first leaders, an English monk named Stephen Harding, a remarkably skillful executive who instinctively seemed to have understood how to balance the virtues of flexibility and innovation with those of centralization; by instituting twice-yearly convocations of dozens (later hundreds) of the abbots who ran local Cistercian monasteries all over Europe, he was able to promote regular sharing of what a twenty-first-century management consultant would call "best practices"-in everything from the cultivation of grapes to the cutting of stone-while retaining direct supervision of both process and doctrine. The result was amazing organizational sophistication, a flexible yet disciplined structure that spread from the Elbe to the Atlantic.

Thanks to the administrative genius of Harding and his successors, the order eventually comprised more than eight hundred monasteries, all over Europe, that contained the era's most productive farms, factories-and ironmongeries. Iron was an even larger contributor to the Cistercians' reputation than their expertise in agriculture or machinery, and was a direct consequence of Harding's decision that because some forms of labor were barred to his monastic brothers, others, particularly metalworking, needed to be actively encouraged. The Cistercian monastery in Velehrad (today a part of the Czech Republic) may have been using waterwheels for ironmaking as early as 1269. By 1330, the Cistercians operated at least a dozen smelters and forges in the Burgundy region, of which the largest (and best preserved today) is the one at Fontenay Abbey: more than 150 feet long by nearly thirty feet wide, still bearing the archaeological detritus of substantial iron production.

Which brings us back to Rievaulx. In 1997, a team of archaeologists4 and geophysicists from the University of Bradford, led by Rob Vernon and Gerry McDonnell, came to north Yorkshire in order to investigate twelfth-century ironmaking techniques. This turns out to be a lot more than traditional pick-and-shovel archaeology; since the earth itself has a fair amount of residual iron (and therefore electrical conductivity), calculating the amount and quality of iron produced at any ruin requires extremely sophisticated high-tech instruments, with intimidating names like magnetometers and fluxgate gradiometers, to separate useful information from the random magnetism5 found at a suspected ironworking site. What Vernon and McDonnell found caused quite a stir in the world of technological history: the furnaces in use during the thirteenth century at one of Rievaulx Abbey's iron smelters were producing iron at a level of technical sophistication equivalent to that of eighteenth-century Britain. Evidence from the residual magnetism in the slag piles and pits in the nearby village of Laskill revealed that the smelter in use was not only a relatively sophisticated furnace but was, by the standards of the day, huge: built of stone, at least fifteen feet in diameter, and able to produce consistently high-quality iron in enormous quantities. In the line that figured in almost every news account of the expedition, Henry VIII's decision to close the monasteries in 1536 (a consequence of his divorce from Catherine of Aragon and his break with Rome) "delayed the Industrial Revolution by two centuries."

Even if the two-century delay was a journalistic exaggeration-the Cistercians in France, after all, were never suppressed, and the order was famously adept at diffusing techniques throughout all its European abbeys-it deserves attention as a serious thesis about the birth of the world's first sustained era of technological innovation. The value of that thesis, of course, depends on the indispensability of iron to the Industrial Revolution, which at first glance seems self-evident.

First glances, however, are a poor substitute for considered thought. Though the discovery at Laskill is a powerful reminder of the sophistication of medieval technology, the Cistercians' proven ability to produce substantial quantities of high-quality iron not only fails to prove that they were about to ignite an Industrial Revolution when they were suppressed in the early sixteenth century, it actually demonstrates the opposite-and for two reasons. First, the iron of Laskill and Fontenay was evidence not of industrialization, but of industriousness. The Cistercians owed their factories' efficiency to their disciplined and cheap workforce rather than any technological innovation; there's nothing like a monastic brotherhood that labors twelve hours a day for bread and water to keep costs down. The sixteenth-century monks were still using thirteenth-century technology, and they neither embraced, nor contributed to, the Scientific Revolution of Galileo and Descartes.

The second reason is even more telling: For centuries, the Cistercian monasteries (and other ironmakers; the Cistercians were leaders of medieval iron manufacturing, but they scarcely monopolized it) had been able to supply all the high-quality iron that anyone could use, but all that iron still failed to ignite a technological revolution. Until something happened to increase demand for iron, smelters and forges, like the waterpower that drove them, sounded a lot like one hand clapping. It would sound like nothing else for-what else?-two hundred years.

THE SEVERN RIVER, THE longest in Britain, runs for more than two hundred miles from its source in the Cambrian Mountains of Wales to its mouth at Bristol Channel. The town of Ironbridge in Shropshire is about midway between mouth and source, just south of the intersection with the Tern River. Today the place is home not only to its eponymous bridge-the world's first to be made of iron-but to the Ironbridge Institute, one of the United Kingdom's premier institutions for the study of what is known nationally as "heritage management." The Iron Gorge, where the bridge is located, is one of the United Nations World Heritage Sites, along with the Great Wall of China, Versailles, and the Grand Canyon.* The reason is found in the nearby town of Coalbrookdale.

Men were smelting iron in Coalbrookdale6 by the middle of the sixteenth century, and probably long before. The oldest surviving furnace at the site is treated as a pretty valuable piece of world heritage itself. Housed inside a modern glass pyramid at the Museum of Iron, the "old" furnace, as it is known, is a rough rectangular structure, maybe twenty feet on a side, that looks for all the world like a hypertrophied wood-burning pizza oven. It is built of red bricks still covered with soot that no amount of restoration can remove. When it was excavated, in 1954, the pile of slag hiding it weighed more than fourteen thousand tons, tangible testimony to the century of smelting performed in its hearth beginning in 1709, when it changed the nature of ironmaking forever.

Ironmaking involves a lot more than just digging up a quantity of iron ore and baking it until it's hot enough to melt-though, to be sure, that's a big part of it. Finding the ore is no great challenge; more than 5 percent of the earth's crust is iron, and about one-quarter of the planet's core is a nickel-iron alloy, but it rarely appears in an obligingly pure form. Most of the ores that can be found in nature are oxides: iron plus oxygen, in sixteen different varieties, most commonly hematite and magnetite, with bonus elements like sulfur and phosphorus in varying amounts. To make a material useful for weapons, structures, and so on, those oxides and other impurities must be separated from the iron by smelting, in which the iron ore is heated by a fuel that creates a reducing atmosphere-one that removes the oxides from the ore. The usual fuel is one that contains carbon, because when two carbon atoms are heated in the bottom of the furnace in the presence of oxygen-O2-they become two molecules of carbon monoxide. The CO in turn reacts with iron oxide as it rises, liberating the oxygen as carbon dioxide-CO2-and metallic iron.

Fe2O3 + 3CO → 2Fe + 3CO2

There are a lot of other chemical reactions involved, but that's the big one, since the first step in turning iron ore into a bar of iron is getting the oxygen out; the second one is putting carbon in. And that is a bit more complicated, because the molecular structure of iron-the crystalline shapes into which it forms-changes with heat. At room temperature, and up to about 900°C, iron organizes itself into cubes, with an iron atom at each corner and another in the center of the cube. When it gets hotter than 900°C, the structure changes into a cube with the same eight iron atoms at the corners and another in the center of each face of the cube; at about 1300°C, it changes back to a body-centered crystal. If the transformation takes place in the presence of carbon, carbon atoms take their place in the crystal lattices, increasing the metal's hardness and durability by several orders of magnitude and reducing its malleability likewise. The percentage of carbon that bonds to iron atoms is the key: If more than 4.5 percent of the final mixture is carbon, the final product is hard, but brittle: good, while molten, for casting, but hard to shape, and far stronger in compression than when twisted or bent. With the carbon percentage less than 0.5 percent, the iron is eminently workable, and becomes the stuff that we call wrought iron. And when the percentage hits the sweet spot of between about 0.5 percent and 1.85 percent, you get steel.
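Collapsed into a few lines of Python, the passage's carbon thresholds look like this. This is a toy classifier built only from the cutoffs given above; real metallurgy depends on far more than the carbon fraction alone.

```python
# The passage's carbon thresholds as a toy classifier; the cutoffs are
# the ones the text gives, and everything else about real metallurgy
# (temperature history, other elements) is deliberately ignored.

def classify_iron(carbon_pct):
    if carbon_pct > 4.5:
        return "cast iron: hard but brittle, good for casting"
    if carbon_pct < 0.5:
        return "wrought iron: eminently workable"
    if carbon_pct <= 1.85:
        return "steel: the sweet spot"
    return "between steel and cast iron (not named in the text)"

for pct in (0.1, 1.0, 3.0, 5.0):
    print(f"{pct}% carbon -> {classify_iron(pct)}")
```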

This is slightly more complicated than making soup. The different alloys of carbon and iron, each with different properties, form at different times depending upon the phase transitions between face-centered and body-centered crystalline structures. The timing of those transitions, in turn, varies with temperature, pressure, the presence of other elements, and half a dozen other variables, none of them obvious. Of course, humans were making iron for thousands of years before anyone had anything useful to say about atoms, much less molecular bonds. They were making bronze, from copper and tin, even earlier. During the cultural stage that archaeologists call "the" Iron Age-the definite article is deceptive; Iron Age civilizations appeared in West Africa and Anatolia sometime around 1200 BCE, five hundred years later in northern Europe*-early civilizations weaned themselves from the equally sturdy bronze (probably because of a shortage of easily mined tin) by using trial and error to combine the ore with heat and another substance, such as limestone (in the jargon of the trade, a flux), which melted out impurities such as silicon and sulfur. The earliest iron furnaces were shafts generally about six to eight feet high and about a foot in diameter, in which the burning fuel could get the temperature up to about 1200°C, which was enough for wrought, but not cast, iron.

By the sixteenth century, iron making began to progress beyond folk wisdom and trial and error. The first manuals of metallurgy started to appear in the mid-1500s, most especially De re metallica by the German Georg Bauer, writing under the name Agricola, who described the use of the first European blast furnaces, known in German as Stuckofen, which had hearths roughly five feet long and three feet high, with a foot-deep crucible in the center:

A certain quantity of iron ore7 is given to the master [who] throws charcoal into the crucible, and sprinkles over it an iron shovelful of crushed iron ore mixed with unslaked lime. Then he repeatedly throws on charcoal and sprinkles it with ore, and continues until he has slowly built up a heap; it melts when the charcoal has been kindled and the fire violently stimulated by the blast of the bellows....

Agricola's work was so advanced that it remained at the cutting edge of mining and smelting for a century and a half. The furnaces he described replaced the earlier forges, known as bloomeries, which produced a spongelike combination of iron and slag-a "bloom"-from which the slag could be hammered out, leaving a fairly low-carbon iron that could be shaped and worked by smiths, hence wrought iron.

Though relatively malleable, early wrought iron wasn't terribly durable; okay for making a door, but not nearly strong enough for a cannon. The Stuckofen, or its narrower successor, the blast furnace, however, was built to introduce the iron ore and flux at the top of the shaft and to force air at the bottom. The result, once gravity dropped the fuel through the superheated air, which was "blasted" into the chamber and rose via convection, was a furnace that could actually get hot enough to transform the iron. At about 1500°C, the metal undergoes the transition from face-centered to body-centered crystal and back again, absorbing more carbon, making it very hard indeed. This kind of iron-pig iron, supposedly named because the relatively narrow channels emerging from the much wider smelter resembled piglets suckling-is so brittle, however, that it is only useful after being poured into forms usually made of loam, or clay.

Those forms could be in the shape of the final iron object, and quite a few useful items could be made from the cast iron so produced. They could also, and even more usefully, be converted into wrought iron by blowing air over heated charcoal and pig iron, which, counterintuitively, simultaneously consumed the carbon in both fuel and iron, "decarbonizing" it to the less-than-1-percent level that permitted shaping as wrought iron (this is known as the "indirect method" for producing wrought iron). The Cistercians had been doing so from about 1300, but they were, in global terms, latecomers; Chinese iron foundries had been using these techniques two thousand years earlier.

Controlling the process that melted, and therefore hardened, iron was an art form, like cooking on a woodstove without a thermostat. It's worth remembering that while recognizably human cultures had been using fire for everything from illumination to space heating to cooking for hundreds of thousands of years, only potters and metalworkers needed to regulate its heat with much precision, and they developed a large empirical body of knowledge about fire millennia before anyone could figure out why a fire burns red at one temperature and white at another. The clues for extracting iron from highly variable ores were partly texture-a taffylike bloom, at the right temperature, might be precisely what the ironmonger wanted-partly color: When the gases in a furnace turned violet, what would be left behind was a pretty pure bit of iron.*

Purity was important: Ironmakers sought just the right mix of iron and carbon, and knew that any contamination by other elements would spoil the iron. Though they were ignorant of the chemical reactions involved, they soon learned that mineral fuels such as pitcoal, or its predecessor, peat, worked poorly, because they introduced impurities, and so, for thousands of years, the fuel of choice was charcoal. The blast furnace at Rievaulx Abbey used charcoal. So did the one at Fontenay Abbey. And, for at least a century, so did the "old" furnace at Coalbrookdale. Unfortunately, that old furnace, like all its contemporaries, needed a lot of charcoal: The production of 10,000 tons of iron demanded nearly 100,000 acres of forest, which meant that a single seventeenth-century blast furnace could denude more than four thousand acres each year.
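The arithmetic implied by those figures is worth making explicit. The sketch below simply works out what the text's numbers suggest about a single furnace's annual output; the per-furnace tonnage is an inference from those figures, not a number the text states.

```python
# Working out the ratio implied by the passage's figures (the per-furnace
# output is an inference from those figures, not a number the text states):
acres_per_ton = 100_000 / 10_000        # ten acres of forest per ton of iron
acres_cleared_per_year = 4_000          # one seventeenth-century furnace
tons_per_year = acres_cleared_per_year / acres_per_ton
print(f"implied output: about {tons_per_year:.0f} tons of iron per year")
```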

Until 1709, and the arrival of Abraham Darby.

ABRAHAM DARBY WAS BORN in a limestone mining region of the West Midlands, in a village with the memorable name of Wren's Nest. He was descended from barons and earls, though the descent was considerable by the time Abraham was born in 1678. His father, a locksmith and sometime farmer, was at least prosperous enough to stake his son to an apprenticeship working in Birmingham for a "malter"-a roaster and miller of malt for use in beer and whisky. Abraham's master, Jonathan Freeth, like the Darby family, was a member of the Religious Society of Friends. By the time he was an adult, Darby had been educated in a trade and accepted into a religious community, and it is by no means clear which proved the more important in his life-indeed, in the story of industrialization.

Darby's connection with the Society of Friends-the Quakers-proved its worth fairly early. A latecomer to the confessional mosaic of seventeenth-century England, which included (in addition to the established Anglican church) Mennonites, Anabaptists, Presbyterians, Baptists, Puritans, (don't laugh) Muggletonians and Grindletonians, and thousands of very nervous Catholics, the Society of Friends was less than thirty years old when Darby was born and was illegal until passage of the Toleration Act of 1689, one of the many consequences of the arrival of William and Mary (and John Locke) the year before. Darby's Quaker affiliation was to have a number of consequences-the Society's well-known pacifism barred him, for example, from the armaments industry-but the most important was that, like persecuted minorities throughout history, the Quakers took care of their own.

So when Darby moved to Bristol in 1699, after completing his seven years of training with Freeth, he was embraced by the city's small but prosperous Quaker community, which had been established in Bristol since the early 1650s, less than a decade after the movement broke away from the Puritan establishment. The industrious Darby spent three years roasting and milling barley before he decided that brass, not beer, offered the swiftest path to riches, and in 1702, the ambitious twenty-five-year-old joined a number of other Quakers as one of the principals of the Bristol Brass Works Company.

For centuries, brass, the golden alloy of copper and zinc, had been popular all over Britain, first as a purely decorative metal used in tombstones, and then, once the deluge of silver from Spain's New World colonies inundated Europe, as the metal of choice for household utensils and vessels. However, the manufacture of those brass cups and spoons was a near monopoly of the Netherlands, where they had somehow figured out an affordable way of casting them.

The traditional method for casting brass used the same kind of forms used in the manufacture of pig iron: either loam or clay. This was fine for the fairly rough needs of iron tools, but not for kitchenware, which was why the process of fine casting in loam-time-consuming, painstaking, highly skilled-made it too costly for the mass market. This was why the technique was originally developed for more precious metals, such as bronze. Selling kitchenware to working-class English families was only practicable if the costs could be reduced-and the Dutch had figured out how. If the Bristol Brass Works was to compete with imports, it needed to do the same, and Darby traveled across the channel in 1704 to discover how.

The Dutch secret turned out to be8 casting in sand rather than loam or clay, and upon his return, Darby sought to perfect what he had learned in Holland, experimenting rigorously with any number of different sands and eventually settling, with the help of another ironworker and Quaker named John Thomas, on a material and process that he patented in 1708. It is by no means insignificant that the wording of the patent explicitly noted that the novelty of Darby's invention was not that it made more, or better, castings, but that it made them at a lower cost: "a new way of casting iron bellied pots9 and other iron bellied ware in sand only, without loam or clay, by which such iron pots and other ware may be cast fine and with more ease and expedition and may be afforded cheaper than they can by the way commonly used" (emphasis added).

Darby realized something else about his new method. If it worked for the relatively rare copper and zinc used to make brass, it might also work for far more abundant, and therefore cheaper, iron. The onetime malter tried to persuade his partners of the merits of his argument, but failed; unfortunately for Bristol, but very happily indeed for Coalbrookdale, where Darby moved in 1709, leasing the "old furnace." There, his competitive advantage, in the form of the patent on sand casting for iron, permitted him to succeed beyond expectation. Beyond even the capacity of Coalbrookdale's forests to supply one of the key inputs of ironmaking; within a year, the oak and hazel forests around the Severn were clearcut down to the stumps. Coalbrookdale needed a new fuel.

Abraham Darby wasn't the first to recognize the potential of a charcoal shortage to disrupt iron production. In March 1589, Queen Elizabeth granted one of those pre-Statute on Monopolies patents to Thomas Proctor and William Peterson, giving them license "to make iron, steel, or lead10 by using of earth-coal, sea-coal, turf, and peat in the proportion of three parts thereof to one of wood-coal." In 1612, another patent, this one running thirty-one years, was given to an inventor named Simon Sturtevant for the use of "sea-coale or pit-coale" in metalworking; the following year, Sturtevant's exclusive was voided and an ironmaster named John Rovenson was granted the "sole priviledge to make iron11 ... with sea-cole, pit-cole, earth-cole, &c."

Darby wasn't even the first in his own family to recognize the need for a new fuel. In 1619, his great-uncle (or, possibly, great-great-uncle; genealogies for the period are vague), Edward Sutton, Baron Dudley, paid a license fee to Rovenson for the use of his patent and set to work turning coal plus iron into gold. In 1622, Baron Dudley patented something-the grant, which was explicitly exempted when Edward Coke's original Statute on Monopolies took force a year later, recognized that Dudley had discovered "the mystery, art, way, and means,12 of melting of iron ore, and of making the same into cast works or bars, with sea coals or pit coals in furnaces, with bellows"-but the actual process remained, well, mysterious. Forty years later, in 1665, Baron Dudley's illegitimate son, the unfortunately named Dud Dudley, described, in his self-aggrandizing memoir, Dud Dudley's Metallum martis, their success in using pitcoal to make iron. He did not, however, describe how they did it, and the patents of the period are even vaguer than the genealogies. What can be said for certain is that both Dudleys recognized that iron production was limited by the fact that it burned wood far faster than wood could be grown.*

In the event, the younger Dudley13 continued to cast iron in quantities that, by 1630, averaged seven tons a week, but politics started to occupy more of his attention. He served as a royalist officer during the Civil War, thereby backing the losing side; in 1651, while a fugitive under sentence of death for treason and using the name Dr. Hunt, Dudley spent £700 to build a bloomery. His partners, Sir George Horsey,14 David Ramsey, and Roger Foulke, however, successfully sued him, using his royalist record against him and taking both the bloomery and what remained of "Dr. Hunt's" money.

Nonetheless, Dudley continued to try to produce high-quality iron with less (or no) charcoal, both alone and with partners. Sometime in the 1670s, he joined forces with a newly made baronet named Clement Clerke, and in 1693 the "Company for Making Iron with Pitcoal" was chartered, using a "work for remelting and casting15 old Iron with sea cole [sic]." The goal, however, remained elusive. Achieving it demanded the sort of ingenuity and "useful and reliable knowledge" acquired as an apprentice and an artisan. In Darby's case, it was a specific and unusual bit of knowledge, dating back to his days roasting malt.

As it turned out, the Shropshire countryside that had been providing the furnace at Coalbrookdale with wood was also rich in pitcoal. No one, however, had used it to smelt iron because, Dudley and Clerke's attempts notwithstanding, the impurities it introduced into the molten iron, mostly sulfur, made for a very brittle, inferior product. For similar reasons, coal is an equally poor fuel for roasting barley malt: while Londoners would, complainingly, breathe sulfurous air from coal-fueled fireplaces, they weren't about to drink beer that tasted like rotten eggs. The answer, as Abraham Darby had every reason to know, was coke.

Coke is what you get when soft, bituminous coal is baked in a very hot oven to draw off most of the contaminants, primarily sulfur. What is left behind is not as pure as charcoal, but it is far cleaner than pitcoal, and while it was therefore not perfect for smelting iron, it was a lot cheaper than the rapidly vanishing store of wood. Luckily for Darby,16 both the ore and the coke available in Shropshire were unusually low in sulfur, which minimized the contamination that would otherwise have made the resulting iron too brittle.

Using coke offered advantages beyond low cost. The problem with charcoal as a smelting fuel, even when it was abundant, was that a blast furnace needs to keep iron ore and fuel in contact while burning in order to incorporate carbon into the iron's molecular lattice. Charcoal, however, crushes relatively easily, which meant that it couldn't be piled very high in a furnace before it turned to powder under its own weight. This, in turn, put serious limits on the size of any charcoal-fueled blast furnace.

Those limits vanished with Darby's decision in 1710 to use coke, the cakes of which were already compressed by the baking process, in the old furnace at Coalbrookdale. And indeed, the first line of any biography of Abraham Darby will mention the revolutionary development that coke represents in the history of industrialization. But another element of Darby's life does even more to illuminate the peculiarly English character of the technological explosion that is the subject of this book.

The element remains, unsurprisingly, iron.

IN THE DAYS BEFORE modern quality control, the process of casting iron was highly problematic, since iron ore was as variable as fresh fruit. Its quality depended largely on the other elements bound to it, particularly the quantity of silicon and the quality of carbon. Lacking the means to analyze those elements chemically, ironmakers instead categorized by color. Gray iron contains carbon in the form of graphite (the stuff in pencils) and is a pretty good casting material; the carbon in white iron is combined with other elements (such as sulfur, which makes iron pyrite, or marcasite) that leave it far more brittle. The classification of iron, in short, was almost completely empirical. Two men did the decisive work in establishing a scale so accurate that it defined pretty much the same ten grades used today. One was Abraham Darby; the other was a Frenchman: René Antoine de Réaumur.

Réaumur was a gentleman scientist very much in the mold of Robert Boyle. Like Boyle, he was born into the aristocracy, was a member of the "established"-i.e., Catholic-church, was educated at the finest schools his nation could offer, including the University of Paris, and, again like Boyle at the Royal Society, was one of the first members of the French Académie. His name survives most prominently in the thermometric system he devised in 1730, one that divided the range between freezing and boiling into eighty degrees; the Réaumur scale stayed popular throughout Europe until the nineteenth century and the incorporation of the Celsius scale into the metric system.* His greatest contribution to metallurgical history17 was his 1722 insight that the structure of iron was a function of the properties of the other elements with which it was combined, particularly sulfur-an insight he, like Darby, used to classify the various forms of iron.
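Since both scales put freezing at zero, conversion is a single ratio: eighty Réaumur degrees span the same interval as one hundred Celsius degrees. A minimal sketch of that arithmetic in Python (the function names are mine, purely for illustration):

```python
def celsius_to_reaumur(c: float) -> float:
    """Convert Celsius to Réaumur: 100 Celsius degrees span 80 Réaumur degrees."""
    return c * 4 / 5

def reaumur_to_celsius(re: float) -> float:
    """The inverse conversion: 80 Réaumur degrees span 100 Celsius degrees."""
    return re * 5 / 4

# Both scales share a zero at freezing, so boiling water is 100 °C = 80 °Ré.
assert celsius_to_reaumur(100) == 80
assert reaumur_to_celsius(80) == 100
```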

Unlike Darby, however, he was a scientist before he was an inventor, and long before he was an entrepreneur or even on speaking terms with one. It is instructive that when the government of France, under Louis XV's minister Cardinal de Fleury, made a huge investment in the development of "useful knowledge" (they used the phrase), Réaumur was awarded a generous pension for his discoveries in the grading of iron-and he turned it down because he didn't need it.

Scarcely any two parallel lives do more to demonstrate the differences between eighteenth-century France and Britain: the former a national culture with a powerful affection for pure over applied knowledge, the latter the first nation on earth to give inventors the legally sanctioned right to exploit their ideas. It isn't, of course, that Britain didn't have its own Réaumurs-the Royal Society was full of skilled scientists uninterested in any involvement in commerce-but rather that it also had thousands of men like Darby: an inventor and engineer who cared little about scientific glory but a whole lot about pots and pans.

IF THE CAST IRON used for pots and pans was the most mundane version of the element, the most sublime was steel. As with all iron alloys, carbon is steel's critical component. In its simplest terms, wrought iron has essentially no minimum amount of carbon, just as there is no maximum carbon content for cast iron. As a result, the recipe for either has a substantial fudge factor. Not so with steel. Achieving steel's unique combination of strengths demands a very narrow range of carbon: between 0.25 percent and a bit less than 2 percent. For centuries* this has meant figuring out how to initiate the process whereby carbon insinuates itself into iron's crystalline structure, and how to stop it once it achieves the proper percentage. The techniques used have ranged from the monsoon-driven wind furnaces of south Asia to the quenching and requenching of white-hot iron in water, all of which made steelmaking a boutique business: good for swords and other edged objects, but not easy to scale up for the production of either a few large pieces or many smaller ones. By the eighteenth century, the most popular method for steelmaking was the cementation process, which stacked a number of bars of wrought iron in a box, bound them together, surrounded them with charcoal, and heated the iron at 1,000°C for days, a process that forced some of the carbon into a solid solution with the iron. The resulting high-carbon "blister" steel was expensive, frequently excellent, but, since the amount of carbon was wildly variable, inconsistent.
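To put those thresholds in one place: anything below steel's band behaves as wrought iron, anything above it as cast iron, with steel confined to the narrow middle. A minimal sketch of that rule of thumb (in Python; the function name is mine, and real alloy boundaries are fuzzier than these hard cutoffs suggest):

```python
def classify_iron_alloy(carbon_pct: float) -> str:
    """Crude classification of an iron alloy by carbon content
    (percent by weight), using the ranges described above."""
    if carbon_pct < 0.25:
        return "wrought iron"  # essentially no minimum carbon at the low end
    elif carbon_pct < 2.0:
        return "steel"         # the narrow band where steel's strengths appear
    else:
        return "cast iron"     # no practical maximum carbon at the high end

# Examples: gray cast iron runs around 3.5% carbon; blister steel around 1.5%.
assert classify_iron_alloy(3.5) == "cast iron"
assert classify_iron_alloy(1.5) == "steel"
```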

Inconsistently good steel was still better than no steel at all. A swordsman would be more formidable with a weapon made of the "jewel steel" that the Japanese call tamahagane than with a more ordinary alloy, but either one is quite capable of dealing out mayhem. Consistency gets more important as precision becomes more valuable, which means that if you had to imagine where consistency in steel manufacturing ...
