Medical Nemesis:
The Expropriation of Health

by

Ivan Illich



Social Iatrogenesis



2 – The Medicalization of Life

Political Transmission of Iatrogenic Disease

Until recently medicine attempted to enhance what occurs in nature. It fostered the tendency of wounds to heal, of blood to clot, and of bacteria to be overcome by natural immunity. Now medicine tries to engineer the dreams of reason. Oral contraceptives, for instance, are prescribed “to prevent a normal occurrence in healthy persons.” Therapies induce the organism to interact with molecules or with machines in ways for which there is no precedent in evolution. Grafts involve the outright obliteration of genetically programmed immunological defenses. The relationship between the interest of the patient and the success of each specialist who manipulates one of his “conditions” can thus no longer be assumed; it must now be proved, and the net contribution of medicine to society's burden of disease must be assessed from without the profession. But any charge against medicine for the clinical damage it causes constitutes only the first step in the indictment of pathogenic medicine. The trail beaten in the harvest is only a reminder of the greater damage done by the baron to the village that his hunt overruns.

Social Iatrogenesis

Medicine undermines health not only through direct aggression against individuals but also through the impact of its social organization on the total milieu. When medical damage to individual health is produced by a sociopolitical mode of transmission, I will speak of “social iatrogenesis,” a term designating all impairments to health that are due precisely to those socio-economic transformations which have been made attractive, possible, or necessary by the institutional shape health care has taken.

Social iatrogenesis designates a category of etiology that encompasses many forms. It obtains when medical bureaucracy creates ill-health by increasing stress, by multiplying disabling dependence, by generating new painful needs, by lowering the levels of tolerance for discomfort or pain, by reducing the leeway that people are wont to concede to an individual when he suffers, and by abolishing even the right to self-care. Social iatrogenesis is at work when health care is turned into a standardized item, a staple; when all suffering is “hospitalized” and homes become inhospitable to birth, sickness, and death; when the language in which people could experience their bodies is turned into bureaucratic gobbledegook; or when suffering, mourning, and healing outside the patient role are labeled a form of deviance.

Medical Monopoly

Like its clinical counterpart, social iatrogenesis can escalate from an adventitious feature into an inherent characteristic of the medical system. When the intensity of biomedical intervention crosses a critical threshold, clinical iatrogenesis turns from error, accident, or fault into an incurable perversion of medical practice. In the same way, when professional autonomy degenerates into a radical monopoly and people are rendered impotent to cope with their milieu, social iatrogenesis becomes the main product of the medical organization.

A radical monopoly goes deeper than that of any one corporation or any one government. It can take many forms. When cities are built around vehicles, they devalue human feet; when schools pre-empt learning, they devalue the autodidact; when hospitals draft all those who are in critical condition, they impose on society a new form of dying. Ordinary monopolies corner the market; radical monopolies disable people from doing or making things on their own. The commercial monopoly restricts the flow of commodities; the more insidious social monopoly paralyzes the output of nonmarketable use-values. Radical monopolies impinge still further on freedom and independence. They impose a society-wide substitution of commodities for use-values by reshaping the milieu and by “appropriating” those of its general characteristics which have enabled people so far to cope on their own. Intensive education turns autodidacts into unemployables, intensive agriculture destroys the subsistence farmer, and the deployment of police undermines the community's self-control. The malignant spread of medicine has comparable results: it turns mutual care and self-medication into misdemeanors or felonies. Just as clinical iatrogenesis becomes medically incurable when it reaches a critical intensity and then can be reversed only by a decline of the enterprise, so can social iatrogenesis be reversed only by political action that retrenches professional dominance.

A radical monopoly feeds on itself. Iatrogenic medicine reinforces a morbid society in which social control of the population by the medical system turns into a principal economic activity. It serves to legitimize social arrangements into which many people do not fit. It labels the handicapped as unfit and breeds ever new categories of patients. People who are angered, sickened, and impaired by their industrial labor and leisure can escape only into a life under medical supervision and are thereby seduced or disqualified from political struggle for a healthier world.

Social iatrogenesis is not yet accepted as a common etiology of disease. If it were recognized that diagnosis often serves as a means of turning political complaints against the stress of growth into demands for more therapies that are just more of its costly and stressful outputs, the industrial system would lose one of its major defenses. At the same time, awareness of the degree to which iatrogenic ill-health is politically communicated would shake the foundations of medical power much more profoundly than any catalogue of medicine's technical faults.

Value-free Cure?

The issue of social iatrogenesis is often confused with the diagnostic authority of the healer. To defuse the issue and to protect their reputation, some physicians insist on the obvious: namely, that medicine cannot be practiced without the iatrogenic creation of disease. Medicine always creates illness as a social state. The recognized healer transmits to individuals the social possibilities for acting sick. Each culture has its own characteristic perception of disease and thus its unique hygienic mask. Disease takes its features from the physician who casts the actors into one of the available roles. To make people legitimately sick is as implicit in the physician's power as the poisonous potential of the remedy that works. The medicine man commands poisons and charms. The Greeks' only word for “drug”— pharmakon— did not distinguish between the power to cure and the power to kill.

Medicine is a moral enterprise and therefore inevitably gives content to good and evil. In every society, medicine, like law and religion, defines what is normal, proper, or desirable. Medicine has the authority to label one man's complaint a legitimate illness, to declare a second man sick though he himself does not complain, and to refuse a third social recognition of his pain, his disability, and even his death. It is medicine which stamps some pain as “merely subjective,”  some impairment as malingering, and some deaths—though not others—as suicide. The judge determines what is legal and who is guilty. The priest declares what is holy and who has broken a taboo. The physician decides what is a symptom and who is sick. He is a moral entrepreneur, charged with inquisitorial powers to discover certain wrongs to be righted. Medicine, like all crusades, creates a new group of outsiders each time it makes a new diagnosis stick. Morality is as implicit in sickness as it is in crime or in sin.

In primitive societies it is obvious that in the exercise of medical skill, the recognition of moral power is implied. Nobody would summon the medicine man unless he conceded to him the skill of discerning evil spirits from good ones. In a higher civilization this power expands. Here medicine is exercised by full-time specialists who control large populations by means of bureaucratic institutions. These specialists form professions which exercise a unique kind of control over their own work. Unlike unions, these professions owe their autonomy to a grant of confidence rather than to victory in a struggle. Unlike guilds, which determine only who shall work and how, they determine also what work shall be done. In the United States the medical profession owes this supreme authority to a reform of the medical schools just before World War I. The medical profession is a manifestation in one particular sector of the control over the structure of class power which the university-trained elites have acquired. Only doctors now “know” what constitutes sickness, who is sick, and what shall be done to the sick and to those whom they consider at a special risk. Paradoxically, Western medicine, which has insisted on keeping its power apart from law and religion, has now expanded it beyond precedent. In some industrial societies social labeling has been medicalized to the point where all deviance has to have a medical label. The eclipse of the explicit moral component in medical diagnosis has thus invested Aesculapian authority with totalitarian power.

The divorce between medicine and morality has been defended on the ground that medical categories, unlike those of law and religion, rest on scientific foundations exempt from moral evaluation. Medical ethics have been secreted into a specialized department that brings theory into line with actual practice. The courts and the law, when they are not used to enforce the Aesculapian monopoly, are turned into doormen of the hospital who select from among the clients those who can meet the doctors' criteria. Hospitals turn into monuments of narcissistic scientism, concrete manifestations of those professional prejudices which were fashionable on the day their cornerstone was laid and which were often outdated when they came into use. The technical enterprise of the physician claims value-free power. It is obvious that in this kind of context it is easy to shun the issue of social iatrogenesis with which I am concerned. Politically mediated medical damage is thus seen as inherent in medicine's mandate, and its critics are viewed as sophists trying to justify lay intrusion into the medical bailiwick. Precisely for this reason, a lay review of social iatrogenesis is urgent. The assertion of value-free cure and care is obviously malignant nonsense, and the taboos that have shielded irresponsible medicine are beginning to weaken.

The Medicalization of the Budget

The most handy measure of the medicalization of life is the share taken out of a typical yearly income to be spent under doctor's orders. In America before 1950, this was less than a month's income, but by the mid-seventies, the equivalent of between five and seven weeks of the typical worker's earnings was spent on the purchase of medical services. The United States now spends about $95 billion a year for health care, about 8.4 percent of the gross national product in 1975, up from 4.5 percent in 1962. During the past twenty years, while the price index in the United States has risen by about 74 percent, the cost of medical care has escalated by 330 percent. Between 1950 and 1971 public expenditure for health insurance increased tenfold, private insurance benefits increased eightfold, and direct out-of-pocket payments about threefold. In over-all expenditures other countries such as France and Germany kept abreast of the United States. In all industrial nations — Atlantic, Scandinavian, or East European — the growth rate of the health sector has advanced faster than that of the GNP. Even discounting inflation, federal health outlays increased by more than 40 percent between 1969 and 1974. The medicalization of the national budget, moreover, is not a privilege of the rich: in Colombia, a poor country that notoriously favors its rich, the proportion, as in England, is more than 10 percent.

Some of this has enriched doctors, who until the French Revolution earned their living as artisans. A few always lived well, but more died poor. The proverb “Few lawyers die well, few physicians live well” had its equivalent in most European languages. Now physicians have come to the top, and in capitalist societies this top is high indeed. Yet it would be inaccurate to blame the inflation in medicine on the greed of the medical profession. Much more of the increase has gone to a host of well-titled medical paper-shufflers whom United States universities began to graduate in the fifties: to those with masters' degrees in nursing supervision or with doctorates in hospital administration, and to all the lower ranks on which the new bureaucrats feed. The cost of administering the patient, his files, and the checks he writes and receives can take a quarter out of each dollar on his bill. More goes to the bankers; in some cases the so-called “legitimate” administrative costs in the medical health insurance business have risen to 70 percent of the payment made to commercial carriers.

Even more significant is the new prejudice in favor of high-cost hospital care. Since 1950 the cost of keeping a patient for one day in a community hospital in the United States has risen by 500 percent. The bill for patient care in the major university hospitals has risen even faster, tripling in eight years. Administrative costs have exploded, multiplying since 1964 by a factor of 7; laboratory costs have risen by a factor of 5, medical salaries only by a factor of 2. The construction of hospitals now costs in excess of $85,000 per bed, of which two-thirds buys mechanical equipment that is made obsolete within less than ten years. These rates are almost twice those of the cost increases and of the obsolescence prevalent in modern weapons systems. Cost overruns in programs of the Health, Education, and Welfare Department exceed those in the Pentagon. Between 1968 and 1970 Medicaid costs increased three times faster than the number of people served. In the last four years hospital insurance benefits have almost doubled in cost, and physicians' fees have increased almost twice as fast as had been planned. There is no precedent for a similar sustained expansion in any other major sector in the civilian economy. It is therefore ironic that during this unique boom in health care the United States established another “first.” Shortly after the boom started, the life expectancy for adult American males began to decline and is now expected to decline even further. The death rate for American males aged forty-five to fifty-four is comparatively high. Of every 100 males in the United States who turn forty-five only 90 will see their fifty-fifth birthday, while in Sweden 95 will survive the decade. But Sweden, Germany, Belgium, Canada, France, and Switzerland are now catching up with the United States: both their age-specific death rates for adult males and their global medical costs are shooting up.

The phenomenal rise in cost of health services in the United States has been explained in different ways: some blame irrational planning, others the higher cost of the new gimmicks that people want in hospitals. The most common interpretation at present relates to the growing incidence of prepayment of services. Hospitals register well-insured patients, and rather than providing old products more efficiently and cheaply, are economically motivated to move towards new and increasingly expensive ways of doing things. Changing products rather than higher labor costs, bad administration, or lack of technological progress are blamed for the rise. In this perspective the change in products seems due precisely to the increased insurance coverage which encourages hospitals to provide products more expensive than the customer actually wants, needs, or would have been willing to pay for directly. His out-of-pocket costs appear increasingly modest, even though the services offered by the hospital are more costly. Insurance for high-cost sick-care is thus a self-reinforcing process which invests the providers of care with the control of increasing resources. As an antidote, some critics recommend enlightened cost consciousness on the part of consumers; others, not trusting the self-control of laymen, recommend mechanisms to heighten the cost consciousness of producers. Physicians, they argue, would prescribe more responsibly and less wantonly if they were paid (as are general practitioners in Britain) on a “capitation” basis that provided a fixed amount for the maintenance of their clients rather than a fee for service. But like all other such remedies, capitation enlarges the iatrogenic fascination with the health supply. People forgo their own lives to get as much treatment as they can.

In England the National Health Service has tried, albeit unsuccessfully, to ensure that cost inflation will be less plagued by conspicuous flimflam. The National Health Service Act of 1946 established access to health-care resources for all those in need as a human right. The need was assumed to be finite and quantifiable, the ballot box the best place to decide the total budget for health, and doctors the only ones able to determine the resources that would satisfy the need of each patient. But need as assessed by medical practitioners has proved to be just as extensive in England as anywhere else. The fundamental hope for the success of the English health-care system lay in the belief in the ability of the English to ration supply. Until about 1972 they did so, in the opinion of an author who surveyed British health economics, “by means in their way almost as ruthless—but generally held to be more acceptable—than the ability to pay.” Until that time health care was kept below 6 percent of GNP, 10 percent of public spending. Private practice had shrunk from half of all care to 4 percent. Direct charges to patients were kept at a phenomenally low 5 percent of the cost. But this stern commitment to equality prevented only those astounding misallocations for prestigious gadgetry which provided an easy starting point for public criticism in the United States. Since 1972 the Health Service in Britain has undergone a traumatic change, for complex economic and political reasons. The initial success of the Health Service and the present unique disarray in the system make predictions for the future impossible. Demedicalization of health care is as essential there as elsewhere. Yet curiously, England is also one of the few industrialized countries where the life expectancy of adult males has not yet declined, though the chronic diseases of this group have already shown an increase similar to that observed a decade earlier across the Atlantic.

Information on costs in the Soviet Union is more difficult to come by. The number of physicians and hospital days per capita seems to have doubled between 1960 and 1972, and costs to have increased by about 260 percent. The main claim to superiority of Soviet medicine is still based on “prophylaxis built into the social system itself,” without this affecting the relative volume of disease or care in comparison with other industrial countries of similar development. But the theory that therapeutics would wither away with the state became heresy in 1932 and has remained so ever since.

Distinct political systems organize pathologies into different diseases and thus create distinct categories of demand, supply, and unmet needs. But no matter how disease is perceived, the cost of treatment rises at comparable rates. The Russians, for instance, limit by decree mental disease requiring hospitalization: they allow only 10 percent of all hospital beds for such cases. But at a given GNP all industrial nations generate the same kind of dependence on the physician, and do so irrespective of their ideology and the nosology these beliefs engender. (Of course, capitalism has proved that it can do so at a much higher social cost.) Everywhere in the mid-seventies the main constraint on professional activity is the necessity to reduce costs.

The proportion of national wealth which is channeled to doctors and expended under their control varies from one nation to another and falls somewhere between one-tenth and one-twentieth of all available funds. But this should lead nobody to believe that health expenditures on the typical citizen in poor countries are anywhere proportionate to the countries' per capita average income. Most people get absolutely nothing. Excepting only the money allocated for treatment of water supplies, 90 percent of all funds earmarked for health in developing countries is spent not for sanitation but for treatment of the sick. From 70 percent to 80 percent of the entire public health budget goes to the cure and care of individuals as opposed to public health services. Most of this money is spent everywhere on the same kinds of things.

All countries want hospitals, and many want them to have the most exotic modern equipment. The poorer the country, the higher the real cost of each item on its inventories. Modern hospital beds, incubators, laboratories, respirators, and operating rooms cost even more in Africa than their counterparts in Germany or France where they are manufactured: they also break down more easily in the tropics, are more difficult to service, and are more often than not out of use. As to cost, the same is true of the physicians who are made to measure for these gadgets. The education of an open-heart surgeon represents a comparable capital investment, whether he comes from the Mexican school system or is the cousin of a Brazilian captain sent on a government scholarship to study in Hamburg. The United States might be too poor to provide renal dialysis at $15,000 per year to all those citizens who would claim to need it, but Ghana is too poor to provide the people equitably with physicians for primary care. The socially critical maximum cost of items that can be equitably shared varies from one place to another. But whenever tax funds are used to finance treatment above the critical cost, the system of medical care acts inevitably as a device for the net transfer of power from the majority who pay the taxes to the few who are selected because of their money, schooling, or family ties, or because of their special interest to the experimenting surgeon.

It is clearly a form of exploitation when four-fifths of the real cost of private clinics in poor Latin American countries is paid for by the taxes collected for medical education, public ambulances, and medical equipment. In this case the concentration of public resources on a few is obviously unjust because the ability to pay out of pocket a fraction of the total cost of treatment is a condition for getting the rest underwritten. But the exploitation is no less in places where the public, through a national health service, assigns to physicians the sole power to decide who “needs” their kind of treatment, and then lavishes public support on those on whom they experiment or practice. The public acquiescence in the doctor's monopoly on identifying needs only broadens the base from which doctors can sell their services.

Indirectly, conspicuous therapies serve as powerful devices to convince people that they should pay more taxes to get them to all those whom doctors have declared in need. Once President Frei of Chile had started on one palace for medical spectator-sports, his successor, Salvador Allende, was forced to promise three more. The prestige of a puny national team in the medical Olympics is used to intensify a nationwide addiction to therapeutic relationships that are pathogenic on a level much deeper than mere medical vandalism. More health damage is caused by people's belief that they cannot cope with their illness unless they call on the doctor than doctors could ever cause by foisting their ministrations on people.

Only in China—at least, at first sight—does the trend seem to run in the opposite direction: primary care is given by nonprofessional health technicians assisted by health apprentices who leave their regular jobs in the factory when they are called on to assist a member of their brigade. Nutrition, environmental hygiene, and birth control have improved beyond comparison. The achievements in the Chinese health sector during the late sixties have proved, perhaps definitively, a long-debated point: that almost all demonstrably effective technical health devices can be taken over within months and used competently by millions of ordinary people. Despite such successes, an orthodox commitment to Western dreams of reason in Marxist shape may now destroy what political virtue, combined with traditional pragmatism, has achieved. The bias towards technological progress and centralization is reflected already in the professional reaches of medical care. China possesses not only a paramedical system but also medical personnel whose educational standards are known by their counterparts around the world to be of the highest order, and which differ only marginally from those of other countries. Most investment during the last four years seems to have gone towards the further development of this extremely well qualified and highly orthodox medical profession, which is getting increasing authority to shape the over-all health goals of the nation. “Barefoot medicine” is losing its makeshift, semi-independent, grassroots character and is being integrated into a unitary health-care technocracy. University-trained personnel instruct, supervise, and complement the locally elected healer. This ideologically fueled development of professional medicine in China will have to be consciously limited in the very near future if it is to remain a balancing complement rather than an obstacle to high-level self-care. Without comparable statistics, statements on Chinese medical economy remain vague. But there is no reason to believe that cost increases in pharmaceutical, hospital, and professional medicine in China are less than in other countries. For the time being, however, it can be argued that in China modern medicine in rural districts was so scarce that recent increments contributed significantly to health levels and to increased equity in access to care.

In all countries the medicalization of the budget is related to well-recognized exploitation within the class structure. No doubt, the dominance of capitalist oligarchies in the United States, the superciliousness of the new mandarins in Sweden, the servility and ethnocentrism of Moscow professionals, and the lobby of the American Medical and Pharmaceutical Associations, as well as the new rise of union power in the health sector, are all formidable obstacles to a distribution of resources in the interests of the sick rather than of their self-appointed caretakers. But the fundamental reason why these costly bureaucracies are health-denying lies not in their instrumental but in their symbolic function: they all stress delivery of repair and maintenance services for the human component of the megamachine, and criticism that proposes better and more equitable delivery only reinforces the social commitment to keep people at work in sickening jobs. The war between the proponents of unlimited national health insurance and those who stand up for national health maintenance, as well as the war between those defending and those attacking all private practice, shifts public attention from the damage done by doctors who protect a destructive social order to the fact that doctors do less than expected in defense of a consumer society.

Beyond a certain encroachment on the budget, money that expands medical control over space, schedules, education, diet, or the design of machines and goods will inevitably unleash a “nightmare forged from good intentions.” Money may always threaten health. Too much money corrupts it. Beyond a certain point, what can produce money or what money can buy restricts the range of self-chosen “life.” Not only production but also consumption stresses the scarcity of time, space, and choice. Therefore the prestige of medical staples must sap the cultivation of health, which, within a given environment, to a large extent depends on innate and inbred mettle. The more time, toil, and sacrifice spent by a population in producing medicine as a commodity, the larger will be the by-product, namely, the fallacy that society has a supply of health locked away which can be mined and marketed. The negative function of money is that of an indicator of the devaluation of goods and services that cannot be bought. The higher the price tag at which well-being is commandeered, the greater will be the political prestige of an expropriation of personal health.

The Pharmaceutical Invasion

Doctors are not needed to medicalize a society's drugs. Even without too many hospitals and medical schools a culture can become the prey of a pharmaceutical invasion. Each culture has its poisons, its remedies, its placebos, and its ritual settings for their administration. Most of these are destined for the healthy rather than for the sick. Powerful medical drugs easily destroy the historically rooted pattern that fits each culture to its poisons; they usually cause more damage than profit to health, and ultimately establish a new attitude in which the body is perceived as a machine run by mechanical and manipulating switches.

In the 1940s few of the prescriptions written in Houston or Madrid could have been filled in Mexico, except in the zona rosa of Mexico City, where international pharmacies flourish alongside boutiques and hotels. Today Mexican village drugstores offer three times as many items as drugstores in the United States. In Thailand  and Brazil, many items that are elsewhere outdated, or illegal surplus and duds, are dumped into pharmacies by manufacturers who sail under many flags of convenience. In the past decade, while a few rich countries began to control the damage, waste, and exploitation caused by the licit drug-pushing of their doctors, physicians in Mexico, Venezuela, and even Paris had more difficulty than ever before in getting information on the side-effects of the drugs they prescribed. Only ten years ago, when drugs were relatively scarce in Mexico, people were poor, and most sick persons were attended by grandmother or the herbalist, pharmaceuticals came packaged with a descriptive leaflet. Today drugs are more plentiful, more powerful, and more dangerous; they are sold by television and radio; people who have attended school feel ashamed of their lingering trust in the Aztec curer; and the leaflet has been replaced by one standard note which says “on prescription.” The fiction which is meant to exorcise the drug by medicalizing it in fact only confounds the buyer. The warning to consult a doctor makes the buyer believe he is incompetent to beware. In most countries of the world, doctors are simply not well enough spread out to prescribe double-edged medicine each time it is indicated, and most of the time they themselves are not prepared, or are too ignorant, to prescribe with due prudence. As a consequence the physician's function, especially in poor countries, has become trivial: he has been turned into a routine prescription machine that is constantly ridiculed, and most people now take the same drugs, just as haphazardly, but without his approval.

Chloramphenicol is a good example of the way reliance on prescription can be useless for the protection of patients and can even promote abuse. During the 1960s this drug was packaged as Chloromycetin by Parke-Davis and brought in about one-third of the company's over-all profits. By then it had been known for several years that people who take this drug stand a certain chance of dying of aplastic anemia, an incurable disease of the blood. Typhoid is almost the only disease that, with serious qualifications, does justify the taking of this substance. Through the late fifties and early sixties, Parke-Davis, notwithstanding strong clinical contraindications, spent large sums to promote their winner. Doctors in the United States prescribed chloramphenicol to almost four million people per year to treat them for acne, sore throat, the common cold, and even such trifles as infected hangnail. Since typhoid is rare in the United States, no more than one in 400 of those given the drug “needed” the treatment. Unlike thalidomide, which disfigures, chloramphenicol kills: it puts its victims out of sight, and hundreds of them in the United States died undiagnosed.

Self-control by the profession on such matters has never worked, and medical memories have proved particularly short. The best one can say is that in Holland or Norway or Denmark, self-regulation has at certain moments been less ineffective than in Germany or France  or Italy, and that American doctors have a particular facility for admitting past mistakes and jumping on new bandwagons. In the United States in the fifties, control over drugs by regulatory agencies was at a low ebb and self-control was nominal. Then, during the sixties, concerned newspapermen, medical men, and politicians launched a campaign that exposed the subservience of physicians and government officials to pharmaceutical firms and described some of the prevalent patterns of white-collar crimes in medicine. Within two months after the exposure at a congressional hearing, the use of chloramphenicol in the United States dwindled. Parke-Davis was forced to insert strict warnings of hazards and cautionary statements about the use of this drug into every package. But these warnings did not extend to exports. The drug continued to be used indiscriminately in Mexico, not only in self-medication but on prescription, thereby breeding a drug-resistant strain of typhoid bacilli which is now spreading from Central America to the rest of the world. One doctor in Latin America who was also a statesman did try to stem the pharmaceutical invasion rather than just enlist physicians to make it look more respectable. During his short tenure as president of Chile, Dr. Salvador Allende quite successfully mobilized the poor to identify their own health needs and much less successfully compelled the medical profession to serve basic rather than profitable needs. He proposed to ban drugs unless they had been tried on paying clients in North America or Europe for as long as the patent protection would run. He revived a program aimed at reducing the national pharmacopeia to a few dozen items, more or less the same as those carried by the Chinese barefoot doctor in his black wicker box. Notably, within one week after the Chilean military junta took power on September 11, 1973, many of the most outspoken proponents of a Chilean medicine based on community action rather than on drug imports and drug consumption had been murdered.

The overconsumption of medical drugs is, of course, not restricted to areas where doctors are scarce or people are poor. In the United States, the volume of the drug business has grown by a factor of 100 during the current century: 20,000 tons of aspirin are consumed per year, almost 225 tablets per person. In England, every tenth night of sleep is induced by a hypnotic drug and 19 percent of women and 9 percent of men take a prescribed tranquilizer during any one year. In the United States, central nervous system agents are the fastest-growing sector of the pharmaceutical market, now making up 31 percent of total sales. Dependence on prescribed tranquilizers has risen by 290 percent since 1962, a period during which the per capita consumption of liquor rose by only 23 percent and the estimated consumption of illegal opiates by about 50 percent. A significant quantity of “uppers” and “downers” is obtained in all countries by circumventing the doctor. Medicalized addiction in 1975 has outgrown all self-chosen or more festive forms of creating well-being.

It has become fashionable to blame multinational pharmaceutical firms for the increase in medically prescribed drug abuse; their profits are high and their control over the market is unique. For fifteen years, drug industry profits (as a percentage of sales and company net worth) have outranked those of all other manufacturing industries listed on the Stock Exchange. Drug prices are controlled and manipulated: the same bottle that sells for two dollars in Chicago or Geneva where it is produced, but where it faces competition, sells for twelve dollars in a poor country where it does not.  The markup, moreover, is phenomenal: forty dollars' worth of diazepam, once stamped into pills and packaged as Valium, sells for a range of high prices, some as much as 70 times that of phenobarbital, which, in the opinion of most pharmacologists, has the same indications, effects, and dangers.  As commodities, prescription drugs behave differently from most other items: they are products that the ultimate consumer rarely selects for himself. The producer's sales efforts are directed at the “instrumental consumer,” the doctor who prescribes but does not pay for the product. To promote Valium, Hoffmann-LaRoche spent $200 million in ten years and commissioned some two hundred doctors a year to produce scientific articles about its properties. In 1973, the entire drug industry spent an average of $4,500 on each practicing physician for advertising and promotion, about the equivalent of the cost of a year in medical school; in the same year, the industry contributed less than 3 percent to the budget of American medical schools.

Surprisingly, however, the per capita use of medically prescribed drugs around the world seems to have little to do with commercial promotion; it correlates mostly with the number of doctors, even in socialist countries where the education of physicians is not influenced by drug industry publicity and where corporate drug-pushing is limited. Over-all drug consumption in industrial societies is not fundamentally affected by the proportion of items sold by prescription, over the counter, or illegally, and it is not affected by whether the purchase is paid for out of pocket, through prepaid insurance, or through welfare funds. In all countries, doctors work increasingly with two groups of addicts: those for whom they prescribe drugs, and those who suffer from their consequences. The richer the community, the larger the percentage of patients who belong to both.

To blame the drug industry for prescribed-drug addiction is therefore as irrelevant as blaming the Mafia for the use of illicit drugs. The current pattern of overconsumption of drugs—be they effective remedy or anodyne; prescription item or part of everyday diet; free, for sale, or stolen—can be explained only as the result of a belief that so far has developed in every culture where the market for consumer goods has reached a critical volume. This pattern is consistent with the ideology of any society oriented towards open-ended enrichment, regardless whether its industrial product is meant for distribution by the presumption of planners or by the forces of the market. In such a society, people come to believe that in health care, as in all other fields of endeavor, technology can be used to change the human condition according to almost any design. Penicillin and DDT, consequently, are viewed as the hors d'oeuvres preceding an era of free lunches. The sickness resulting from each successive course of miracle foods is dealt with by serving still another course of drugs. Thus overconsumption reflects a socially sanctioned, sentimental hankering for yesterday's progress.

The age of new drugs began with aspirin in 1899. Before that time, the doctor himself was without dispute the most important therapeutic agent. Besides opium, the only substances of wide application which would have passed tests for safety and effectiveness were smallpox vaccine, quinine for malaria, and ipecac for dysentery. After 1899 the flood of new drugs continued to rise for half a century. Few of these turned out to be safer, more effective, and cheaper than well-known and long-tested therapeutic standbys, whose numbers grew at a much slower rate. In 1962, when the United States Food and Drug Administration began to examine the 4,300 prescription drugs that had appeared since World War II, only 2 out of 5 were found effective. Many of the new drugs were dangerous, and among those that met FDA standards, few were demonstrably better than those they were meant to replace. Only a small fraction of these chemical substances constitute valuable contributions to the pharmacopeia used in primary care. They include some new kinds of remedies such as antibiotics, but also old remedies which, in the course of the drug age, came to be understood well enough to be used effectively: digitalis, reserpine, and belladonna are examples. Opinions vary about the actual number of useful drugs: some experienced clinicians believe that less than two dozen basic drugs are all that will ever be desirable for 99 percent of the total population; others, that up to four dozen items are optimal for 98 percent.

The age of great discoveries in pharmacology lies behind us. According to the present director of FDA, the drug age began to decline in 1956. Genuinely new drugs have appeared in decreasing numbers, and many which temporarily glittered in Germany, England, or France, where standards are less stringent than in the United States, Sweden, and Canada, were soon forgotten or are remembered with embarrassment. There is not much territory left to explore. Novelties are either “package deals” — fixed-dose combinations — or medical “me-toos” that are prescribed by physicians because they have been well promoted. The seventeen-year protection that the patent law gives to significant newcomers has run out for most. Now anyone can make them, so long as he does not use the original brand names, which are indefinitely protected by trademark laws. Considerable research has so far produced no reason to suspect that drugs marketed under their generic names in the United States are less effective than their brand-named counterparts, which cost from 3 to 15 times more.

The fallacy that society is caught forever in the drug age is one of the dogmas with which medical policy-making has been encumbered: it fits industrialized man. He has learned to try to purchase whatever he fancies. He gets nowhere without transportation or education; his environment has made it impossible for him to walk, to learn, and to feel in control of his body. To take a drug—no matter which, and for what reason—is a last chance to assert control over himself, to interfere on his own with his body rather than let others interfere. The pharmaceutical invasion leads him to medication, by himself or by others, that reduces his ability to cope with a body for which he can still care.

Diagnostic Imperialism

In a medicalized society the influence of physicians extends not only to the purse and the medicine chest but also to the categories to which people are assigned. Medical bureaucrats subdivide people into those who may drive a car, those who may stay away from work, those who must be locked up, those who may become soldiers, those who may cross borders, cook, or practice prostitution, those who may not run for the vice-presidency of the United States, those who are dead, those who are competent to commit a crime, and those who are liable to commit one. On November 5, 1766, the Empress Maria Theresa issued an edict requesting the court physician to certify fitness to undergo torture so as to ensure healthy, i.e. “accurate,” testimony; it was one of the first laws to establish mandatory medical certification. Ever since, filling out forms and signing statements have taken up increasingly more medical time. Each kind of certificate provides the holder with a special status based on medical rather than civic opinion. Used outside the therapeutic process, this medicalized status does two obvious things: (1) it exempts the holder from work, prison, military service, or the marriage bond, and (2) it gives others the right to encroach upon the holder's freedom by putting him into an institution or denying him work. In addition, the proliferation of medical certifications can invest school, employment, and politics with opportunities for new therapeutic functions. In a society in which most people are certified as deviants, the milieu for such deviant majorities will come to resemble a hospital. To spend one's life in a hospital is obviously bad for health.

Once a society is so organized that medicine can transform people into patients because they are unborn, newborn, menopausal, or at some other “age of risk,” the population inevitably loses some of its autonomy to its healers. The ritualization of stages in life is nothing new;  what is new is their intense medicalization. The sorcerer or medicine man — as opposed to the malevolent witch — dramatized the progress of an Azande tribesman from one stage of his health to the next. The experience may have been painful, but the ritual was short and it served society in highlighting its own regenerative powers. Lifelong medical supervision is something else. It turns life into a series of periods of risk, each calling for tutelage of a special kind. From the crib to the office and from the Club Mediterranee to the terminal ward, each age-cohort is conditioned by a milieu that defines health for those whom it segregates. Hygienic bureaucracy stops the parent in front of the school and the minor in front of the court, and takes the old out of the home. By becoming a specialized place, school, work, or home is made unfit for most people. The hospital, the modern cathedral, lords it over this hieratic environment of health devotees. From Stockholm to Wichita the towers of the medical center impress on the landscape the promise of a conspicuous final embrace. For rich and poor, life is turned into a pilgrimage through check-ups and clinics back to the ward where it started. Life is thus reduced to a “span,” to a statistical phenomenon which, for better or for worse, must be institutionally planned and shaped. This life-span is brought into existence with the prenatal check-up, when the doctor decides if and how the fetus shall be born, and it will end with a mark on a chart ordering resuscitation suspended. Between delivery and termination this bundle of biomedical care fits best into a city that is built like a mechanical womb. At each stage of their lives people are age-specifically disabled. The old are the most obvious example: they are victims of treatments meted out for an incurable condition.

Most of man's ailments consist of illnesses that are acute and benign — either self-limiting or subject to control through a few dozen routine interventions. For a wide range of conditions, those who are treated least probably make the best progress. “For the sick,” Hippocrates said, “the least is best.” More often than not, the best a learned and conscientious physician can do is convince his patient that he can live with his impairment, reassure him of an eventual recovery or of the availability of morphine at the time when he will need it, do for him what grandmother could have done, and otherwise defer to nature. The new tricks that have frequent application are so simple that the last generation of grandmothers would have learned them long ago had they not been browbeaten into incompetency by medical mystification. Boy-scout training, good-Samaritan laws, and the duty to carry first-aid equipment in each car would prevent more highway deaths than any fleet of helicopter-ambulances. Those other interventions which are part of primary care and which, though they require the work of specialists, have been proved effective on a population basis can be employed more effectively if my neighbor or I feel responsible for recognizing when they are needed and applying first treatment. For acute sickness, treatment so complex that it requires a specialist is often ineffective and much more often inaccessible or simply too late. After twenty years of socialized medicine in England and Wales, doctors get to coronary cases an average of four hours after the beginning of symptoms, and by this time 50 percent of patients are dead. The fact that modern medicine has become very effective in the treatment of specific symptoms does not mean that it has become more beneficial for the health of the patient.

With some qualifications, the severe limits of effective medical treatment apply not only to conditions that have long been recognized as sickness—rheumatism, appendicitis, heart failure, degenerative disease, and many infectious diseases—but even more drastically to those that have only recently generated demands for medical care. Old age, for example, which has been variously considered a doubtful privilege or a pitiful ending but never a disease, has recently been put under doctor's orders. The demand for old-age care has increased, not just because there are more old people who survive, but also because there are more people who state their claim that their old age should be cured.

The maximum life-span has not changed, but the average life-span has. Life expectancy at birth has increased enormously. Many more children survive, no matter how sickly and in need of a special environment and special care. The life expectancy of young adults is still increasing in some poorer countries. But in rich countries the life expectancy of those between fifteen and forty-five has tended to stabilize because accidents and the new diseases of civilization kill as many as formerly succumbed to pneumonia and other infections. Relatively more old people are around, and they are increasingly prone to be ill, out of place, and helpless. No matter how much medicine they take, no matter what care is given them, life expectancy at sixty-five has remained unchanged over the past century. Medicine just cannot do much for the illness associated with aging, and even less about the process and experience of aging itself. It cannot cure cardiovascular disease, most cancers, arthritis, advanced cirrhosis, not even the common cold. It is fortunate that some of the pain the aged suffer can be lessened. Unfortunately, though, most treatment of the old requiring professional intervention not only tends to heighten their pain but, if successful, also to protract it.

Old age has been medicalized at precisely the historical moment when it has become a more common occurrence for demographic reasons; 28 percent of the American medical budget is spent on the 10 percent of the population who are over sixty-five. This minority is outgrowing the remainder of the population at an annual rate of 3 percent, while the per capita cost of their care is rising 5 to 7 percent faster than the over-all per capita cost. As more of the elderly acquire rights to professional care, opportunities for independent aging decline. More have to seek refuge in institutions. Simultaneously, as more of the elderly are initiated into treatment for the correction of incorrigible impairment or for the cure of incurable disease, the number of unmet claims for old-age services snowballs. If the eyesight of an old woman fails, her plight will not be recognized unless she enters the “blindness establishment” — one of the eight hundred-odd United States agencies which produce services for the blind, preferably for the young and those who can be rehabilitated for work. Since she is neither young nor of working age, she will receive only a grudging welcome; at the same time, she will have difficulty fitting into the old-age establishment. She will thus be marginally medicalized by two sets of institutions, the one designed to socialize her among the blind, the other to medicalize her decrepitude.

As more old people become dependent on professional services, more people are pushed into specialized institutions for the old, while the home neighborhood becomes increasingly inhospitable to those who hang on. These institutions seem to be the contemporary strategy for the disposal of the old, who have been institutionalized in more frank and arguably less hideous forms by most other societies. The mortality rate during the first year after institutionalization is significantly higher than the rate for those who stay in their accustomed surroundings. Separation from home contributes to the appearance and mortality of many a serious disease. Some old people seek institutionalization with the intention of shortening their lives. Dependence is always painful, and more so for the old. Privilege or poverty in earlier life reaches a climax in modern old age. Only the very rich and the very independent can choose to avoid that medicalization of the end to which the poor must submit and which becomes increasingly intense and universal as the society they live in becomes richer. The transformation of old age into a condition calling for professional services has cast the elderly in the role of a minority who will feel painfully deprived at any relative level of tax-supported privilege. From weak old people who are sometimes miserable and bitterly disappointed by neglect, they are turned into certified members of the saddest of consumer groups, that of the aged programmed never to get enough. What medical labeling has done to the end of life, it has equally done to its beginning. Just as the doctor's power was first affirmed over old age and eventually encroached on early retirement and climacteric, so his authority over the delivery room, which dates from the mid-nineteenth century, spread to the nursery, the kindergarten, and the classroom and medicalized infancy, childhood, and puberty. But while it has become acceptable to advocate limits to the escalation of costly care for the old, limits to so-called medical investments in childhood are still a subject that seems taboo. Industrial parents, forced to procreate manpower for a world into which nobody fits who has not been crushed and molded by sixteen years of formal education, feel impotent to care personally for their offspring and, in despair, shower them with medicine. Proposals to reduce medical outputs in the United States from their present level of about $100 billion to their 1950 level of $10 billion, or to close medical schools in Colombia, never turn into controversial issues because those who make them are soon discredited as heartless proponents of infanticide or of mass extermination of the poor. The engineering approach to the making of economically productive adults has made death in childhood a scandal, impairment through early disease a public embarrassment, unrepaired congenital malformation an intolerable sight, and the possibility of eugenic birth control a preferred theme for international congresses in the seventies.

As for infant mortality, it has indeed been reduced. Life expectancy in the developed countries has increased from thirty-five years in the eighteenth century to seventy years today. This is due mainly to the reduction of infant mortality in these countries; for example, in England and Wales the number of infant deaths per 1,000 live births declined from 154 in 1840 to 22 in 1960. But it would be entirely incorrect to attribute more than one of those lives "saved" to a curative intervention that presupposes anything like a doctor's training, and it would be a delusion to attribute the infant mortality rate of poor countries, which in some cases is ten times that of the United States, to a lack of doctors. Food, antisepsis, civil engineering, and above all, a new widespread disvalue placed on the death of a child, no matter how weak or malformed, are much more significant factors and represent changes that are only remotely related to medical intervention. While in gross infant mortality the United States ranks seventeenth among nations, infant mortality among the poor is much higher than among higher-income groups. In New York City, infant mortality among the black population is more than twice as high as for the population in general, and probably higher than in many underdeveloped areas such as Thailand and Jamaica. The insistence that more doctors are needed to prevent infants from dying can thus be understood as a way of avoiding income equalization while at the same time creating more jobs for professionals. It would be equally reckless to claim that those changes in the general environment that do have a causal relationship to the presence of doctors represent a positive balance for health. Although physicians did pioneer antisepsis, immunization, and dietary supplements, they were also involved in the switch to the bottle that transformed the traditional suckling into a modern baby and provided industry with working mothers who are clients for a factory-made formula.

The damage this switch does to natural immunity mechanisms fostered by human milk and the physical and emotional stress caused by bottle feeding are comparable to if not greater than the benefits that a population can derive from specific immunizations. Even more serious is the contribution the bottle makes to the menace of worldwide protein starvation. For instance, in 1960, 96 percent of Chilean mothers breast-fed their infants up to and beyond the first birthday. Then, for a decade, Chilean women underwent intense political indoctrination by both right-wing Christian Democrats and a variety of left-wing parties. By 1970 only 6 percent breast-fed beyond the first year and 80 percent had weaned their infants before the second full month. As a result, 84 percent of potential human breast milk now remains unproduced. The milk of an additional 32,000 cows would have to be added to Chile's overgrazed pastures to compensate—as far as possible—for this loss. As the bottle became a status symbol, new illnesses appeared among children who had been denied the breast, and since mothers lack traditional know-how to deal with babies who do not behave like sucklings, babies became new consumers of medical attention and of its risks. The sum total of physical impairment due just to this substitution of marketed baby food for mother's milk is difficult to balance against the benefits derived from curative medical intervention in childhood sickness and from surgical correction of birth defects ranging from harelip to heart defects.

It can, of course, be argued that the medical classification of age groups according to their diagnosed need for health commodities does not generate ill-health but only reflects the health-denying breakdown of the family as a cocoon, of the neighborhood as a network of gift relationships, and of the environment as the shelter of a local subsistence community. No doubt, it is true that a medicalized social perception reflects a reality that is determined by the organization of capital-intensive production, and that it is the corresponding social pattern of nuclear families, welfare agencies, and polluted nature that degrades home, neighborhood, and milieu. But medicine does not simply mirror reality; it reinforces and reproduces the process that undermines the social cocoons within which man has evolved. Medical classification justifies the imperialism of standard staples like baby food over mother's milk and of old-age homes over a corner at home. By turning the newborn into a hospitalized patient until he or she is certified as healthy, and by defining grandmother's complaint as a need for treatment rather than for patient respect, the medical enterprise creates not only biologically formulated legitimacy for man-the-consumer but also new pressures for an escalation of the megamachine. Genetic selection of those who fit into that machine is the logical next step of medicosocial control.

Preventive Stigma

As curative treatment focuses increasingly on conditions in which it is ineffectual, expensive, and painful, medicine has begun to market prevention. The concept of morbidity has been enlarged to cover prognosticated risks. Along with sick-care, health care has become a commodity, something one pays for rather than something one does. The higher the salary the company pays, the higher the rank of an apparatchik, the more will be spent to keep the valuable cog well oiled. Maintenance costs for highly capitalized manpower are the new measure of status for those on the upper rungs. People keep up with the Joneses by emulating their “check-ups,” an English word which has entered French, Serbian, Spanish, Malay, and Hungarian dictionaries. People are turned into patients without being sick. The medicalization of prevention thus becomes another major symptom of social iatrogenesis. It tends to transform personal responsibility for my future into my management by some agency.

Usually the danger of routine diagnosis is even less feared than the danger of routine treatment, though social, physical, and psychological torts inflicted by medical classification are no less well documented. Diagnoses made by the physician and his helpers can define either temporary or permanent roles for the patient. In either case, they add to a biophysical condition a social state created by presumably authoritative evaluation. When a veterinarian diagnoses a cow's distemper, it doesn't usually affect the patient's behavior. When a doctor diagnoses a human being, it does. In those instances where the physician functions as healer he confers on the person recognized as sick certain rights, duties, and excuses which have a conditional and temporary legitimacy and which lapse when the patient is healed; most sickness leaves no taint of deviance or disorderly conduct on the patient's reputation. No one is interested in ex-allergies or ex-appendectomy patients, just as no one will be remembered as an ex-traffic offender. In other instances, however, the physician acts primarily as an actuary, and his diagnosis can defame the patient, and sometimes his children, for life. By attaching irreversible degradation to a person's identity, it brands him forever with a permanent stigma. The objective condition may have long since disappeared, but the iatrogenic label sticks. Like ex-convicts, former mental patients, people after their first heart attack, former alcoholics, carriers of the sickle-cell trait, and (until recently) ex-tuberculotics are transformed into outsiders for the rest of their lives.

Professional suspicion alone is enough to legitimize the stigma even if the suspected condition never existed. The medical label may protect the patient from punishment only to submit him to interminable instruction, treatment, and discrimination, which are inflicted on him for his professionally presumed benefit.

In the past, medicine labeled people in two ways: those for whom cures could be attempted, and those who were beyond repair, such as lepers, cripples, oddities, and the dying. Either way, diagnosis could lead to stigma. Medicalized prevention now creates a third way. It turns the physician into an officially licensed magician whose prophecies cripple even those who are left unharmed by his brews. Diagnosis may exclude a human being with bad genes from being born, another from promotion, and a third from political life. The mass hunt for health risks begins with dragnets designed to apprehend those needing special protection: prenatal medical visits; well-child-care clinics for infants; school and camp check-ups; and prepaid medical schemes. Recently genetic and blood pressure “counseling” services were added. The United States proudly led the world in organizing disease-hunts and, later, in questioning their utility.

In the past decade, automated multiphasic health-testing became operational and was welcomed as the poor man's escalator into the world of Mayo and Massachusetts General. This assembly-line procedure of complex chemical and medical examinations can be performed by paraprofessional technicians at a surprisingly low cost. It purports to offer uncounted millions more sophisticated detection of hidden therapeutic needs than was available in the sixties even for the most “valuable” hierarchs in Houston or Moscow. At the outset of this testing, the lack of controlled studies allowed the salesmen of mass-produced prevention to foster unsubstantiated expectations. (More recently, controlled comparative studies of population groups benefitting from maintenance service and early diagnosis have become available; two dozen such studies indicate that these diagnostic procedures—even when followed by high-level medical treatments—have no positive impact on life expectancy.) Ironically, the serious asymptomatic disorders which this kind of screening alone can discover among adults are frequently incurable illnesses in which early treatment only aggravates the patient's physical condition. In any case, it transforms people who feel healthy into patients anxious for their verdict.

In the detection of sickness medicine does two things: it “discovers” new disorders, and it ascribes these disorders to concrete individuals. To discover a new category of disease is the pride of the medical scientist. To ascribe the pathology to some Tom, Dick, or Harry is the first task of the physician acting as member of a consulting profession. Trained to “do something” and express his concern, he feels active, useful, and effective when he can diagnose disease. Though, theoretically, at the first encounter the physician does not presume that his patient is affected by a disease, through a form of fail-safe principle he usually acts as if imputing a disease to the patient were better than disregarding one. The medical-decision rule pushes him to seek safety by diagnosing illness rather than health. The classic demonstration of this bias came in an experiment conducted in 1934. In a survey of 1,000 eleven-year-old children from the public schools of New York, 61 percent were found to have had their tonsils removed. “The remaining 39 percent were subjected to examination by a group of physicians, who selected 45 percent of these for tonsillectomy and rejected the rest. The rejected children were re-examined by another group of physicians, who recommended tonsillectomy for 46 percent of those remaining after the first examination. When the rejected children were examined a third time, a similar percentage was selected for tonsillectomy so that after three examinations only sixty-five children remained who had not been recommended for tonsillectomy. These subjects were not further examined because the supply of examining physicians ran out.” This test was conducted at a free clinic, where financial considerations could not explain the bias.

Diagnostic bias in favor of sickness combines with frequent diagnostic error. Medicine not only imputes questionable categories with inquisitorial enthusiasm; it does so at a rate of miscarriage that no court system could tolerate. In one instance, autopsies showed that more than half the patients who died in a British university clinic with a diagnosis of specific heart failure had in fact died of something else. In another instance, the same series of chest X-rays shown to the same team of specialists on different occasions led them to change their mind on 20 percent of all cases. Up to three times as many patients will tell Dr. Smith that they cough, produce sputum, or suffer from stomach cramps as will tell Dr. Jones. Up to one-quarter of simple hospital tests show seriously divergent results when done from the same sample in two different labs. Nor do machines seem to be any more infallible. In a competition between diagnostic machines and human diagnosticians in 83 cases recommended for pelvic surgery, pathology showed that both man and machine were correct in 22 instances; in 37 instances the computer correctly rejected the doctor's diagnosis; in 11 instances the doctors proved the computer wrong; and in 10 cases both were in error.

In addition to diagnostic bias and error, there is wanton aggression. A cardiac catheterization, used to determine if a patient is suffering from cardiomyopathy—admittedly, this is not done routinely—costs $350 and kills one patient in fifty. Yet there is no evidence that a differential diagnosis based on its results extends either the life expectancy or the comfort of the patient. Most tests are less murderous and much more commonly performed, but many still involve known risks to the individual or his offspring which are high enough to obscure the value of whatever information they can provide. Many routine uses of X-rays and the fluoroscope on the young, the injection or ingestion of reagents and tracers, and the use of Ritalin to diagnose hyperactivity in children are examples. Attendance in public schools where teachers are vested with delegated medical powers constitutes a major health risk for children. Even simple and otherwise benign examinations turn into risks when multiplied. When a test is associated with several others, it has considerably greater power to harm than when it is conducted by itself. Often tests provide guidance in the choice of therapy. Unfortunately, as the tests turn more complex and are multiplied, their results frequently provide guidance only in selecting the form of intervention which the patient may survive, and not necessarily that which will help him. Worst of all, when people have lived through complex positive laboratory diagnosis, unharmed or not, they have incurred a high risk of being submitted to therapy that is odious, painful, crippling, and expensive. No wonder that physicians tend to delay longer than laymen before going to see their own doctor and that they are in worse shape when they get there.

Routine performance of early diagnostic tests on large populations guarantees the medical scientist a broad base from which to select the cases that best fit existing treatment facilities or are most useful in the attainment of research goals, whether or not the therapies cure, rehabilitate, or soothe. In the process, people are strengthened in their belief that they are machines whose durability depends on visits to the maintenance shop, and are thus not only obliged but also pressured to foot the bill for the market research and the sales activities of the medical establishment.

Diagnosis always intensifies stress, defines incapacity, imposes inactivity, and focuses apprehension on nonrecovery, on uncertainty, and on one's dependence upon future medical findings, all of which amounts to a loss of autonomy for self-definition. It also isolates a person in a special role, separates him from the normal and healthy, and requires submission to the authority of specialized personnel. Once a society organizes for a preventive disease-hunt, it gives epidemic proportions to diagnosis. This ultimate triumph of therapeutic culture turns the independence of the average healthy person into an intolerable form of deviance.

In the long run the main activity of such an inner-directed systems society leads to the phantom production of life expectancy as a commodity. By equating statistical man with biologically unique men, an insatiable demand for finite resources is created. The individual is subordinated to the greater “needs” of the whole, preventive procedures become compulsory, and the right of the patient to withhold consent to his own treatment vanishes as the doctor argues that he must submit to diagnosis, since society cannot afford the burden of curative procedures that would be even more expensive.

Terminal Ceremonies

Therapy reaches its apogee in the death-dance around the terminal patient. At a cost of between $500 and $2,000 per day, celebrants in white and blue envelop what remains of the patient in antiseptic smells. The more exotic the incense and the pyre, the more death mocks the priest. The religious use of medical technique has come to prevail over its technical purpose, and the line separating the physician from the mortician has been blurred. Beds are filled with bodies neither dead nor alive. The conjuring doctor perceives himself as a manager of crisis. In an insidious way he provides each citizen at the last hour with an encounter with society's deadening dream of infinite power. Like any crisis manager of bank, state, or couch, he plans self-defeating strategies and commandeers resources which, in their uselessness and futility, seem all the more grotesque. At the last moment, he promises to each patient that claim on absolute priority for which most people regard themselves as too unimportant.

The ritualization of crisis, a general trait of a morbid society, does three things for the medical functionary. It provides him with a license that usually only the military can claim. Under the stress of crisis, the professional who is believed to be in command can easily presume immunity from the ordinary rules of justice and decency. He who is assigned control over death ceases to be an ordinary human. As with the director of a triage, his killing is covered by policy. More important, his entire performance takes place in the aura of crisis. Because they form a charmed borderland not quite of this world, the time-span and the community space claimed by the medical enterprise are as sacred as their religious and military counterparts. Not only does the medicalization of terminal care ritualize macabre dreams and enlarge professional license for obscene endeavors: the escalation of terminal treatments removes from the physician all need to prove the technical effectiveness of those resources he commands. There are no limits to his power to demand more and ever more. Finally, the patient's death places the physician beyond potential control and criticism. In the last glance of the patient and in the life-long perspective of the "morituri" there is no hope, but only the physician's last expectation. The orientation of any institution towards "crisis" justifies enormous ordinary ineffectiveness.

Hospital death is now endemic. In the last twenty-five years the percentage of Americans who die in a hospital has grown by a third. The percentage of hospital deaths in other countries has grown even faster. Death without medical presence becomes synonymous with romantic pigheadedness, privilege, or disaster. The cost of a citizen's last days has increased by an estimated 1,200 percent, much faster than that of over-all health care. Simultaneously, at least in the United States, funeral costs have stabilized; their growth rate has come in line with the rise of the general consumer-price index. The most elaborate phase of the terminal ceremonies now surrounds the dying patient and has been separated, under medical control, from the removal, the exequies, and the burial of what remains. In a switch of lavish expenditure from tomb to ward, reflecting the horror of dying without medical assistance, the insured pay for participation in their own funeral rites.

Fear of unmedicated death was first felt by eighteenth-century elites who refused religious assistance and rejected belief in the afterlife. A new wave of this fear has now swept rich and poor, and has combined with egalitarian pathos to create a new category of goods: those which are "terminally" scarce, because they are commandeered by the physician in high-cost death chambers. To distribute these goods, a new branch of legal and ethical literature has arisen to deal with the question of how to exclude some, select others, and justify choices of life-prolonging techniques and ways of making death more comfortable and acceptable. Taken as a whole, this literature tells a remarkable story about the mind of the contemporary jurist and philosopher. Most of the authors do not even ask whether the techniques that sustain their speculations have in fact proved to be life-prolonging. Naively, they go along with the delusion that ongoing rituals that are costly must be useful. In this way law and ethics bolster belief in the value of policies that regulate politically innocuous medical equality at the point of death.

The modern fear of unhygienic death makes life appear like a race towards a terminal scramble and has broken personal self-confidence in a unique way. It has fostered the belief that man today has lost the autonomy to recognize when his time has come and to take his death into his own hands. The doctor's refusal to recognize the point at which he has ceased to be useful as a healer and to withdraw when death shows on his patient's face has made him into an agent of evasion or outright dissimulation. The patient's unwillingness to die on his own makes him pathetically dependent. He has now lost his faith in his ability to die, the terminal shape that health can take, and has made the right to be professionally killed into a major issue.

Several unexamined expectations are interwoven in the cultural orientation towards death in the wards. People think that hospitalization will reduce their pain or that they will probably live longer in the hospital. Neither is likely to be true. Of those admitted with a fatal condition to the average British clinic, 10 percent died on the day of arrival, 30 percent within a week, 75 percent within a month, and 97 percent within three months. In homes for terminal care, 56 percent were dead within a week of admission. In terminal cancer, there is no difference in life expectancy between those who end in the home and those who die in the hospital. Only a quarter of terminal cancer patients need special nursing at home, and then only during their last weeks. For more than half, suffering will be limited to feeling feeble and uncomfortable, and what pain there is can usually be relieved. But by staying at home they avoid the exile, loneliness, and indignities which, in all but exceptional hospitals, await them. Poor blacks seem to know this and upset the hospital routine by taking their dying home. Opiates are not available on demand. Patients who have severe pains over months or years, which narcotics could make tolerable, are as likely to be refused medication in the hospital as at home, lest they form a habit in their incurable but not directly fatal condition. Finally, people believe that hospitalization increases their chances of surviving a crisis. With some clear-cut exceptions, on this point too, more often than not, they are wrong. More people die now because crisis intervention is hospital-centered than can be saved through the superior techniques the hospital can provide. In the poor countries many more children have died of cholera or diarrhea during the last ten years because they were not rehydrated on time with a simple solution forced down their throats: care was centered on sophisticated intravenous rehydration at a distant hospital. In rich countries the deaths caused by the use of evacuation equipment are beginning to balance the number of lives thus saved. Hospital “worship” is unrelated to the hospital's performance.

Like any other growth industry, the health system directs its products where demand seems unlimited: into defense against death. An increasing percentage of newly acquired tax funds is allocated towards life-extension technology for terminal patients. Complex bureaucracies sanctimoniously select for dialysis maintenance one in six or one in three of those Americans who are threatened by kidney failure. The patient-elect is conditioned to desire the scarce privilege of dying in exquisite torture. As a doctor observes in an account of the treatment of his own illness, much time and effort must go into preventing suicide during the first and sometimes the second year that the artificial kidney may add to life. In a society where the majority die under the control of public authority, the solemnities formerly surrounding legalized homicide or execution adorn the terminal ward. The sumptuous treatment of the comatose takes the place of the doomed man's breakfast in other cultures.

Public fascination with high-technology care and death can be understood as a deep-seated need for the engineering of miracles. Intensive care is but the culmination of a public worship organized around a medical priesthood struggling against death. The willingness of the public to finance these activities expresses a desire for the nontechnical functions of medicine. Cardiac intensive-care units, for example, have high visibility and no proven statistical gain for the care of the sick. They require three times the equipment and five times the staff needed for normal patient care; 12 percent of all graduate hospital nurses in the United States work in this heroic medicine. This gaudy enterprise is supported, like a liturgy of old, by the extortion of taxes, by the solicitation of gifts, and by the procurement of victims. Large-scale random samples have been used to compare the mortality and recovery rates of patients served by these units with those of patients given home treatment. So far they have demonstrated no advantage. The patients who have suffered cardiac infarction themselves tend to express a preference for home care; they are frightened by the hospital, and in a crisis would rather be close to people they know. Careful statistical findings have confirmed their intuition: the higher mortality of those benefitted by mechanical care in the hospital is usually ascribed to fright.

Black Magic

Technical intervention in the physical and biochemical make-up of the patient or of his environment is not, and never has been, the sole function of medical institutions. The removal of pathogens and the application of remedies (effective or not) are by no means the sole way of mediating between man and his disease. Even in those circumstances in which the physician is technically equipped to play the technical role to which he aspires, he inevitably also fulfills religious, magical, ethical, and political functions. In each of these functions the contemporary physician is more pathogen than healer or just anodyne.

Magic or healing through ceremonies is clearly one of the important traditional functions of medicine. In magic the healer manipulates the setting and the stage. In a somewhat impersonal way he establishes an ad hoc relationship between himself and a group of individuals. Magic works if and when the intent of patient and magician coincides, though it took scientific medicine considerable time to recognize its own practitioners as part-time magicians. To distinguish the doctor's professional exercise of white magic from his function as engineer (and to spare him the charge of being a quack), the term “placebo” was created. Whenever a sugar pill works because it is given by the doctor, the sugar pill acts as a placebo. A placebo (Latin for “I will please”) pleases not only the patient but the administering physician as well.

In high cultures, religious medicine is something quite distinct from magic. The major religions reinforce resignation to misfortune and offer a rationale, a style, and a community setting in which suffering can become a dignified performance. The opportunities offered by the acceptance of suffering can be differently explained in each of the great traditions: as karma accumulated through past incarnations; as an invitation to Islam, the surrender to God; or as an opportunity for closer association with the Savior on the Cross. High religion stimulates personal responsibility for healing, sends ministers for sometimes pompous and sometimes effective consolation, provides saints as models, and usually provides a framework for the practice of folk medicine. In our kind of secular society religious organizations are left with only a small part of their former ritual healing roles. One devout Catholic might derive intimate strength from personal prayer, some marginal groups of recent arrivals in São Paulo might routinely heal their ulcers in Afro-Latin dance cults, and Indians in the valley of the Ganges still seek health in the singing of the Vedas. But such things have only a remote parallel in societies beyond a certain per capita GNP. In these industrialized societies secular institutions run the major myth-making ceremonies.

The separate cults of education, transportation, and mass communication promote, under different names, the same social myth which can be described as contemporary gnosis. Common to a gnostic world-view and its cult are six characteristics: (1) it is practiced by members of a movement who are dissatisfied with the world as it is because they see it as intrinsically poorly organized. Its adherents are (2) convinced that salvation from this world is possible (3) at least for the elect and (4) can be brought about within the present generation. Gnostics further believe that this salvation depends (5) on technical actions which are reserved (6) to initiates who monopolize the special formula for it. All these religious beliefs underlie the social organization of technological medicine, which in turn ritualizes and celebrates the nineteenth-century ideal of progress.

Among the important nontechnical functions of medicine, a third one is ethical rather than magical, secular rather than religious. It does not depend on a conspiracy into which the sorcerer enters with his adept, nor on myths to which the priest gives form, but on the shape which medical culture gives to interpersonal relations. Medicine can be so organized that it motivates the community to deal in a more or less personal fashion with the frail, the decrepit, the tender, the crippled, the depressed, and the manic. By fostering a certain type of social character, a society's medicine could effectively lessen the suffering of the diseased by assigning an active role to all members of the community in the compassionate tolerance for and the selfless assistance of the weak. Medicine could regulate society's gift relationships. Cultures where compassion for the unfortunate, hospitality for the crippled, leeway for the troubled, and respect for the old have been developed can, to a large extent, integrate the majority of their members into everyday life.

Healers can be priests of the gods, lawgivers, magicians, mediums, barber-pharmacists, or scientific advisers. No common name with even the approximate semantic range covered by our “doctor” existed in Europe before the fourteenth century. In Greece the repairman, used mostly for slaves, was respected early, though he was not on a level with the healing philosopher or even with the gymnast for the free. Republican Rome considered the specialized curers a disreputable lot. Laws on water supply, drainage, garbage removal, and military training, combined with the state cult of healing gods, were considered sufficient; grandmother's brew and the army sanitarian were not dignified by special attention. Until Julius Caesar gave citizenship to the first group of Asclepiads in 46 B.C., this privilege was refused to Greek physicians and healing priests. The Arabs honored the physician; the Jews left health care to the quality of the ghetto or, with a bad conscience, brought in the Arab physician. Medicine's several functions combined in different ways in different roles. The first occupation to monopolize health care is that of the physician of the late twentieth century.

Paradoxically, the more attention is focused on the technical mastery of disease, the larger becomes the symbolic and nontechnical function performed by medical technology. The less proof there is that more money increases survival rates in a given branch of cancer treatment, the more money will go to the medical divisions deployed in that special theater of operations. Only goals unrelated to treatment, such as jobs for the specialists, equal access by the poor, symbolic consolation for patients, or experimentation on humans, can explain the expansion of lung-cancer surgeries during the last twenty-five years. Not only white coats, masks, antiseptics, and ambulance sirens but entire branches of medicine continue to be financed because they have been invested with nontechnical, usually symbolic power.

Willy-nilly the modern doctor is thus forced into symbolic, nontechnical roles. Nontechnical functions prevail in the removal of adenoids: more than 90 percent of all tonsillectomies performed in the United States are technically unnecessary, yet 20 to 30 percent of all children still undergo the operation. One in a thousand dies directly as a consequence of the operation and 16 in a thousand suffer from serious complications. All lose valuable immunity mechanisms. All are subjected to emotional aggression: they are incarcerated in a hospital, separated from their parents, and introduced to the unjustified and more often than not pompous cruelty of the medical establishment. The child learns to be exposed to technicians who, in his presence, use a foreign language in which they make judgments about his body; he learns that his body may be invaded by strangers for reasons they alone know; and he is made to feel proud to live in a country where social security pays for such a medical initiation into the reality of life.

Physical participation in a ritual is not a necessary condition for initiation into the myth which the ritual is organized to generate. Medical spectator sports cast powerful spells. I happened to be in Rio de Janeiro and in Lima when Dr. Christiaan Barnard was touring there. In both cities he was able to fill the major football stadium twice in one day with crowds who hysterically acclaimed his macabre ability to replace human hearts. Medical-miracle treatments of this kind have worldwide impact. Their alienating effect reaches people who have no access to a neighborhood clinic, much less to a hospital. It provides them with an abstract assurance that salvation through science is possible. The experience in the stadium at Rio prepared me for the evidence I was shown shortly afterwards which proved that the Brazilian police have so far been the first to use life-extending equipment in the torture of prisoners. Such extreme abuse of medical techniques seems grotesquely coherent with the dominant ideology of medicine.
The unintended nontechnical influence that medical technique exercises on society's health can, of course, be positive. An unnecessary shot of penicillin can magically restore confidence and appetite. A contraindicated operation can solve a marriage problem and reduce symptoms of disease in both partners. Not only the doctor's sugar pills but even his poisons can be powerful placebos. But this is not the prevailing result of the nontechnical side-effects of medical technology. It can be argued that in precisely those narrow areas in which high-cost medicine has become more specifically effective, its symbolic side-effects have become overwhelmingly health-denying: the traditional white medical magic that supported the patient's own efforts to heal has turned black.

To a large extent, social iatrogenesis can be explained as a negative placebo, as a nocebo effect. Overwhelmingly the nontechnical side-effects of biomedical interventions do powerful damage to health. The intensity of the black-magic influence of a medical procedure does not depend on its being technically effective. The effect of the nocebo, like that of the placebo, is largely independent of what the physician does.
Medical procedures turn into black magic when, instead of mobilizing his self-healing powers, they transform the sick man into a limp and mystified voyeur of his own treatment. Medical procedures turn into sick religion when they are performed as rituals that focus the entire expectation of the sick on science and its functionaries instead of encouraging them to seek a poetic interpretation of their predicament or find an admirable example in some person—long dead or next door—who learned to suffer.
Medical procedures multiply disease by moral degradation when they isolate the sick in a professional environment rather than providing society with the motives and disciplines that increase social tolerance for the troubled. Magical havoc, religious injury, and moral degradation generated under the pretext of a biomedical pursuit are all crucial mechanisms contributing to social iatrogenesis. They are amalgamated by the medicalization of death.

 When doctors first set up shop outside the temples in Greece, India, and China, they ceased to be medicine men. When they claimed rational power over sickness, society lost the sense of the complex personage and his integrated healing which the sorcerer-shaman or curer had provided. The great traditions of medical healing had left the miracle cure to priests and kings. The caste that had an “in” with the gods could call for their intervention. To the hand that wielded the sword was attributed the power to subdue not only the enemy but also the spirit. Up to the eighteenth century the king of England laid his hands every year upon those afflicted with facial tuberculosis whom physicians knew they were unable to cure. Epileptics, whose ills resisted even His Majesty's touch, took refuge in the healing strength that flowed from the hands of the executioner.

With the rise of medical civilization and healing guilds, the physicians distinguished themselves from the quacks and the priests because they knew the limits of their art. Today the medical establishment is about to reclaim the right to perform miracles. Medicine claims the patient even when the etiology is uncertain, the prognosis unfavorable, and the therapy of an experimental nature. Under these circumstances the attempt at a “medical miracle” can be a hedge against failure, since miracles may only be hoped for and cannot, by definition, be expected. The radical monopoly over health care that the contemporary physician claims now forces him to reassume priestly and royal functions that his ancestors gave up when they became specialized as technical healers.

The medicalization of the miracle provides further insight into the social function of terminal care. The patient is strapped down and controlled like a spaceman and then displayed on television. These heroic performances serve as a rain-dance for millions, a liturgy in which realistic hopes for autonomous life are transmuted into the delusion that doctors will deliver health from outer space.

Patient Majorities

Whenever medicine's diagnostic power multiplies the sick in excessive numbers, medical professionals turn over the surplus to the management of nonmedical trades and occupations. By dumping, the medical lords divest themselves of the nuisance of low-prestige care and invest policemen, teachers, or personnel officers with a derivative medical fiefdom. Medicine retains unchecked autonomy in defining what constitutes sickness, but drops on others the task of ferreting out the sick and providing for their treatment. Only medicine knows what constitutes addiction, though policemen are supposed to know how it should be controlled. Only medicine can define brain damage, but it allows teachers to stigmatize and manage the healthy-looking cripples. When the need for a retrenchment of medical goals is discussed in medical literature, it now usually takes the shape of planned patient-dumping. Why should not the newborn and the dying, the ethnocentric, the sexually inadequate, and the neurotic, plus any number of other uninteresting and time-consuming victims of diagnostic fervor, be pushed beyond the frontiers of medicine and be transformed into clients of nonmedical therapeutic purveyors: social workers, television programmers, psychologists, personnel officers, and sex counselors? This multiplication of enabling jobs that hold reflected medical prestige has created an entirely new setting for the role of the sick.
Any society, to be stable, needs certified deviance. People who look strange or who behave oddly are subversive until their common traits have been formally named and their startling behavior slotted into a recognized pigeonhole. By being assigned a name and a role, eerie, upsetting freaks are tamed, becoming predictable exceptions who can be pampered, avoided, repressed, or expelled. In most societies there are some people who assign roles to the uncommon ones; according to the prevalent social prescription, they are usually those who hold special knowledge about the nature of deviance: they decide whether the deviant is possessed by a ghost, ridden by a god, infected by poison, being punished for his sin, or the victim of vengeance wrought by a witch. The agent who does this labeling does not necessarily have to be comparable to medical authority: he may hold juridical, religious, or military power. By naming the spirit that underlies deviance, authority places the deviant under the control of language and custom and turns him from a threat into a support of the social system. Etiology is socially self-fulfilling: if the sacred disease is believed to be caused by divine possession, then the god speaks in the epileptic fit.

Each civilization defines its own diseases. What is sickness in one might be chromosomal abnormality, crime, holiness, or sin in another. Each culture creates its response to disease. For the same symptom of compulsive stealing one might be executed, treated to death, exiled, hospitalized, or given alms or tax money. Here thieves are forced to wear special clothes; there, to do penance; elsewhere, to lose a finger, or again, to be conditioned by magic or by electric shock. To postulate for every society a specifically “sick” kind of deviance with even minimal common characteristics is a hazardous undertaking. The contemporary assignation of sick-roles is of a unique kind. It developed not much more than a generation before Henderson and Parsons analyzed it. It defines deviance as the special legitimate behavior of officially selected consumers within an industrial milieu. Even if there were something to say for the thesis that in all societies some people are, so to speak, temporarily put out of service and pampered while being repaired, the context within which this exemption operates elsewhere cannot be compared to that of the welfare state. When he assigns sick-status to a client, the contemporary physician might indeed be acting in some ways similar to the sorcerer or the elder; but in belonging also to a scientific profession that invents the categories it assigns when consulting, the modern physician is totally unlike the healer. Medicine men engaged in the occupation of curing and exercised the art of distinguishing evil spirits from each other. They were not professionals and had no power to invent new devils. Enabling professions in their annual assemblies create the sick-roles they assign.

The roles available for an individual have always been of two kinds: those which are standardized by cultural tradition and those which are the result of bureaucratic organization. Innovation at all times meant a relative increase of the latter, rationally created roles. No doubt, engineered roles could be recovered by cultural tradition. No doubt a neat distinction between the two kinds of roles is difficult to make. But on the whole, the sick-role tended until recently to be of the traditional kind. In the last century, however, what Foucault has called the new clinical vision has changed the proportions. The physician has increasingly abandoned his role as moralist and assumed that of enlightened scientific entrepreneur. To exonerate the sick from accountability for their illness has become a predominant task, and new scientific categories of disease have been shaped for the purpose. Medical school and clinic provide the doctor with the atmosphere in which disease, in his eyes, may become a task for biological or social technique; his patients still carry their religious and cosmic interpretations into the ward, much as the laymen once carried their secular concerns into church for Sunday service. But the sick-role described by Parsons fits modern society only as long as doctors act as if treatment were usually effective and while the general public is willing to share their rosy view. The mid-twentieth-century sick-role has become inadequate for describing what happens in a medical system that claims authority over people who are not yet ill, people who cannot reasonably expect to get well, and those for whom doctors have no more effective treatment than that which could be offered by their uncles or aunts. Expert selection of a few for institutional pampering was a way to use medicine for the purpose of stabilizing an industrial society: it entailed the easily regulated entitlement of the abnormal to abnormal levels of public funds. Kept within limits, during the early twentieth century the pampering of deviants “strengthened” the cohesion of industrial society. But after a critical point social control exercised through the diagnosis of unlimited needs destroyed its own base. Until proved healthy, the citizen is now presumed to be sick. In a triumphantly therapeutic society, everybody can make himself into a therapist and someone else into his client.

The role of the doctor has now become blurred. The health professions have come to combine clinical service, public-health engineering, and scientific medicine. The doctor deals with clients who are simultaneously cast in several roles during every contact they have with the health establishment. They are turned into patients whom medicine tests and repairs, into administered citizens whose healthy behavior a medical bureaucracy guides, and into guinea pigs on whom medical science constantly experiments. The Aesculapian power of conferring the sick-role has been dissolved by the pretensions of delivering totalitarian health care. Health has ceased to be a native endowment each human being is presumed to possess until proven ill, and has become an ever-receding goal to which one is entitled by virtue of social justice.

The emergence of a conglomerate health profession has rendered the patient role infinitely elastic. The doctor's certification of the sick has been replaced by the bureaucratic presumption of the health manager who arranges people according to degrees and categories of therapeutic need, and medical authority now extends to supervised health care, early detection, preventive therapies, and increasingly, treatment of the incurable. Previously modern medicine controlled only a limited market; now this market has lost all boundaries. Unsick people have come to depend on professional care for the sake of their future health. The result is a morbid society that demands universal medicalization and a medical establishment that certifies universal morbidity.

In a morbid society the belief prevails that defined and diagnosed ill-health is infinitely preferable to any other form of negative label or to no label at all. It is better than criminal or political deviance, better than laziness, better than self-chosen absence from work. More and more people subconsciously know that they are sick and tired of their jobs and of their leisure passivities, but they want to hear the lie that physical illness relieves them of social and political responsibilities. They want their doctor to act as lawyer and priest. As a lawyer, the doctor exempts the patient from his normal duties and enables him to cash in on the insurance fund he was forced to build. As a priest, he becomes the patient's accomplice in creating the myth that he is an innocent victim of biological mechanisms rather than a lazy, greedy, or envious deserter of a social struggle for control over the tools of production. Social life becomes a giving and receiving of therapy: medical, psychiatric, pedagogic, or geriatric. Claiming access to treatment becomes a political duty, and medical certification a powerful device for social control.

With the development of the therapeutic service sector of the economy, an increasing proportion of all people come to be perceived as deviating from some desirable norm, and therefore as clients who can now either be submitted to therapy to bring them closer to the established standard of health or concentrated into some special environment built to cater to their deviance. Basaglia points out that in the first historical stage of this process, the diseased are exempted from production. At the next stage of industrial expansion, a majority come to be defined as deviant and in need of therapy. When this happens, the distance between the sick and the healthy is again reduced. In advanced industrial societies the sick are once more recognized as possessing a certain level of productivity which would have been denied them at an earlier stage of industrialization. Now that everybody tends to be a patient in some respect, wage labor acquires therapeutic characteristics. Lifelong health education, counseling, testing, and maintenance are built right into factory and office routine. Therapeutic dependencies permeate and color productive relations. Homo sapiens, who awoke to myth in a tribe and grew into politics as a citizen, is now trained as a lifelong inmate of an industrial world. The medicalization of industrial society brings its imperialistic character to ultimate fruition.



Cultural Iatrogenesis



