Monday, July 25, 2005

The Welfare-State Squeeze

Japan says it is planning to send astronauts to the moon by 2025. The mission is apparently part of a larger plan to extract minerals from space. At first glance, this seems like a marriage made in heaven: a resource-starved nation with a tremendous technological base is an obvious candidate to go to another world for the raw materials it lacks. Admittedly, it is a formidable engineering task, but so were telephones that take pictures. Over time private productivity improves owing to human ingenuity, and the Japanese have certainly done more extraordinary things than this in the last 150 years.

But it will never happen. And the reason is not a shortage of imagination. Rather, it is that Japan, like most modern nations, is being slowly strangled by the growing burden of the welfare state. This aspect of the trend has drawn little comment: the rising cost of state pensions and health care in aging populations will bring not just rapidly growing tax burdens but a sharp narrowing of the political playing field. Options for spending money that would have been eminently debatable in the flush decades of yore are simply going to be fiscally impossible because of rapidly growing entitlement spending. That spending will crowd out not just private-sector economic activity but opportunities for public spending of all kinds. Below are OECD data on social spending as a percentage of GDP for several countries:

Country     1980   1990   2001
Australia   11.3   14.2   18.0
Canada      14.3   18.6   17.8
France      21.1   26.6   28.8
Germany     23.0   22.8   27.4
Italy       18.4   23.3   24.4
Japan       10.2   11.2   16.9
Sweden      28.8   30.8   28.9
UK          17.9   19.5   21.8
US          13.3   13.4   14.8
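To make the trend concrete, here is a minimal illustrative sketch (Python, using only the figures in the table above) that computes how much each country's social-spending share of GDP changed between 1980 and 2001.

# Change in social spending as a share of GDP, 1980 to 2001,
# using only the OECD figures from the table above.
social_spending = {
    "Australia": (11.3, 14.2, 18.0),
    "Canada":    (14.3, 18.6, 17.8),
    "France":    (21.1, 26.6, 28.8),
    "Germany":   (23.0, 22.8, 27.4),
    "Italy":     (18.4, 23.3, 24.4),
    "Japan":     (10.2, 11.2, 16.9),
    "Sweden":    (28.8, 30.8, 28.9),
    "UK":        (17.9, 19.5, 21.8),
    "US":        (13.3, 13.4, 14.8),
}

# Sort countries by the size of the 1980-2001 increase, largest first.
for country, (y1980, y1990, y2001) in sorted(
        social_spending.items(), key=lambda kv: kv[1][2] - kv[1][0], reverse=True):
    change = y2001 - y1980
    print(f"{country:10s} {y1980:5.1f} {y1990:5.1f} {y2001:5.1f}  ({change:+.1f} points since 1980)")

On these figures Japan's burden rose by 6.7 points of GDP over two decades, roughly a two-thirds increase and the fastest proportional growth of any country in the table.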



Japan is simply not going to be spending public money on trips to the moon in light of ballooning expenses for promises made, when promising was easy, to populations now aging much more rapidly than once thought possible. This is a pattern repeating itself throughout the industrialized world. Social-welfare spending will increasingly crowd out research, education, law enforcement, and all the other contemporary functions in which governments engage.

The case of the U.S. is particularly interesting. While social spending has been relatively stable as a percentage of GDP, as a percentage of the federal budget it has not. According to the CBO, such spending was 31 percent of the budget in 1970, 44 percent in 1980, 45 percent in 1990, 53 percent in 2000 and 54 percent in 2004. It is common to hear that the inability to fund this or that federal program is due to inadequate taxation of the rich. But the reason social-welfare spending has not risen as a share of all that Americans produce, as it has elsewhere, is that apart from wartime the American people simply will not tolerate federal taxation much in excess of twenty percent of GDP for an extended period. (Federal revenue was 20.9 percent of GDP in 2000 and, after the Bush tax cuts, 16.3 percent in 2004, according to CBO data.) Taxes can rise and fall around the twenty-percent threshold, but entitlement spending grows relentlessly unless it is periodically restrained by the discipline of lower taxes. People thought the late Sen. Daniel Patrick Moynihan was nuts when he said that the motivation for the Reagan tax cuts was to starve the federal treasury. With twenty years’ hindsight that seems to have been true, and we are fortunate that it was.
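The crowding-out arithmetic is worth making explicit. The following is a minimal, purely illustrative Python sketch using only the CBO budget shares cited above; the assumption that total federal spending stays pinned near twenty percent of GDP is mine, introduced only to translate budget shares into rough points of GDP, and is not a CBO figure.

# Share of the federal budget left for everything other than
# social/entitlement spending, using the CBO shares cited above.
entitlement_share_of_budget = {1970: 31, 1980: 44, 1990: 45, 2000: 53, 2004: 54}  # percent

# Illustrative assumption only: total federal spending pinned near 20 percent
# of GDP, the rough ceiling on sustained taxation discussed in the text.
ASSUMED_SPENDING_SHARE_OF_GDP = 20.0

for year in sorted(entitlement_share_of_budget):
    residual_budget = 100 - entitlement_share_of_budget[year]             # percent of budget
    residual_gdp = ASSUMED_SPENDING_SHARE_OF_GDP * residual_budget / 100  # rough percent of GDP
    print(f"{year}: {residual_budget}% of the budget, roughly {residual_gdp:.1f}% of GDP, "
          "remains for everything else")

Under that stylized assumption, the slice of national output left for everything else the federal government does shrinks from roughly 14 points of GDP in 1970 to about 9 in 2004, which is the squeeze described above.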

More and more programs, treasured regardless of their effectiveness, are going to come under siege, and the culprit will not be inadequate taxation of the rich but growing crowding out by Social Security, Medicare, Medicaid and other, smaller “entitlement” programs. The word “entitlement” is unfortunate; no spending is an entitlement in the sense of deriving from natural law or even the Constitution. It is an “entitlement” only because previous statutes provided an automatic formula for increasing spending as the years go by. But statutes can be repealed as easily as they can be enacted, and so, with the twenty-percent constraint fully binding, the likely future is one of trimmed entitlements and reduced government activism in other areas.

Europe and Japan are, unfortunately, another story. In principle it is possible to lower the social-spending burden; Sweden did just that after such spending peaked at an astonishing 36.8 percent of GDP in 1993. But those kinds of cuts are much harder now. In Europe and Japan, where fertility is dramatically lower, the elderly increasingly dominate the population. To see how bad the demographics are, go to the Census Bureau’s global population projections website and compare the U.S. to Germany, Italy or Japan. The "population pyramids" you will be shown for the latter are close to inverted pyramids. Silke Uebelmesser has written of the imminent "gerontocracy," a state of affairs in which the elderly become such a potent political force that decreasing pensions becomes politically impossible. She predicts that Italy will cross that bridge in 2006, Germany in 2012 and France in 2014. Population aging there means a future of diminished expectations and intergenerational bitterness.

Monday, July 18, 2005

Why Does the Academy Tilt Left?

Among the most common (and largely true) complaints the general public has about university faculty is that it doesn't look like America - not for the usual reasons of race, sex, and so on, but because of its political views. I work in a university, and I find that a disappointingly large proportion of university professors go to astonishing lengths to deny what is to many people painfully obvious. Is academia leftist? To borrow the famous answer of Daniel Okrent, the former New York Times “public editor,” to the question of whether his newspaper was socially liberal: of course it is.

Evidence is not hard to find. People routinely publish survey data indicating that hugely disproportionate numbers of university professors are registered Democrats, or hold beliefs that place them well out on the left tail of the American political distribution. For those who prefer to look into it themselves, it is an interesting exercise to visit the campaign-donation website of the Center for Responsive Politics. Among other things, it lets you call up campaign donations (above a certain minimum size) by occupation. Go to its page for looking up individual donors, type “professor” under occupation and “2004” under “election cycle.” The first two pages of donors, which were all I bothered to look at, come back overwhelmingly lefty: of 100 contributions, 91 were to Democratic candidates or leftist political groups, which is lopsidedly disproportionate. The UCLA Higher Education Research Institute reports that as the country has moved to the right, the faculty has actually become somewhat more leftist. Forty-eight percent of faculty surveyed described themselves as “liberal” or "far left" in 2001, versus only 42 percent in 1989. (The share of women faculty who so describe themselves has grown even more.) Eighteen percent are “conservative” or “far right.” Research by Daniel B. Klein and Andrew Western finds that registered Democrats outnumber Republicans by 9.9 to 1 at Berkeley and by 7.6 to 1 at Stanford. A core principle of the scientific method is replication, and such findings of bias appear to be common across numerous methods of measurement. Contrary findings of no bias, or of rightward bias, do not exist, to my knowledge. So let us accept the hypothesis as true.
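For a rough sense of how far outside chance that 91-of-100 count lies, here is a minimal illustrative sketch. The 50/50 baseline is my assumption rather than a figure from the surveys cited above, though the roughly even split among the doctor donations discussed below suggests it is not far off for donors generally.

# How likely is it that 91 or more of 100 donations lean left if donors
# split 50/50 between left and right? (The even split is an assumption
# made for illustration, not a figure from the data cited in the text.)
from math import comb

n, k, p = 100, 91, 0.5
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} lean left | even split) = {tail:.2e}")

Under an even split, a count that lopsided would essentially never arise by chance; the probability is on the order of 10^-18.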

Why is it so? Robert Brandon, chair of the philosophy department at Duke, notoriously claimed that academics are smart, and so the absence of conservatives in academia may indicate a lack of smart folks on the right. But that won’t do. Doctors are smart too, and going to the CRP donor-search site above and entering “doctor” yields a donation pattern almost identical to the population at large. In my scanning of the first two pages, I found 23 donations to the left, 25 to the right and two to nonideological doctors' lobbying groups.

In economic language, the explanation that naturally suggests itself is that academia is a cartel, an arrangement in which producers who should compete (here, universities) instead collude – classically to fix prices or quantities, here, implicitly, to fix the range of acceptable views. Certainly the opportunity presents itself. Like any hiring process, faculty hiring requires that the candidate be satisfactory to his employer. But the faculty process is different in at least three ways. First, the hiring decision often requires near-unanimous approval, so that a small minority can shoot down any hire it perceives to be unsatisfactory. Once entrenched, bias can then be difficult to dislodge. In a private firm, by contrast, one manager might make the hiring decision; if he consistently hires poorly his division performs poorly, and he may be fired. Second, posted academic job requirements are often extremely vague, limited to descriptions of the desired field of specialization (and often even those are not included). Apart from those research interests, the postings offer unhelpful boilerplate such as “potential for major scholarly achievement” or “tremendous potential as a teacher.” This means that, within broad limits, any criteria can be applied. Third, the tenure process itself provides a second filter for purging undesirable candidates who make it through the screening at the hiring stage. Defenders of tenure note, justifiably, that it promotes freedom of thought for those who receive it, but they fail to note that it can also serve to purge undesirable ideas, and to strengthen ideologically protectionist universities that might otherwise be vulnerable to competition.

Despite the claims of some that the competition to hire the best scholars militates against an ideological cartel, the university market is such that those competitive pressures may not be as strong as those compelling, say, software firms to hire the best programmers. This is all the more true in fields where empirical claims are not easily subject to empirical rejection (or where empirical verification of claims about the world is not even an issue). In such fields false (or at least statistically unusual) knowledge can persist longer in the marketplace of ideas. Stanley Rothman and others find that the greatest leftward bias occurs, in descending order, in English literature, performing arts, psychology and fine arts. Three of these fields are in the humanities and generally do not rely on experimentation to weed out weaker ideas. If the competition metaphor is misplaced, a cartel could easily persist. Following techniques commonly used to look for discrimination in labor markets generally, Rothman et al. also find unexplained employment differences even after accounting for professional accomplishments. (The usual social-science cautions about a hypothesis whose testing is in its early stages apply.)

Another explanation, not necessarily inconsistent with the cartel theory, is self-selection. The pool of potential candidates in the academy is disproportionately leftist, this story goes, and so the hiring decisions almost inevitably are as well. (This explanation would be intolerable in a racial-discrimination litigation setting, but let that pass.) Undoubtedly there is some truth to it, as anyone who spends time as a graduate student in a reputable Ph.D. program outside the sciences can attest. But it is no answer either. The academic life has certain features that might plausibly matter more to the thoughtful lefty than to the hard-charging righty – long vacations, flexible work hours, substantial freedom in one’s work assignments (research topics, classes taught), substantial opportunities for extended foreign travel. But it is also possible that right-leaning potential academics rationally anticipate that leftists dominate the hiring and tenuring process, and that their prospects in academic work will therefore be limited. (This may explain why the right appears to have an advantage in a different intellectual market – the public intellectuals who work in think tanks, not just in Washington but throughout the country.) As far as I know, no attempt has been made to discriminate between the self-selection and cartel hypotheses. But neither is particularly friendly to the idea that competition in the academic marketplace for ideas is so robust that the ideas professors advocate are likely to be the best ones.

As a final note, in the same Duke article that quoted Brandon, the history chair John Thompson notes that “[t]he interesting thing about the United States is that the political spectrum is very narrow,” comparing it unfavorably to Canada, where there is more support in the public at large for a big welfare state. There are two problems with this argument. First, the welfare state is probably the only issue on which European and Canadian opinion is broader than American opinion. On many matters debate is far more robust, and the spread of politically permissible views far wider, in the U.S. generally – though, conspicuously, not on American campuses. Where in European political-party platforms are campaigns for pro-life causes, gun ownership, traditional marriage and so on? Advocates of single-payer health care, slavery reparations, socialism and the like are not difficult to find on campus (even though, in the case of socialism, one needs an electron microscope to find them in the population at large), but the views that animate the American right are far rarer on campus than in society at large. Second, what does it say about the American academy that the standard of comparison is not the U.S. itself but other countries? The proper standard is not Europe, or some idealized distribution of true ideas, but the actual distribution of American views. Making that comparison yields decisive evidence in favor of the proposition that we faculty are simply not like everyone else. Is that a problem? That is a question for another day.

Thursday, July 14, 2005

The Economics of the Jihad II - The Demand for Jihad

Earlier I claimed that Europe might prove to be an unusually fruitful front for recruiting labor for the jihad firm. Subsequently we have learned that the London transport bombers appear to have been entirely home-grown. The idea of the jihad as a product being sold in various markets, like the idea of the jihad as a firm, is illuminating up to a point. In particular, the demand for jihad (or, alternatively, the supply of jihadis) may be considerably more robust in Europe than in the Middle East over the long term.

Economic theory supposes that people make choices by rationally ordering their options and choosing the best one. Why would someone choose to support the jihad, or even to participate in it? Clearly, by the individual’s own measure, the jihad must offer something that opposing it or indifferently accepting it does not. Note that, contrary to older economic thinking, this decision is for most people not a matter of money. (Although money may loom large for the middle- and high-level managers in the jihad firm who have an opportunity to control large amounts of money being moved around the globe.) Many suicide bombers, other than in the West Bank and Gaza, appear on very casual empiricism to be middle-class and educated, and one supposes that the rest of the jihad workforce is similarly comfortable. So for most the question is not the pursuit of income. Rather, does the jihad offer something, emotionally or otherwise, that its rejection does not?

The answer is a function of available information about both life under the sharia and its alternatives. The somewhat counterintuitive implication of this model is that the jihad sells better in Europe, especially Old Europe. Many in the Middle East – Turks, Arabs and Iranians – have personal experience either with life under Islamist rule or with Islamist terror. They know that Islamism turns out badly in practice.

Once they take power, the Islamists must fix potholes like everyone else. Part of the appeal of Hezbollah in Lebanon is often said to be its ability to provide public services through its preexisting infrastructure. But Iranians have found after a quarter-century of Islamist rule that ayatollahs can be as corrupt as anyone. Hashemi Rafsanjani is said to have become quite wealthy over his time as a senior member of the Iranian ruling class, even as the Iranian economy has languished. (According to World Bank data, between 1980 and 2001 the growth of per capita income in Iran was only 0.8 percent per year, although recent increases in oil prices have improved that performance.) In Algeria, Islamism almost tore society apart, with hundreds of thousands of deaths in a brutal civil war. If the army of car bombs and televised beheadings in Iraq were to run candidates, how many votes would they get nationwide? At the end of the day the appeal of the fanatic sharia society has limits in the Islamic lands.

Of course, this is not true to the same degree everywhere, but even the exceptions prove the rule. The jihadi ideology apparently flourishes in Saudi Arabia and portions of Pakistan. (But in Pakistan, and presumably elsewhere, we must be careful. The economist Tahir Andrabi’s research indicates that the press has dramatically exaggerated the appeal of madrassas for Pakistani parents: rather than educating up to a third of Pakistani children, as is sometimes claimed, his analysis of actual Pakistani census data indicates that they educate less than one percent. John R. Bradley’s new book Saudi Arabia Exposed argues, based on his experience talking to Saudis away from Riyadh and apart from the highly polished, telegenic, Westernized Saudi elite, that the fanaticism we associate with that society is actually shared by only a small minority and is confined primarily to one ethnic group.) Even conceding the greater appeal of the jihadi ideology in certain countries, those are largely lands where political opposition is channeled into Islamism because nothing else will be tolerated. The long-term growth potential of the jihad in the Middle East may be limited, and to the extent it exists it will be disproportionately in the same failed states. Middle Easterners have by and large seen the jihad close up, and increasingly do not like what they see.

Europe is another matter. Muslims in Britain and France are unemployed at higher rates than non-Muslims, but the difference is not unlike the difference in the U.S. between blacks on the one hand and whites and Asians on the other. (Arabs in the U.S., many of whom are admittedly Christian, have significantly higher incomes and education levels than Americans overall.) But black Americans are Americans in a way that some European Muslims either are not, or feel they are not, European – and the perception of outsider status matters far more for the demand for jihad than the reality. The aggressive European embrace of multiculturalism implicitly tells minority groups that they are permanently different. Given that one is always to be seen as the other, it is a natural human tendency to wish to be on the top rather than the bottom, all the more so when one is treated, as the multicultural model demands, like a zoo animal – fun to look at, well-fed, but never able to escape the cage. Theodore Dalrymple has written hauntingly of the contempt with which residents of the Muslim ghettoes of Paris view their perceived outsider status, a sentiment multiplied by the suffocating generosity of the French welfare state:
Benevolence inflames the anger of the young men of the cités as much as repression, because their rage is inseparable from their being. Ambulance men who take away a young man injured in an incident routinely find themselves surrounded by the man’s “friends,” and jostled, jeered at, and threatened: behavior that, according to one doctor I met, continues right into the hospital, even as the friends demand that their associate should be treated at once, before others.

Of course, they also expect him to be treated as well as anyone else, and in this expectation they reveal the bad faith, or at least ambivalence, of their stance toward the society around them. They are certainly not poor, at least by the standards of all previously existing societies: they are not hungry; they have cell phones, cars, and many other appurtenances of modernity; they are dressed fashionably—according to their own fashion—with a uniform disdain of bourgeois propriety and with gold chains round their necks. They believe they have rights, and they know they will receive medical treatment, however they behave. They enjoy a far higher standard of living (or consumption) than they would in the countries of their parents’ or grandparents’ origin, even if they labored there 14 hours a day to the maximum of their capacity.

But this is not a cause of gratitude—on the contrary: they feel it as an insult or a wound, even as they take it for granted as their due. But like all human beings, they want the respect and approval of others, even—or rather especially—of the people who carelessly toss them the crumbs of Western prosperity. Emasculating dependence is never a happy state, and no dependence is more absolute, more total, than that of most of the inhabitants of the cités. They therefore come to believe in the malevolence of those who maintain them in their limbo: and they want to keep alive the belief in this perfect malevolence, for it gives meaning—the only possible meaning—to their stunted lives. It is better to be opposed by an enemy than to be adrift in meaninglessness, for the simulacrum of an enemy lends purpose to actions whose nihilism would otherwise be self-evident.

When there is a belief that society is not built for you, the demand for an alternative, no matter how absurd it appears to outsiders or to those closest to you, cannot help but grow. And the combination of economic stagnation, excessive devotion to multiculturalism in lieu of assimilation, the ease of market entry for jihad entrepreneurs owing to free-speech traditions, and the possibility (thanks to the expansive welfare state) of living a comfortable life without acquiring the dignity and self-respect that come from having to make one’s own way in the world means that many Western European nations will become increasingly expert at producing young Muslim men who are enthusiastic buyers in the market for jihad. In that sense it is perhaps not surprising that, as Toronto’s Globe and Mail reports, the London bombers were apparently recruited at a government youth center.

Monday, July 11, 2005

The Economics of the Jihad I - The Jihad as a Firm

The world jihadi movement bears some useful resemblance to a business. Like a business firm, it sells products and faces market constraints, so the insights of the theory of business organization can be usefully applied. Like all abstractions, the approach is of course subject to diminishing returns.

The idea of Al Qaeda and its affiliates selling a product undoubtedly seems forced if not ghoulish. But in fact, reduced to its essentials, the jihad does consist of a group of individuals, with a particular organizational structure, trying to sell two products. One is sold to citizens and leaders of Western countries. They must be persuaded to adopt certain policies. The most obvious at the moment is the withdrawal of military forces from the entire Muslim world. At some future point, perhaps, demands related to Muslims in Western countries or even the adoption of the sharia might be “sold” via the techniques of repeated attacks on military, commercial, and purely civilian targets. The other product is the appeal of the jihad, and the target audience is the potentially susceptible subset of the world Muslim community and especially the young men within that group. To close that sale, presumably, more members of these communities must be persuaded to feel an ideological sympathy for the jihad or that the jihad is a movement on the rise. Indeed, the use of simultaneous attacks as a “trademark” is an important way for the Al Qaeda leadership to establish a brand identity, and for independent contractors below them to signal their sympathy.

To increase sales, the firm has divisions across the globe. Unlike, say, the marketing, production and accounting divisions of a large corporation, these divisions each independently perform most of the firm's functions – they manufacture propaganda, they raise money, they put out “product” – in Iraq, London, the Philippines, Thailand, and elsewhere. The divisions appear to have little interaction with one another, and are only tenuously connected – perhaps only by inspiration rather than by a formal chain of command – to the “head office,” which consists of Bin Laden (if he still lives), Zawahiri, and so on. Each division produces the product for a particular geographic area.

The economist Oliver Williamson has described this structure as the “M-form” corporation, to distinguish it from the “U-form” corporation, whose divisions are not regions or products but the separate functions of the unitary production of (perhaps several varieties of) a single product. A U-form corporation has one finance division, one accounting division, one marketing division, and so on. In the jihad as a U-form corporation there would be separate divisions formed at the central level and charged with producing the propaganda, carrying out the attacks, and coordinating relations among divisions. Instead, each of those tasks is done within each cell.

In addition to its divisional structure, the jihad is highly decentralized. Like any hierarchical structure in nature (the polymer, the firm, the nation-state), the jihad has its own degree of ties across divisions and along vertical lines. The horizontal ties appear, on very casual reading, to be weak – the Iraqi division leader Zarqawi, for example, does not plan attacks in Germany. At least since 9/11 (an operation Bin Laden himself had to approve), the vertical ties appear to be quite weak as well. It appears that no one goes to the badlands of Pakistan or Afghanistan to receive consent for attacks such as those in London or Madrid.

Gordon Tullock, in Economic Hierarchies, Organization, and the Structure of Production, posits three salient reasons for a more hierarchical and cohesive structure. That the technology requires it (a pharaoh might need a master supervising dozens of lower-level supervisors, themselves in charge of dozens of people hauling huge bricks up a pyramid) is not an issue here: it presumably does not take many people to organize and coordinate an attack carried out with simple materials, and only the explosives may (or may not) have to be procured interdivisionally. Hierarchies also increase the ability to monitor and punish opportunistic behavior – starting a competing jihadi group based on some different theology, skimming money off the financial transfers – and to provide consistent and reliable information to “investors,” in this case those who knowingly fund the jihad. At this stage only this latter reason might argue for more hierarchy.

Given this, the highly decentralized structure should persist for some time. It also has one other benefit not so relevant to Tullock’s analysis of conventional firms: if a cell has a betrayer in its midst, the damage up the hierarchy is very limited. Disrupting one cell does not disrupt the entire firm. The analogy would be the cost to GM of losing several key people at the top of its marketing division versus the cost to Avon of losing its chief marketers in Iowa, Florida and Venezuela; the former is certainly greater. The independent divisional structure also lends itself to taking advantage of entrepreneurial creativity based on local knowledge. A cell in, say, Paris will have better knowledge of which attacks are likely to generate the greatest payoff, both among the French and among potential jihadis, and of how best to carry out such an attack within the contours of French society. (Success includes evading the French intelligence services.)

But if the jihad grows, the independence and isolation become costly. If the propaganda campaign among young Muslim men (and Europe is perhaps the most likely candidate) is effective, so that the jihadi ranks grow, then presumably a more hierarchical organization must be established to keep rival centers of power from creating the problems described above. Such an organization becomes easier to decapitate, but also (until that happens) able to engage in more complex activities. There has been much concern about the jihad’s ability to mount an attack involving weapons of mass destruction, particularly nuclear weapons. The acquisition of the resources (including stolen nuclear material) and the timing, nature and delivery of such an attack suggest the need for a more complex organizational structure. Until it becomes rational, for the reasons above, to establish one, the decentralized structure will persist, and hence the size of feasible attacks may be correspondingly limited.

Finally, the fact that each division is for now relatively independent suggests that the division of labor within cells is a strength for the division but also a point of vulnerability. The propaganda specialists in particular may be unusually important links in the chain. Each propagandist is implicitly charged with recruiting enough men to make the cell productive; recruiters bring in members, and those members make the divisions more powerful. If more people are drawn in than the cell can use, presumably some of them can go on to establish other cells. Allowing the most perversely charismatic recruiters (one thinks of the Finsbury Park mosque prior to its takeover by moderates, for example) to proceed unchecked is therefore unusually costly. The tolerance of hard-core jihadi sentiments, while historically the correct thing to do in most Western societies, is then unusually expensive.

Monday, July 04, 2005

Look at Me Now: The Cult of Self-Expression

What do Danish marriage patterns, Japanese T-shirts and that faded John Kerry yard sign your neighbor refuses to take down have in common? According to the English-language Copenhagen Post, Danish men are increasingly fond of marrying women for their names. Not, as people used to do, because the name is prominent – the name signifying the importance of the family, so that to marry someone of a particular name was to marry well. Rather, what is so desirable about these women’s names to these men is that they are unusual:

She [the somewhat pedestrianly named Susanne Christiansen, head of the marriage registrar’s office in Aalborg] along with her colleagues in Århus and Copenhagen, says men are especially willing to rid themselves of traditional Danish surnames like Hansen, Jensen, and Nielsen. The offices have no statistics to back up their claim, but all agree that the trend is on the rise.

‘But it depends entirely on the wife’s name,’ said Christian Nielsen, leader of the marriage registrar’s office in Copenhagen. ‘In contrast to women, men do not take the name Olsen out of love.’

Michael Lerche Nielsen, name researcher at the University of Copenhagen, has his own theory about their reasons.

‘Split family patterns mean that Danish men are not as fixed on their family name as they used to be,’ he said. ‘Instead they now pick the name that fits their own self-image.’


Olsen, Hansen and Jensen are apparently to Denmark what Smith, Johnson and Williams are to the U.S. Surely it must be new in the long span of the human marital bond to seek out a spouse as a means of having a cool-sounding driver's license.

The public proclamation of how different one is has become one of the signature features of our age. The American T-shirt culture is an excellent example of this peculiar phenomenon. While fashion has always been designed to draw attention, the expressive T-shirt (or bumper sticker) proclaims to anyone who will look what one’s peculiar tastes and enthusiasms are. What exactly is the value of announcing to the anonymous strangers one passes on the street that one roots for a particular sports team, or holds certain political views, or finds a certain string of words funny? There is some profound need in modern life to be recognized as distinct, even by people one will never see again. (I say this as someone with an array of shirts bought over my lifetime that is second to no one’s.)

And the T-shirt culture, and the cult of self-expression of which it is emblematic, is hardly distinctly American anymore. It has spread throughout Europe and Australia. Japan, with its notoriously (to a native English speaker) goofy English-language T-shirts, is an example of self-expression in which the expression is not even understood by the self.

And T-shirts are hardly the only example. The trend toward odd names among American children is well known. Roland G. Fryer, Jr. and Steven D. Levitt report, extraordinarily, in the August 2004 Quarterly Journal of Economics that in the 1990s thirty percent of black girls born in California had unique names – names that none of the millions of other girls of any race born in California during that period shared. And the phenomenon is hardly unique to black Americans; about one white girl in twenty was given a unique name. Many modern parents feel a powerful imperative to make sure that their kid is the only Cabriolet on the school roll. Even some of the local opposition to chain stores and restaurants entering areas that currently have none is probably driven by a need not just to preserve but to publicly assert one’s uniqueness. (As are many blogs, including this one.)

What does this say about us? Once upon a time, when religious faith was more widespread, life was often devoted mostly to doing what was necessary to spend the afterlife pleasantly. Now, seemingly, the most important thing a person can do is not to live well, to do good or to achieve, but to be famous, which is something else entirely. And if you can’t be famous, you can at least be different.

Perhaps the most vivid, and important, example of this phenomenon is political correctness. The economist Stephen Morris has written that PC is a way to signal publicly that one is not like those other folks – racist, sexist and whatnot. By using preposterous formulations such as “undocumented immigrant” (which calls to mind someone whose papers blew off the side of the boat on the way over), the speaker is trying to establish his bona fides as someone who says and thinks the right things, and certainly isn't like those people. Such usage is all about the speaker and not the listener. Like the other manifestations of the look-at-me culture, it is an artifact of a time when how one is seen is considerably more important than what one does.