Monday, August 29, 2005

The Money Tree

Here is an interesting experiment to conduct. Go to the New York Times archive search page. Type in the string “government pays for.” As of the time I write this, from 1981 on there are 123 articles with that phrase, and there are 289 between 1851 and 1980. But “taxpayers pay for” shows up only 43 times after 1980, and only once before 1981.

The difference between those two phrases, and hence the implications of the difference in their frequency of use in America’s newspaper of record, is important. (I have replicated this disparity in Lexis/Nexis searches across many newspapers.) “Government pays for” is a phrase that suggests that there is a single, conscious, decision-making entity called “government” that can conjure up money at will to be devoted to this or that purpose. I call this unexamined premise the “money tree fallacy.”

It is a fallacy because of course there is no “government” in the sense I just used it. To suppose that “government” can spend money on this or that productive purpose is to commit what philosophers call a category error – attributing to an entity traits it does not really possess, but that belong to some other type of entity with which it is being implicitly confused. Governments do not spend money. Rather, individuals within the government make decisions, according to some set of institutional rules – voting rules, the bureaucratic discretion individuals employed by the government possess, etc. These decisions raise money from some individuals and transfer it to others. This is the fundamental truth of the matter, even if the laws that enable this transfer, and the people who debate them, speak in terms of groups – of subsidies to “farmers” or “college students” or “the poor,” of taxes on “the rich” or “small businesses,” etc.

One of the most important habits I feel I must break in students, particularly in lower-division classes, is belief in the money-tree fallacy. “The government,” “big business” and “corporations” are in my experience the favorite targets of this type of thinking. In each case these abstractions are seen as large pots of idle cash to be tapped for some more valuable end. (The ease with which people resort to arguing that lower taxes on “the rich” are to blame for insufficient expenditure on government activities they value more than their fellow citizens do, even if it is previous and increasingly burdensome government transfers that are most likely responsible, is another example.) To argue that “the government” should pay for something is to argue that your fellow citizens should, in proportion dictated by the often absurdly arbitrary tax code of the relevant jurisdiction (the federal tax code, for instance). To argue that “corporations” should is really to argue that certain individuals – shareholders, employees and customers – should, perhaps in unknown and certainly capricious proportions. It is to believe that higher consumer prices (and hence lower consumer standards of living), lower employee wages, fewer jobs and lower stock values are justifiable as a means of transferring value to the recipients of whatever government spending results. Is that worthwhile? Reasonable people can differ on that. But an informed citizenry making intelligent decisions in a consensually governed society requires seeing the problem for what it is. Government benefits flow to, and the cost to make them available is borne by, individuals with their own values, dreams and ambitions. There is no getting around it. There is no magic money tree.

Friday, August 19, 2005

Earth Last! Conservation as a Special Interest

What are we to make of the paper published in the top scientific journal Nature that advocates releasing wild animals from Africa onto the Great Plains of America, because of their genetic similarity to animals that once roamed there but are long since extinct? The plan is striking in its audacity. But even more interesting is the unexamined goal of the authors. More and more scientists believe that the arrival of humans in North America coincided with the disappearance of most large mammals (think of the woolly mammoth or the saber-toothed tiger), as has happened in many places over time. And African animals are apparently sufficiently similar that we can make things almost the way they were by carting these beasts across the Atlantic. While there is some formulaic rhetoric about preserving biodiversity, what the authors really believe is that their plan is a chance to undo the sin Pleistocene humans committed in the course of surviving.

If one is concerned about dangerous animals wreaking havoc throughout the previously tranquil Midwest, not to worry. Lead author and biologist Josh Donlan of Cornell University tells the BBC that "all of this will be science-driven." And if the hapless Dakotan proles are concerned about suffering the same fate as the occasional Californian who encounters a mountain lion, "There are going to have to be some major attitude shifts. That includes realizing predation is a natural role, and that people are going to have to take precautions."

Why on earth should we be trying to promote the re-establishment of the distribution of genotypes that prevailed thousands of years ago? The ease with which Prof. Donlan proposes upsetting and even endangering the lives of people unfortunate enough to live in the way of his plans, and with which he assumes that having a "scientist" in charge makes everything all right, is an example of a disturbing trend in much of the environmental left. This way of thinking has also colonized some portion of scientific ecology, and it views the human presence on the earth, especially as we remake it to serve our ends, as one gigantic mistake. Restoring large tracts of the earth to their pre-human state is taken to be a self-justifying moral imperative. When people (none of whom, one supposes, are conservation biologists) have to disrupt their plans and goals to avoid being eaten by a newly introduced lion thousands of miles from where it belongs (or to sell their land to promote biologists’ and environmentalists’ peculiar romantic attachment to the primitive), they have only themselves to blame for daring to remake the world to begin with.

To the extent that this kind of thinking reflects a well-constructed moral code (and there are often reasons for doubting that) it is a peculiar one. It suggests not just that the pristine, pre-human earth is a moral good in its own right, worthy of respect, but that what humans do to it in the course of pursuing happiness is unworthy of respect. But this is exactly backward. What humans do to the earth can only be judged by the criterion of how it affects other humans. The earth is not a moral actor capable of choices worthy of our respect. We cannot "protect" or "defend" or live in "harmony" with the earth, which is simply a gigantic ball moving through space. What we can do is think about how what we do to the earth affects other humans. (Whether animals deserve a place in our moral calculus is a more complex philosophical question.) It does not make sense to talk about preserving the earth without attaching that goal to some utilitarian end. As an example of what not to do, here is the list of alleged wildlands crises by the Sierra Club, about as mainstream an environmental organization as one is likely to find:

What's Been Lost: A Snapshot
• More than 95 percent of America's old-growth forests are gone.
• More than half of America's National Forest lands (52 percent) have been exploited by the timber, oil and mining industries.
• More than 90 percent of our prairies have been plowed under or paved over, and more than 99 percent of the tallgrass prairie is gone.
• More than half (52 percent) of America's wetlands have been drained and developed and the nation continues to lose more than 100,000 acres of wetlands per year.

The number of threats to our wildlands has increased dramatically in the last 100 years: pollution, oil and gas drilling, development, suburban sprawl and off-road vehicles have added to the damage done by logging, mining and overgrazing. Wild America is under siege.


It is not clear what is so special about “Wild America,” or why an “old-growth forest” is superior to a plain old forest. It is often said that, primarily due to rising agricultural productivity, there is more total forest cover in the U.S. now than in 1900. (I have been unable to verify this factoid.) Some people like to look at old-growth forest, and some people are happy from a great distance even knowing it is there. Some people couldn’t care less, and some people attach great value to the paper produced from tree farms. One of them, perhaps, will use it to sketch a great novel or scientific paper. The "exploitation" of national forests is done in the course of providing real humans, American and otherwise, the things they value as they navigate their way through life. So too with paved roads and the buildings constructed on “wetlands.” (The use of the word “wetlands” is itself a polemical masterstroke, as a “wetland” cries out for the human-free touch much more than a “swamp” or a “marsh,” which the Oxford English Dictionary lists as synonyms.)

What all of the Sierra Club’s objections have in common is that they are really narrow preferences for using land in particular ways. Some of the fellow citizens of Sierra Club members share these preferences; some do not. So how to resolve this disagreement? One answer is the way we resolve most conflicts over resource use in society: through the exercise of property rights. There is nothing about any of these concerns that is different from concern about whether an acre should be used for wheat or corn, or whether another restaurant or gas station is the best use of a particular piece of land in a particular neighborhood. We routinely let property rights sort out these conflicts, and whether land should be used to display some (very possibly imaginary) pre-human idyll or as a ski resort is exactly the same sort of problem. Calling it a conflict of private interests is not as compelling as calling it “preservation,” but that is what it is. Even the argument that the pre-human landscape is being preserved for future generations, who have no say in the matter if it is sold off to some corporate farm, is not as compelling as it first seems. We leave it to mine owners to decide the rate at which the minerals will be extracted, secure in the knowledge that prices will convey the correct information about their scarcity, and we routinely allow easements to be placed on land to prevent future development. And biodiversity preservation is also a task often entrusted to the market, via game preserves, private company seed banks, etc. It is no more rational to bypass the market to preserve pre-human land than it is to use government force to require that such land be converted to shopping malls.

Environmentalist proposals of all types, especially those to conserve species or land, should be judged strictly by a human-centered morality. Implicitly, that is how they are being judged anyway, with the inability of the environmentalist to recognize his own interest as parochial being the primary stumbling block to seeing that. The environmentalist should be made to explicitly indicate how his proposals will make some set of humans better off, and whether that gain is worth the costs selectively imposed on other humans. Some proposals, such as many involving pollution reduction, will clearly pass such a test (although the precise nature of the restrictions to be imposed is still up for debate). But others will not. The clarity that comes from seeing environmentalism and conservation as just another special interest involving scarce resources will go a long way in helping us understand which of their special interests merit state action. The earth is a means to an end, not an end in itself.

Monday, August 15, 2005

Immigration, Then and Now

The U.S. is in the midst of a wave of immigration that has not been seen since the early part of the 20th century. Comparisons to the earlier great wave, which lasted from roughly 1850 until the Immigration Act of 1924 gave the country a prolonged immigration time-out (one that lasted until 1965, when quotas were substantially loosened), are instructive. Below, gleaned from a Bureau of the Census report on the 2000 census, are historical figures on the percentage of the U.S. population that is foreign-born:

Year    Percent foreign-born
1850    9.7
1870    14.4
1890    14.8
1910    13.6
1930    11.6
1950    6.9
1970    4.7
1990    7.9
2000    10.4



The foreign-born share of the population is thus neither as high as in the past, nor is it growing as rapidly. But now as then it is heavily concentrated. For all the talk about the difficulty of educating Spanish-speaking children in the meat-packing towns of Iowa, the foreign-born population is overwhelmingly located in a few places. Five metropolitan areas with only about 20 percent of the U.S. population – New York, Los Angeles, the Bay Area, Miami and Chicago – have 49.8 percent of the foreign-born population. Since these places (the other such place being Washington DC, where the foreign-born share is close to the national average) are where the movers and shakers of American opinion congregate, our views of immigration will rightly or wrongly be determined by what people there think of it. In New York the sentiment is overwhelmingly positive; in California there is significantly more resentment; Texas probably lies somewhere in the middle.

What do we know about the foreign-born population? They are older than the native-born, but a far higher proportion of them are of working age (fewer children, fewer elderly). They have bigger households (3.72 people vs. 3.1), they graduate from high school at a somewhat lower rate, their households earn about $5,000 less than native-born households, and their occupational distribution tends toward the less-skilled. And there are geographic differences among the foreign-born, with those from Latin America (about half the foreign-born total) tending to measure lower with respect to education and income than the native-born, those from Asia (a quarter of the total) about the same, and those from Europe and Canada scoring higher.

But the story of the low-skilled immigrant is as old as the country itself. The interesting question is whether people who come here poor stay poor and bequeath poverty to their descendants. One report by a group that favors lower immigration finds that foreign-born Hispanics (the group, rightly or wrongly, of most concern) have some difficulty converging to the middle class, but their children do much better. There is little reason to think that this is much different from the way it was when Jews, Italians and Irish were the Mexicans, Haitians and Indians of their day. If the country’s assimilation machinery is the same in 2005 as it was in 1905 then immigration, based on historical precedent, is largely a non-problem.

But the country is in fact quite different. The two most important differences that rudely impose themselves onto the immigration controversy are the rise of the welfare state and of multicultural ideology. It is said that up through the 1930s people could pass across the U.S.-Mexico border freely to work without incident. It is probably no coincidence that the establishment of the modern requirement of an implied work permit (in the form of citizenship or a green card) coincides with the establishment of the proto-welfare state at the federal level in the form of Social Security. The more extensive the web of public services provided, by necessity of circumstance if not by law, to all comers, the more tribalist rejection of the outsider will naturally rise to the fore. Some resentment of “those people” is inevitable in any multi-tribal society, but it can easily be exacerbated if one feels that “they” are drawing on scarce public services that are supposed to be for “you.”

In addition, the ideology of assimilation, which came to us as naturally as drinking water in an earlier era, has arguably been supplanted by a movement to relish all differences to the point of preserving them even when those who are different are inclined to be less so. One thinks of Pat Stryker, a Colorado billionaire who had a child in a public bilingual school and gave a significant amount of money to defeat an initiative in that state that would have outlawed bilingual education. There is little evidence that the poor Spanish-speaking households that bilingual education is supposed to benefit are big supporters of it. Rather, the primary support comes from cultural protectionists who wish the state to subsidize the preservation of their cultural heritage. (It is not difficult to find tales of students with Spanish last names, or students who are immigrants from Spanish-speaking countries yet fluent in English, being funneled into bilingual education in order to pump up enrollment.) There is no reason to object to those who wish to promote a cultural heritage, as long as they do it on their own time. But when it is promoted as a boutique preference, or as a replacement for things that parents who care about it should be doing themselves, it enhances tribal tension. And bilingual education is representative of a broader hostility to the public good of a common heritage – a good which, if it is adequately invested in, well-trained citizens can draw on as they interact with fellow citizens of different backgrounds.

The ultimate fear is that multiculturalism creates citizens unable or unwilling to transact across tribal lines. Inter-tribal trade is enhanced by a common language, which functions much as common currency does for the 50 states. I have found little data on the extent of retreat into tribalism, but another Census report finds that the percentage of “linguistically isolated households,” those where no one over 14 speaks English “very well,” has risen by a rather distressing 35.3 percent between 1990 and 2000. (The question was not asked prior to that, and the total number of households is still relatively low despite the huge rise, at 11.9 million.) To make it concrete, someone who speaks only Spanish will have a considerably more difficult time making it than someone who speaks English. (Someone who speaks both obviously has the greatest advantage, because he can easily transact in both linguistic currencies.)

In sheer economic terms, a multitribal society offers many benefits. A simple taste for diversity – better restaurants, more ability to master foreign languages – makes it an improvement for some, other things equal. (A taste for uniformity, which is a fancy way of saying prejudice, is also plausible, but such a rationale would never be morally, economically or legally sufficient to justify restricting immigration.) Immigrants have been shown to create trading networks with their ancestral lands, improving the productive capacity of the U.S. economy. There are solid theoretical reasons to think that immigrants are self-selected for risk-taking, hard work and creativity, and a country full of such people will be not just a more productive but a more interesting place to live. But most of those benefits depend on the ability to trade across tribal lines, and to the extent that government policy reinforces tribal differences we end up with a permanently sullen, tribally resentful country resembling a Belgium or a Quebec more than a country where people get along.

In his book One Nation, After All, the Boston College sociologist Alan Wolfe, based on interviews with about 200 suburban Americans, finds them generally devoid of racial prejudice and comfortable with immigration, but hostile to bilingual education. This is not some unenlightened nativist backlash, but simple recognition that people are less likely to transact fruitfully (and get along more generally) when they cannot communicate, either literally or culturally. The more public policy works in opposition to quick mainstreaming, the more resentful people will be of immigration seemingly promoted in part to accentuate such differences.

While still trailing the great immigration wave of prior years, what we are witnessing now is transforming the country. The objective differences between immigrants and natives with respect to religion, physical appearance (“race”) and the like are much greater now than in 1900, but subjectively probably much less. (Average native-born Americans are probably much more comfortable with their Indian-born doctor than their great-grandparents were with their Irish day laborers.) Given the improvements in modern transportation and communication technology, and the continuing gaps in opportunity between the U.S. and other (sometimes dysfunctional) immigrant-generating nations, mass immigration is a fact that it is impossible to undo short of police-state measures. Given that we have built it and they will come, the most urgent task is to provide an environment where people can get along, and current social trends make that about a 50/50 proposition.

Monday, August 08, 2005

India, China and Humanity

Brazilians apparently like to say of their own country that it is “the country of the future, and it always will be.” Much the same could be said of most of the developing world in the entire postcolonial period, particularly the two giants of India and China. But that of course is changing. Starting in roughly 1980 in China and perhaps with the financial crisis of 1991 in India, these two giants appear to have awakened. There has justifiably been much commentary about the effect of these changes, if they proceed all the way to full modernization, on geopolitics, struggles for natural resources such as oil, the U.S. economy, etc. But one potentially spectacular side effect of the development of these two countries is the general improvement in human welfare as over two billion people enter the modern freeway system of science and global commerce.

The economic historian Angus Maddison has built an impressive series of admittedly speculative estimates of economic growth going back almost 2000 years. Per capita GDP worldwide was stagnant from A.D. 0 to 1000, and grew 0.05 percent annually from 1000 to 1820. That increase is still microscopic – it amounts to a fifty percent total increase in the standard of living in over eight centuries, an achievement that in the most advanced countries now takes only two or three decades. In that latter interval there were already significant gaps between Western Europe, the initial outposts of what we now call the Anglosphere (Canada, the U.S., Australia, New Zealand) and Japan on the one hand (0.13 percent annually) and the rest of the world (0.03 percent) on the other. Since 1820 everything has been different, with the first group of nations leading the way from 1820 to 1998 at 1.67 percent annual growth and the latter group at 1.21 percent.
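To get a feel for what these small-sounding annual rates imply cumulatively, here is a minimal back-of-envelope sketch – an illustrative Python snippet of my own, not part of Maddison's work – that simply compounds the rates quoted above over the relevant spans:

```python
# Illustrative compounding of the annual growth rates quoted above.
# The rates are Maddison's; the arithmetic is just (1 + r) ** years.

def cumulative_multiple(annual_rate_pct: float, years: int) -> float:
    """Total multiple of the starting level after compounding."""
    return (1 + annual_rate_pct / 100) ** years

# 0.05% per year over the roughly 820 years from 1000 to 1820:
print(cumulative_multiple(0.05, 820))   # ~1.5, i.e. about a 50 percent total rise

# 1.67% and 1.21% per year over the 178 years from 1820 to 1998:
print(cumulative_multiple(1.67, 178))   # ~19x for the leading group
print(cumulative_multiple(1.21, 178))   # ~8.5x for the rest of the world
```

The point of the exercise is simply that a fraction of a percentage point in the annual rate, compounded over centuries, is the difference between near-stagnation and transformation.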

That growth has been achieved both through the creation of new scientific, technological and commercial knowledge and through systems of property rights and free trade in ideas that allow that knowledge to spread. When James Watt perfected the steam engine, suddenly the power humans could bring to bear was no longer limited by their own physical strength or that of animals. Many things that were impossible suddenly became simple. But the benefits of Watt’s engine accrued solely to people in his geographic vicinity – first Glasgow, then the U.K. – only temporarily. Soon this knowledge, which could be understood and duplicated by anyone, was raising productivity and living standards worldwide.

Knowledge is a peculiar good in that it is what economists call nonrivalrous – my consumption of it doesn’t limit your ability to consume it at all. If I buy a computer there is one less computer for you. But if I know how to solve a differential equation, that in no way leaves less knowledge for you. (The acquisition of the knowledge may be rivalrous – if, for example, there are only so many seats in the math classroom – but possession and use of the knowledge is generally not.) When one person produces knowledge everyone can benefit.

But in most of the industrial era the heavy lifting of discovering new things – scientific things such as Boyle’s Law, technological things such as the invention of the transistor, and commercial things such as the superiority in some instances of just-in-time inventory – has been done by a small percentage of the population. According to Charles Murray’s inventory of human achievement, for most of the last several centuries it has been Europeans, with Americans joining in the last century, who have accounted for most of the lasting achievement. (He looks at scientific and artistic achievements, and does not investigate commercial ones.) If one looks, for example, at Nobel Prize winners in science, the vast majority of them have worked in Europe or the U.S., with the U.S. dominating in the postwar period.

But that is changing. The firm Thomson Scientific publishes a data set called National Science Indicators. Sciencewatch has analyzed the most recent figures, and they indicate that the percentage of scientific papers published by scholars at U.S. universities has been declining at least since 1991, while the share from EU nations probably peaked between 1998 and 2000. (There are no data available on absolute numbers of papers, and U.S. papers tend to have higher impact as measured by citations.)

The balance of course is made up by Asia, whose share rose rapidly from 15.67% to 25.32% between 1990 and 2004. There is a natural tendency to think of this as a loss from the point of view of the U.S. – more science for them means less science for us. But “science” is not rivalrous. All we are seeing is a beneficial feedback cycle from greater prosperity in Asia to more scientific output. This progress will soon be, if it is not already, duplicated in technological and commercial knowledge. Collectively these things feed back into more economic growth, which feeds even more knowledge production. And that knowledge is available to everyone in societies where knowledge transmission is not restricted – by lack of education, or by government or culturally imposed isolation. The important thing is not to worry about percentage shares of an output that is growing at astonishing rates. (Consider the number of patents or research papers generated annually; each figure grows significantly over time.) Rather, the key task for a nation is to put its society in a position to use whatever knowledge is generated. This is as much a question of entrepreneurial freedom, low taxes, etc. as of funds devoted to scientific research.

The rise of India and China is not just a rise but an entry into a traffic flow, one already occupied by the people of the wealthy nations of North America, Australia, Europe and East Asia. In those societies the ability of communications technology, property rights and free speech to expand our knowledge base has resulted in an extraordinary transformation since the Industrial Revolution. The entry of Japan shortly after World War II and of Korea, Taiwan and others since about 1970 has accelerated knowledge production further. (Think just in recent months about the rapidly growing contributions of Korean scientists to cloning research.) The merging of billions of Indians and Chinese into this system has the potential not to diminish American horizons, but to expand them. The entry of so many people into the modern global/commercial/scientific system, with its emphasis on creating knowledge competitively and allowing it to travel freely, will dramatically increase the rate of human progress. Technological, medical and other advances will arrive more rapidly. More knowledge from them is not less for us, but more for everyone in a position to use it. The cure for cancer may ultimately be found not in an NIH lab but in Bangalore. As long as people are free to offer the medicine here, why would that bother us? The era when Europe was the primary knowledge society gave us the railroad, steam power, the modern university, the corporation and vaccines. The addition of the U.S. has yielded the information revolution, aviation and an array of medical breakthroughs too numerous to properly appreciate. What China and India will add is utterly unpredictable in its particulars but will undoubtedly be spectacular in the aggregate.

Geopolitically, the rise of new powers always poses challenges (witness the rise of Germany and the U.S. in the late 19th century or Japan in the early 20th), and China's rise is no sure thing. But it is impossible, and immoral in any event, to try to forestall the transformation of nations like India and China as they emerge from centuries of penury. Knitting such nations into the global system would seem to be the more urgent task, the more so because it allows humanity to benefit from their people’s creative energies.

Monday, August 01, 2005

Globalization and Getting Along

Tribal conflict – religious, ethnic, linguistic – dominates the headlines. Predictions of the end of history, especially this aspect of it, have proven premature to date. The old divisions that have forever sundered humankind are with us still, having foiled the Marxist prediction that communism would remake man into a new man, free of tribal passions and jealousies, motivated only by the common good. Like most Marxist renditions of the future, this one collapsed in a heap of democide and economic failure, with unreconstructed man left to go on butchering the other in Yugoslavia, Chechnya and elsewhere.

So what about globalization? The idea of increased transnational commercial ties as a soother of trans-tribal conflict is not new. A rapid expansion of trade and foreign investment has been argued to knit nations together, and give them incentives to barter more and slaughter less. Unfortunately, the peak of these trends, and those predictions, occurred in an earlier wave of globalization which crested just prior to 1914, whereupon subsequent events falsified them spectacularly.

But that was a question of war between states, and I am more interested in the possibility that commerce promotes peace not just among nations but among different tribal groups, perhaps even within the same nation. Every nation has its tribal conflicts. The variety of ethnicities and religions in the U.S., Canada, the U.K. and France is already dizzying. But other European countries have their own at best partially assimilated minorities, India has its castes, religions and ethnicities, and China has a huge variety of tiny tribes (Tibetans, most famously) hard up against its 90%-plus Han majority.

Is tribal (meaning ethnic and religious) fratricide a permanent part of the human condition? Certainly there is reason to be pessimistic. Sociobiologists such as Jared Diamond have argued that we are genetically prone to violence, and it is not hard to marry that literature with the extensive literature from cognitive psychology purporting to show ingrained suspicion of different tribal groups, and so to posit a permanent propensity for intertribal warfare.

But even sociobiologists do not generally sign on to complete genetic determinism. Rather, the principles of genetic biology set, via the evolution of the brain as outlined by cognitive psychology, the range of biologically feasible behavior. Anthropology and economics help us understand how the individual is guided within those parameters. This means that institutional structure is important – no institution, whether democracy, years of multicultural training, or a legal code emphasizing the rule of law, is guaranteed to make us get along. But institutions can help.

And so too with commerce. Greater opportunities for exchange should, other things equal, make it more costly for people to engage in tribal conflict, because it will disrupt beneficial intertribal trade. There is an opposing natural economic tendency toward intratribal trade, because the marginal cost of such trade is often lower owing to previous investments in language and cultural capital. It is easier for a Korean businessman in Los Angeles to look to other Koreans when he needs a loan or is looking to make a deal not because of any innate clannishness that Koreans (or any other community) might have, but because the ability to exchange information in one’s native language and against one’s cultural background is higher – more value can be created with less effort, other things equal, than when transacting across tribal lines.

But higher costs of intertribal trade have to be weighed against higher benefits. Networking with the larger global community means access to resources, microeconomic knowledge and opportunities that are not available when one trades only with one’s own. And so globalization – the systematic decline of barriers to trading across great distances, including cultural distance – can serve to lower conflict both across and within states.

On the other hand, globalization represents cultural free trade, in that people are exposed to cultural products (music videos, say) and practices (marrying without parental arrangement, e.g.) from all over the world. Those who benefit from the existing cultural order – those who own scarce cultural factors in cultural autarky – can be expected to resist cultural competition, even violently. This is probably why intellectuals, especially those sheltered from competition, and clerics rank so highly among anti-globalization activists. These segments of society can be expected to punish individuals who consume other cultural products and practices, both individually and, when they control it, through the state. If a particular ethnic group dominates the government, we would be surprised if that group’s members embraced freer economic and cultural engagement with the outside world. If a suppressed minority group demands it, intertribal conflict should follow. And not just for a moment. Ordinary economic protectionism lasts a long time – witness American sugar subsidies and textile tariffs – and cultural protectionism is probably no different.

But ultimately this latter effect is short-term, and the aforementioned gains to wider trading networks are long-term. Thus, the most likely outcome is short-term turbulence, including ethnic and religious conflict as a result of improved transportation and communications technology, followed ultimately by less (but certainly not zero!) tribal conflict owing to the further penetration of modern commerce into regions of the world hitherto sealed off from it.