Archive for March, 2009

“They can stuff their [food] credentials, ’cause it’s them that take the cash”

March 22, 2009

You know virtuous food (i.e., local/organic/sustainable or some other euphemism of the day) is doing well when Times articles about it move from the Fashion & Style section to Business. Saturday’s article chronicles the new momentum virtuous food – particularly organic food – has acquired thanks to a bit of White House boosterism. Andrew Martin writes:

Mr. Hirshberg and other sustainable-food activists are hoping that such actions are precursors to major changes in the way the federal government oversees the nation’s food supply and farms, changes that could significantly bolster demand for fresh, local and organic products. Already, they have offered plenty of ambitious ideas.

For instance, the celebrity chef Alice Waters recommends that the federal government triple its budget for school lunches to provide youngsters with healthier food. And the author Michael Pollan has called on President Obama to pursue a “reform of the entire food system” by focusing on a Pollan priority: diversified, regional food networks.

Still, some activists worry that their dreams of a less-processed American diet may soon collide with the realities of Washington and the financial gloom over much of the country. Even the Bush administration, reviled by many food activists, came to Washington intent on reforming farm subsidies, only to be slapped down by Congress.

The plot is familiar: intelligent foodies and farmers are trying to improve America’s diet with the help of a few Washington mavericks, only to be stymied by Congress and the evil agribusiness/food industry lobbyists who control it. Even the cast is familiar, at least on the pro-virtue side: Alice Waters. Michael Pollan. Mr. Hirshberg.

Wait–Mr. Hirshberg? That’s Gary Hirshberg, chief executive of Stonyfield Farm. Mr. Hirshberg, according to Martin’s report, is fired up about changing the system:

Back in Anaheim, Mr. Hirshberg, the head of Stonyfield Farm, said he, too, is optimistic that change is at hand. But he reminded the small crowd that the organic industry remains a “rounding error,” roughly 3 percent, of the overall food and beverage business.

“We’re at the starting line,” he says. “This is our job, our government. We’ve got to take it back.”

Do it, Mr. Hirshberg! Take back the government in the name of … multinational corporations. As Andrea Whitfill wrote last week in AlterNet, Stonyfield Farm is mostly owned by the Danone conglomerate, and Hirshberg happens to sit on the board of Dannon U.S.A. So is that talk about “taking back” the government in the name of virtuous food a case of change from within, or clever marketing to sell Danone’s higher-priced yogurt?

In fact, Mr. Hirshberg’s Stonyfield Farm is not alone in pushing virtuous product while being owned by a distinctly non-virtuous company. It seems that just about every organic or virtuous brand consumers are familiar with has been snapped up by large corporations. Take virtuous cereal brands, for example.

“Cereals, like milk, are one of the primary entrance points for use of organics,” said Lara Christenson of Spins, a market research group for the natural products industry, “which is pretty closely tied to children — health concerns, keeping pesticides, especially antibiotics, out of the diets of children. These large firms wanted to get a foothold in the natural and organic marketplace. Because of the mind-set of consumers, branding of these products has to be very different than traditional cereals.”

These corporate connections are often kept quiet. “There is frequently a backlash when a big cereal package-goods company buys a natural or organic company,” Christenson said. “I don’t want to say it’s manipulative, but consumers are led to believe these brands are pure, natural or organic brands. It’s very purposely done.”

A little more digging shows that General Mills owns Cascadian Farm; Barbara’s Bakery is owned by Weetabix, the leading British cereal company, which is owned by a private investment firm in England; Mother’s makes it clear that it is owned by Quaker Oats (which is owned by PepsiCo); Health Valley and Arrowhead Mills are owned by Hain Celestial Group, a natural food company traded on the NASDAQ, with H.J. Heinz owning 16 percent of that company.

Whitfill has more examples, and Allison Kilkenny has pictures. Virtuous cereal, virtuous drinks, virtuous snacks, virtuous dairy products – much of what’s on the shelf at your grocery store is made by Big Food (-owned) companies far, far removed from what most of us envision when we think about how virtuous food is (should be) produced.

So what’s the big deal? Food activists have been telling us that the best way to get the food market to change is to vote with our money, spending more of it on smaller quantities of “good” food. Now we find out that the main channel through which we consume virtuous food – brand-name packaged foods – diverts our monetary votes to the coffers of the same companies we are trying to “punish.” That is not, in itself, the end of the world – or even of food activism. It may still be the case that virtuous food, in its most common forms, is better for us than other offerings from the parent Big Food firms. It may still be the case that corporate warriors such as Mr. Hirshberg will effect real “change from within” in the conglomerates they serve. It may still be the case that we can support virtuous causes by giving our money to one division of Unilever over another. Yet whatever may happen, it is definitely time for food activists to drop the “holy crusade” rhetoric in which organic/local/sustainable is the banner of the good, and “corporate” is the mark of the evil, and never the twain shall meet.


The bonuses next time

March 19, 2009

Sir Charles at Cogitamus wins Best Extension of the “Invisible Hand” Metaphor. He also has a good post on the implications of public anger at the AIG bonuses and their recipients:

For years we have been fed the bullshit notion that our economic system provided “pay for performance,” rewarding greatly those who most deserved it. When one was so gauche as to engage in “class warfare” and criticize the compensation of CEOs and Wall Street titans, and the growing gap between them and the average worker, we were lectured to by the Randroids and libertarians, the business press, and most of all, by Republicans, that this was simply the invisible hand briskly stroking the deserving organ of commerce.

The AIG situation stands as a wonderfully emblematic moment, a veritable tsunami washing away this illusion. It is but one of many instances in recent years where business elites have chosen to enrich themselves despite their all too verifiable failure. But it is one so stark, so brazen, so jaw-droppingly, gob-smackingly outrageous that it has created a public furor that could be transformative if used correctly. Coming as it does on the heels of Madoff and Stanford, Lehman and Bear Stearns, the stock market meltdown, the real estate bubble, the grotesque manipulation of exotic financial instruments by our financier-illusionist class, the public has simply had enough. They are afraid and angry, bitter and put-upon.

It would be nice if the bonuses were the thing that finally broke public support for the vast injustices and inequalities of the American economy, but I am not as optimistic as Sir Charles on this point. It is true that Americans are outraged about the bonuses, and that their outrage has even prompted the government to act. Nevertheless, as more huge bonuses to managers of failing organizations loom on the horizon, there seems to be little popular resentment of the idea of million-dollar corporate bonuses as such.

It is important to distinguish between anger at bonuses given out “undeservedly,” and anger at inflated corporate pay in general. The outrage over the AIG bonuses is likely a mixture of these two different sentiments. Some people are outraged because “the notion that the ‘masters of the universe’ class is in any way worth what they are paid or otherwise worthy of our esteem and admiration” has not yet been destroyed by economic realities. Others, I believe, are only upset at the bonuses because their recipients didn’t earn them this time. This latter contingent would not have cared one bit what executive pay was like if the economy were in (seemingly) good shape.

Sir Charles “want[s] Obama to take advantage of this moment and use it as a cudgel with which to achieve progressive economic ends.” Inasmuch as curbing inflated executive pay is central to American progressive hopes, it is essential that these “winter progressives” (those who favor redistribution only when times are bad) not turn on the policies they support today just because AIG, Fannie, or any other business is making good profits tomorrow.

Sometimes a risk factor is just a risk factor

March 18, 2009

Possibly the least controversial statement to have come out of the Fat Acceptance and Health at Every Size movements is the idea that obesity is not a death sentence – in other words, that not every fat person is one calorie away from heart failure, diabetes, and the many other diseases linked (often tenuously) to obesity. Now, mainstream medicine is starting to accept this. As Canada.com reports,

One of Canada’s top obesity doctors says it’s time to stop recommending weight loss for everyone who meets official criteria for obesity. Dr. Arya Sharma says being obese doesn’t necessarily doom people to poor health and that weight loss recommendations should be targeted at those most at risk because of medical problems.

Many people who meet the body mass index criteria for obesity “are really not that sick at all,” says Sharma, chairman for cardiovascular obesity research and management at the University of Alberta and scientific director of the Canadian Obesity Network. “It’s not unusual to find someone come into your practice whose BMI is 30 or 32 (technically obese). This might be someone who is physically active, who is eating a good healthy diet. If you followed the guidelines to the letter you would be prescribing obesity treatment when there’s really no reason to do that, because they’re not medically obese.” …

His appeal comes as evidence begins to mount that a significant proportion of fat people are metabolically healthy. One in every three people who are obese — and half of those who are overweight — may be resistant to fat-related abnormalities that increase their risk of cardiovascular disease, according to new research from Albert Einstein College of Medicine in New York. … In [that] study, nearly 17 per cent of obese men and women possessed not one of the heart or metabolic abnormalities the researchers considered.

On the one hand, this is fairly obvious stuff. Many fat people remain fat despite leading a healthy lifestyle; and many thin people remain thin despite doing everything “wrong” with their diet and/or exercise. There has never been a perfect correspondence between (over)weight and health, and it’s about time the public discourse on obesity acknowledged that basic fact.

On the other hand, it may be premature to dismiss the effects of obesity on populations’ health. In the Albert Einstein College study mentioned by the article, 83% of obese participants had at least one heart or metabolic “abnormality” that may have been linked to obesity. Now, this absolutely does not imply that these 83% were sick because they were fat, or that the sample is representative of any larger population. However, it does raise the question of whether fat people (not all of whom are in poor health) are disproportionately sicker than thin people.

Instead of a conclusive answer, I have some tangentially-related old data to share. In 1993, the CDC’s Behavioral Risk Factor Surveillance System survey asked a large, nationally-representative sample of American adults to report their general health, height, and weight, among many other things. This crosstabulation shows the relationship between respondents’ classification as obese (by their BMI) and respondents’ self-reported general health status.

[Table: Health Crosstab – self-reported general health by obesity status, 1993 BRFSS]

A vast majority of obese (and non-obese) respondents reported their health as “good” or better. However, comparing the two BMI categories suggests a strong correlation between obesity and worse self-reported health. For instance, obese respondents were twice as likely as non-obese ones to report their health as “poor,” and half as likely to report their health as “excellent.” This relationship persisted in three-way crosstabs controlling for sex, race, education, and income.* While this analysis was carried out on unweighted cases, weighting the data set by a product of poststratification and design weights did not alter or weaken this relationship.**
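Purely as an illustration of the method, here is a minimal pandas sketch of this kind of weighted crosstab – the rows and column names below are made up, not the actual BRFSS variables:

```python
import pandas as pd

# Toy stand-in for the 1993 BRFSS file; values and column names are
# illustrative only, not the survey's actual variables.
df = pd.DataFrame({
    "bmi":    [24.0, 31.5, 28.0, 33.2, 22.1, 30.4],
    "health": ["excellent", "poor", "very good", "fair", "good", "good"],
    "weight": [1.2, 0.8, 1.0, 1.1, 0.9, 1.3],  # poststratification x design weight
})

# Classify respondents as obese using the standard BMI >= 30 cutoff.
df["obese"] = df["bmi"] >= 30

# Unweighted crosstab: row proportions of self-reported health by obesity.
unweighted = pd.crosstab(df["obese"], df["health"], normalize="index")

# Weighted crosstab: same layout, but each respondent counts in
# proportion to his or her survey weight.
weighted = pd.crosstab(df["obese"], df["health"],
                       values=df["weight"], aggfunc="sum",
                       normalize="index").fillna(0)

print(unweighted.round(2))
print(weighted.round(2))
```

With the real survey file one would also want design-based standard errors, which a plain crosstab does not provide.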

The table raises as many questions as it answers. It appears true that in 1993, obese people were more likely to report being in poor health than non-obese people. However, one must ask:

  • Has this relationship persisted over time?
  • Does this relationship persist under different statistical methods?
  • To what extent does this relationship exist because obese respondents perceive their obesity as a health problem, independent of any diseases it may cause?
  • By extension, does this relationship persist when controlling for body image?
  • If the relationship is robust in various years, under various methods of analysis, and while controlling for body image, then what causes obese respondents to be more likely to self-report poor health?

    As this (overly) simple analysis suggests, the effect of obesity on the public health is not a closed case. While many people classified as obese lead healthy lives and suffer from no diseases, it remains to be seen whether the obese are still more disposed to be in poor health than the non-obese, and what (if any) maladies of the former are actually caused by their obesity.

First, they came for the economists…

    March 16, 2009

Jim Manzi is one of the many commentators on the economic crisis who are using it as an opportunity to question the discipline of economics:

    If Mankiw’s list is the best economics can do, it sure seems like a naked emperor moment to me. Where’s the beef?

    My challenge would be simple: please list 14 useful, non-obvious predictive rules that economics provides that have survived rigorous, replicated falsification trials.

    If you were to provide this challenge to physics or biology, it would be easy to come up with 1,400. Hence, human invention of aircraft, space travel, mobile phones, antibiotics, vaccines, MRI scans, the internal combustion engine and so forth. This – not the attempt to create pressure on public officials to support the policy preferences of most economics professors – is why actual science education is so important.

    Manzi is not alone. While public sentiment and political action are pulling in the direction of Keynesian interventionism, the discipline of economics is still rolling on the track laid down by the neoclassical gang. The Times’ Patricia Cohen writes:

    Prominent economics professors say their academic discipline isn’t shifting nearly as much as some people might think. Free market theory, mathematical models and hostility to government regulation still reign in most economics departments at colleges and universities around the country. True, some new approaches have been explored in recent years, particularly by behavioral economists who argue that human psychology is a crucial element in economic decision making. But the belief that people make rational economic decisions and the market automatically adjusts to respond to them still prevails.

    The failure of economics to respond credibly and quickly to the unfolding crisis has led some, Manzi among them, to criticize the field as a whole. In comment threads such as these, Joes of all trades have emerged to put in their two utils about how economics is wrong, useless, or “not real.” Some choice quotes:

    The sole determinant of everything in economics is human behavior, meaning that economics is a farce.

    Economists serve a useful role in society. They help fill an oversupply of endowed chairs at prestigious universities, they give “street cred” to the usury of bankers, and they are able alchemists for the aristocracy.

Economics is not a science. It’s not even a pseudo-science. It’s like trying to quantify lust or rage in the form of equations. Your rants about your fellow economists are those of a shaman accusing animists of being ignorant.

Most economists are in the propaganda business. That includes evildoers Greenspan and Bernanke. Economists must bear blame for this crisis.

    In the words of a Chick tract, when it came to economists foreseeing or averting the economic crisis, somebody goofed. But this should not condemn the entire enterprise of economics to the scrap heap. Whatever its faults have been in recent years, economics can still claim at least these few defenses against swarming critics of the discipline as such.

    1. Social science is complex. Recall Manzi’s “challenge” to economics: Manzi congratulates “real” science – that is, the physical sciences – on producing “useful, non-obvious predictive rules.” Why, Manzi asks, can’t economics produce such rules, too? Distilling reality into predictive rules is hard, and it likely gets harder as academics lop off greater and greater slices of reality to explain. By virtue of being “purer” than the social sciences, the hard sciences have limited themselves to explaining increasingly limited portions of reality, and even here their job isn’t done yet. So the least “pure” social science has before it the grandest task: to model systems which have been studied by layer upon layer of “purer” hard sciences. Thus, if economics hasn’t come up with infallible laws yet, perhaps it is because of the inherent complexity of markets and market behavior.

2. Social science is probabilistic. The above suggests that given enough time and effort, economics will produce infallible laws. This is not entirely reasonable. Any useful (i.e., sufficiently simple) social scientific theory will have to be probabilistic, even given perfect information about how its subject works. Thus, economic theories cannot say that A will lead to B, but only that, all else being equal, A will lead to an increased likelihood of B. By Manzi’s standard, then, any social scientific theory can be falsified (in the strict sense), given enough trials – individual counterexamples are guaranteed to turn up – but such falsification describes neither the theory’s accuracy nor its usefulness.
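A toy simulation (my example, not anyone’s published one) makes the point concrete: a probabilistic rule that is true by construction still “fails” in a large share of individual trials:

```python
import random

# A toy illustration: suppose the "law" is that A raises the probability
# of B from 30% to 60%. The law is true by construction, yet individual
# trials contradict it all the time.
random.seed(0)

def b_occurs(a: bool) -> bool:
    """Simulate whether B occurs, given whether A occurred."""
    return random.random() < (0.6 if a else 0.3)

n = 100_000
p_b_given_a = sum(b_occurs(True) for _ in range(n)) / n
p_b_given_not_a = sum(b_occurs(False) for _ in range(n)) / n

print(f"P(B | A)     = {p_b_given_a:.3f}")      # close to 0.6
print(f"P(B | not A) = {p_b_given_not_a:.3f}")  # close to 0.3

# In roughly 40% of trials, A is not followed by B. A strict
# falsificationist could "refute" the law with any one of those trials,
# yet the aggregate relationship is real, stable, and useful.
```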

3. Social science dramatically alters that which it studies. Social science is complex and probabilistic in large part because its effects on its subjects are so profound. People are not bacteria in a petri dish – they are consumers of social science as much as they are its subjects. Manzi asks economics to produce rules that are non-obvious and predictive. Well, obviousness is a matter of perception – what is obvious to one is questionable to another, and impossible to a third. In economics, mercantilism was an innovative theory that became obvious and was later discarded. How many of the “obvious” things Manzi knows about the market had to be codified and disseminated by economists? As for the predictive quality of economic theories, such theories exist in a complicated feedback loop with the phenomena they study, with successful or famous theories introducing great distortions in social reality. A theory that has predictive power today may not retain it in the future, once word of the theory gets around to workers, consumers, investors, or the government.

    The badge of a Ph.D. in economics should not shield its wearers from well-deserved criticism. However, disagreement with a particular thinker or theory should not be cultivated into a rejection of economics as a discipline.

    (h/t to beeveedee)

    The bailouts next time

    March 15, 2009

    Matt Zeitlin proposes a tiered system of regulation for financial institutions, on the premise that the biggest banks should not be taking huge risks just to make the wealthy wealthier.

    So, looking forward to how we want to regulate the financial sector, a few things seem obvious.

One, impose a simple rule on financial institutions. Either, you can be big — so big that your insolvency would threaten the collapse of the world economy — and not do anything risky or you can be small and do whatever the hell you want. Another way to thread the needle here would be to require banks like Citigroup, or anything that’s “too big to fail,” to pay into a super-FDIC, essentially to buy bailout insurance, so that if and when they need to be bailed out, it’s not a huge, sudden expense on the taxpayer. Or you simply let hedge funds do all the exotic stuff and tell banks to, well, be banks. Or, hell, you could just not let financial institutions get too big. For example, you could say that investment banks have to be partnerships and not let them become publicly traded companies (and thus get so big) or, on a smaller scale, just limit how much leverage can be used.

The merit of this proposal (or similar proposals) lies in how well it matches what the general public expects of banks. Many bank customers are just looking for a place to park their checking account, and may not necessarily care if their savings account earns .05% instead of .06% interest. What people do care about is 1. not losing their money, as in the Great Depression, and 2. not seeing billions of tax dollars go to prop up tottering banks, as in the present crisis. On a macroeconomic scale, however, bans on risky investments for large banks might depress economic activity, since less money will be moving around in high-risk transactions. High economic growth is, unfortunately, correlated with high risk, and the challenge of regulating financial institutions lies in striking a good balance between fostering growth and inhibiting risk.

I think the best part of Zeitlin’s suggestion is the “super-FDIC” for the biggest banks – the ones most likely to serve the general public rather than niches of well-educated, risk-loving investors. A super-FDIC (an FDIC on steroids?) would offer a bank and its customers increased financial security, in exchange for prohibiting the bank from using deposits in unacceptably risky ways. Mandating membership in such an institution for banks above a certain size would be a nightmare: legislating the cutoff line and policing banks that might cross it from year to year would be just two likely difficulties in that scenario. A better approach might be to let banks above a certain size voluntarily join this ULTRA FDIC, and to encourage member banks to advertise their membership to their customers. This tweak to Zeitlin’s proposal does not eliminate entirely the plan’s dampening of economic activity, but it seems to achieve the goal of risk reduction in a way that might be more agreeable to legislators, and less constraining of capital.

    No percentage for Truth is given because the Daily Value has not been established

    March 13, 2009

    Jacob Gershman, a man who eats Cocoa Pebbles for dinner, reports on the latest trick food manufacturers have used to make sugary cereal (and similar foods) seem healthy:

    The fiber in Cocoa Pebbles comes from a little-known ingredient called polydextrose, which is synthesized from glucose and sorbitol, a low-calorie carbohydrate. Polydextrose is one of several newfangled fiber additives (including inulin and maltodextrin) showing up in dairy and baked-goods products that previously had little to no fiber. Recent FDA approvals have given manufacturers a green light to add polydextrose to a much broader range of products than previously permitted, allowing food companies to entice health-conscious consumers who normally crinkle their noses at high-fiber products due to the coarse and bitter taste of the old-fashioned roughage. These fiber additives serve dual purposes—they can serve as bulking agents to make reduced-calorie products taste better, such as the case with Breyers fat-free ice cream, and carry an added appeal to consumers by showing up as dietary fiber on food labels.

With the First Lady exhorting Americans to eat healthy and nutritious foods, many may turn to the nutrition facts label to help them distinguish between virtuous and non-virtuous grub. Since 1994, the Nutrition Labeling and Education Act (NLEA) has ensured that this scientific-looking chart appears on nearly all foods Americans consume. However, as Pebbles-gate and other food crises show, the NLEA may have lulled consumers into a false sense of security about knowing what is in their food.

Some of the most severe food crises in recent months have occurred because of ingredients that were not on the label. The discovery of Salmonella contamination in countless batches of peanut butter could not have been foreseen by consumers reading “S. enterica – 300% DV” off of the nutrition facts label. Similarly, the melamine with which food products have been adulterated in China would have shown up on the label as nothing more suspicious than a few extra grams of protein. However, the problems of the nutrition facts label go beyond mere omissions. By evaluating all foods on a short list of uniform criteria, the label fosters two dangerous attitudes: seeing all (or most) foods as interchangeable, and evaluating the virtue of foods on just one or two of the nutrients the label lists.

Nutrition facts labels are often used with the idea that comparing two foods is as simple as comparing their labels. Now, when one is crafting a diet based on macronutrient ratios, sodium limitations, or micronutrient requirements, the information on nutrition labels can be useful, accurate, and relevant. But what about other aspects of food quality, safety, wholesomeness, sustainability, or even taste? Diet Coke boasts that it is 99% water, and its label is all but indistinguishable from water’s; meal replacement bars are available which mimic – on the nutrition facts label – a meal made with real, pronounceable ingredients; and Cocoa Pebbles now come with added fiber to emulate either granola or actual pebbles. In each case, the marketing pitch for the processed food is that it is nutritionally similar to unprocessed foods people might consume instead. Meanwhile, the nutrition facts labels on the processed foods give consumers no way to place these claims in their proper nutritional and environmental contexts.

In theory, consumers can decide what to eat based on some ideal balance of Vitamin C, calcium, and cholesterol – three nutrients which appear on the nutrition facts label. In practice, however, much of the attention consumers devote to reading nutrition labels gravitates to just two nutrients: fats and carbohydrates. Many fad diets carve out their niche by coming up with a new way to restrict the intake of either fats or carbs. This page offers a much more detailed, if somewhat curmudgeonly, description of the myths and dangers of such diets. In short, consumers seldom use all the information provided on the nutrition facts label to make food choices, tending to focus on two or three nutrients – sometimes even just one – and ignoring the rest.

    Nutrition facts labels on food may well have changed the way America eats. Armed with precise knowledge of just a few attributes of each food, we are able to approach our diet as a simple linear programming problem. When that gets too time-consuming, we can take the shortcut to a healthy diet by merely checking if a food is low-fat or low-carb. The end result is that the well-meaning NLEA has created artificial demand for artificial foods. This suggests that future food policy should not only attempt to give consumers more information, but should also seek to summarize that information in ways that more accurately reflect the values – nutritional and perhaps also environmental – of the foods we are asked to buy.
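To see how literally this can be taken, here is a toy diet-as-linear-program in Python; the foods, nutrient contents, and daily requirements are all invented for illustration:

```python
# Toy "diet as linear program": minimize fat intake subject to made-up
# nutrient floors. All foods and numbers here are invented.
from scipy.optimize import linprog

foods   = ["cereal", "milk", "spinach"]
fat     = [1.0, 2.4, 0.1]   # g of fat per serving (the objective)
fiber   = [3.0, 0.0, 2.2]   # g of fiber per serving
protein = [2.0, 8.0, 2.9]   # g of protein per serving

# linprog takes constraints as A_ub @ x <= b_ub, so nutrient minimums
# ("at least 25 g fiber, 50 g protein") are written with negated signs.
A_ub = [[-f for f in fiber], [-p for p in protein]]
b_ub = [-25.0, -50.0]

result = linprog(c=fat, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None)] * len(foods))

for food, servings in zip(foods, result.x):
    print(f"{food}: {servings:.2f} servings/day")
```

On these made-up numbers the optimizer happily prescribes an all-spinach diet – “optimal” by the label’s lights and absurd by any other, which is rather the point.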

    Guns stop crimes, but which ones?

    March 12, 2009

The Alabama shooting spree, in which a gunman killed 10 before turning the gun on himself, is not only tragic, but also damaging to the widely-held notion that guns stop crime – all crime. In this case, the criminal committed murder after murder unmolested by law-abiding gun owners, even in a state that consistently ranks highly for gun ownership rates. How highly? A 2008 post at a gun-rights site puts the proportion of Alabama residents who own guns at about 66%, while a Reuters dispatch of the same year gives a figure of 57.2% for Alabama households.

These statistics illustrate the finding, often ignored in debates over gun control, that while widespread gun ownership might stop some crimes, it fails to stop – and may even exacerbate – others. If every citizen were armed and instructed to remain in their homes, burglary rates would probably plummet to insignificance. In general, increased gun ownership might decrease the rates of crimes in which victims are chosen based on their inability to defend themselves, including muggings and burglaries (although see contradicting results here). But what gun ownership has failed to do, some claim, is reduce homicides. Miller, Azrael, and Hemenway write, in the 2002 volume of the American Journal of Public Health,

    Table 3 compares the actual number of homicide victims between 1988 and 1997 in the states with the 4 lowest and 6 highest firearm ownership rates. … In the “high gun states,” 21 148 individuals were homicide victims, compared with 7266 in the “low gun states”. For every age group of at least 5 years minimum age, people living in the high-gun states were more than 2.5 times more likely than those in the low-gun states to become homicide victims. These results were largely driven by higher rates of gun-related homicide, although rates of non–gun-related homicide were also somewhat higher in high-gun states. For all age groups, people living in high-gun states were 2.9 times more likely to die in a homicide; they were 4.2 times more likely to die in a gun-related homicide and 1.6 times more likely to die in a non–gun-related homicide.

Opponents of gun control dispute the idea that more guns lead to more homicides, citing correlations between higher gun ownership and lower gun deaths at the national and cross-national levels. Setting aside the methodological difficulties of national and cross-national comparisons, there are two reasons why we should not expect increased gun ownership to reduce premeditated murders of the kind recently committed in Alabama and elsewhere: the nature of the crime and the bystander effect.

    Both gun owners and non-owners are legitimately concerned about “putting guns in the hands of criminals.” In a premeditated murder, the gun is literally already in the hands of the criminal. A citizen targeted by a premeditated murder must either have his or her gun out or else must quickly draw it to stand a chance against a homicidal gunman. But even in a state as armed as Alabama, shooting sprees don’t involve any OK Corral gunfights between victim and criminal. In the exceedingly rare cases where gunmen are taken down by citizens, it is usually a bystander and not someone being shot at who fires back.

The marks of a gunman would typically rely on others to intervene and stop (or kill) their assailant. But the bystander effect suggests that the more people who witness or are in the vicinity of a crime, the less likely any one of them is to help the victim. Even if the bystanders are armed to the teeth, they seldom attack the gunman. Here, the psychological explanation is supplemented by a social norm: in America, most (but not all) people still count on the police to respond to violence. Arming the citizenry might empower each person to protect themselves, but not to come to the aid of others.

Increased gun ownership may have made deadly shooting sprees – and other, less-publicized homicides – more likely. In the not-so-distant past, commentators on the Virginia Tech shootings suggested that, had the students been armed, that tragedy could have been averted. As the victims’ families mourn in one of America’s most armed states, we can only hope that further mentions of this “solution” will fall on deaf ears.

    (Don’t Give Me That) Old-Time Religion

    March 11, 2009

    By way of Jack Cafferty, a recent study purports to show that lack of religiosity, while still uncommon in America, is on the rise:

    More Americans are saying they have no religion — according to a wide ranging study done by Trinity College.

    The survey shows 15 percent of those polled say they have no religion; that’s up from about eight percent in 1990. Northern New England and the Pacific Northwest are the least religious regions. And the number of Americans with no religion rose in every single state.

    Organized religion seems to be playing a smaller role in many people’s lives. 30 percent of married couples say they didn’t have a religious wedding ceremony, and 27 percent say they don’t want a religious funeral.

    Nonetheless almost 70 percent of those surveyed say they believe there is a God; and another 12 percent say they believe in a higher power but not the God of traditional organized religions.

    Some suggest that the rise in evangelical Christianity is actually contributing to the rejection of religion by other Americans. The survey shows about one in three are evangelicals. The number of evangelicals is actually increasing while the number of Christians overall is declining.

    The hypothesis that evangelical Christians are crowding out others from organized religion is an interesting one, and not too common in explanations of religiosity’s decline. Typically, religious dynamics are modeled as reacting to non-religious social processes. The classic example of such a model is the argument that increased scientific knowledge depresses religiosity. Another hypothesis (discussed by Jamelle here) is that the rise of the welfare state engenders a decline in religiosity. In these and other models, religiosity is exogenously determined: the religious behavior of any given individual is held to be affected by scientific or economic trends which lie beyond the influence of that individual.

The “evangelical crowd-out” model, if I may call it that*, is a new sort of beast. Now, endogenous models of religiosity – that is, theories in which one person’s religiosity is determined by another’s – are not entirely novel. In the economics of religion, Laurence Iannaccone and his colleagues have for over a decade been advancing models in which religiosity is explained either as the outcome of an individual’s past religiosity, or of the religiosity of that individual’s fellow believers. The evangelical crowd-out hypothesis separates itself from this distinguished line of research in its lack of an explicit rational-choice foundation.

    To be sure, there may be rational or quasi-rational reasons for other Christians to let go of their faith in response to the evangelicals’ rise. One possibility is that evangelicals’ prominence is increasing the stigma associated with being a Christian. Another possibility is that the risk of being confused for an evangelical grows as evangelicals form a larger proportion of all Christians. But this does not seem to be the argument here. The “crowd-out” non-evangelical Christians are experiencing seems to be based less in spiritual economics and more in an ethical response to evangelical Christianity. The scope, too, is greater than the one in traditional economics-of-religion models, which deal with households or single sects. Here, the religious practices of another group – one that lapsed believers may seldom interact with – are affecting these (ex-)believers’ own religiosity.

I have been describing the argument that evangelicals are crowding out non-evangelical Christians from organized religion as a hypothesis, a model, a theory. In truth, either I am not familiar with a formal sociological theory of religion that would encompass this idea, or no such theory yet exists. If you know of an actual model which depicts individuals’ religiosity as a function of socially distant religious practices, I would be glad to hear about it. Otherwise, do suggest what such a model might look like. How should the social sciences generalize from the observation that evangelical Christianity is driving non-evangelicals out of the broader religion?

My own shabby attempt: ethically-motivated exit from organized religion should increase with the seriousness and pervasiveness of other believers’ misconduct, and should decrease with the social distance from the misbehaving other believers. This formulation explains why, for instance, Catholics might have left the church over the misbehavior of their priests, but not over William Bennett’s gambling; and why Protestant exits were probably not motivated by either one. However, “ethically-motivated exit” demands a clear definition, and the proposition as a whole may be too limited in scope.
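For concreteness, one way to write that proposition down – this is my own illustrative notation, not an established model – is:

```latex
% Propensity of individual i to exit organized religion in response to
% misconduct by religious group g (illustrative notation only):
\[
  \Pr(\mathrm{exit}_i \mid g) \;=\; f\!\left( \frac{S_g \, P_g}{d(i,g)} \right),
  \qquad f \text{ increasing},
\]
% where S_g is the seriousness of g's misconduct, P_g its pervasiveness
% within g, and d(i,g) the social distance between individual i and group g.
```

Whether these terms should multiply or add, and whether exit is better modeled as a threshold process than a smooth function, are exactly the kinds of questions a formal theory would have to settle.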

    * Emerging Christian attributes the idea to Mark Silk, but I have been unable to find his (or anyone else’s) original statement of this theory. Silk is quoted at greater length on the topic here.

    Mercantilism and intolerance, hard at work

    March 10, 2009

    Word is out that the economic stimulus may – horror of horrors – let undocumented immigrant workers fill as many as 300,000 jobs. As USA Today reports,

    Studies by two conservative think tanks estimate immigrants in the United States illegally could take 300,000 construction jobs, or 15% of the 2 million jobs that new taxpayer-financed projects are predicted to create.

    They fault Congress for failing to require that employers certify legal immigration status of workers before hiring by using a Department of Homeland Security program called E-Verify. The program allows employers to check the validity of Social Security numbers provided by new hires. It is available to employers on a voluntary basis.

    This news comes hot on the heels of Republican accusations that illegal immigrants would be eligible to receive stimulus checks. Predictably, conservatives are crying foul. Their disapproval seems to rest on two pillars: a bastardized mercantilism, and a lingering resentment of immigrants in general.

    The first conservative objection to seeing stimulus money go to (illegal) immigrants is the more surprising one. Michelle Malkin got an early start on making this argument:

    What will the illegal aliens do with their rebates? Remittances, baby, remittances.

    [quoted:]
    “This package will stimulate one thing for certain: more illegal immigration,” said [Rep. Tom] Tancredo. “It’s just the latest unfortunate example of American workers footing the bill for illegal aliens.”

    The bill would allow so-called “Resident Aliens” to receive rebate checks. The Treasury department classifies someone as a “Resident Alien” based on how much time that person has spent in the United States. No proof of legal presence, however, is required. The IRS’ explanation of the term can be found at: http://www.irs.gov/taxtopics/tc851.html

    “Worse, a large portion of this money will just be sent back to the home countries of illegal aliens,” concluded Tancredo. “So it might stimulate someone’s economy – just not ours.”

    I am genuinely surprised that mercantilism, an economic theory left for dead in the 19th century, is making a resurgence of sorts in conservatives’ critique of immigration. To be fair, there is big money in remittances, not only from migrant workers in the US but also from immigrants around the globe. The fear that (illegal) immigrants will send “American” money from their stimulus checks/wages to Mexico is not, therefore, completely illogical. What is illogical is the conclusion that Americans would never see that money again.

    Mercantilism turned out to be wrong in the first place because the amount of trade in the world economy isn’t fixed, and because trade flows turned out to be much more complex and circular than mercantilists could imagine. In the present context, a dollar sent in remittance to a household in Mexico might be used to purchase Chinese goods sold in Tucson, Arizona. Or, more realistically, it might be used in such a way that it frees up other money – in Mexico or elsewhere – to flow back into the American economy.

Meanwhile, the internal mercantilism of sorts that has conservatives railing against immigrant workers in the first place is just as misguided. Suppose those 300,000 illegal immigrants materialized to take up “American” jobs. This does not mean that 300,000 Americans will have been put out of work. Illegal immigrants, like any workers, require the services of retailers, professionals, small businesses of every description, workers in the skilled trades, in manufacturing, in government, and so on. Employing any 300,000 people will create or sustain many jobs, including many which could accommodate Americans who are now unemployed. The number of jobs in a country, like the volume of international trade, is never a fixed amount.

    Employing undocumented workers – even sending them tax rebate checks – will eventually have some stimulative effect on the U.S. economy. Opponents of either approach would do well to move beyond claims of a permanent job shortage and a remittance pipeline draining money from America to other countries. Of course, that would probably require them to move on from the second pillar of anti-immigrant sentiment. I hope no one is surprised at the revelation that this pillar consists of simple racial intolerance.

    Yet efforts to prevent (illegal) immigrants from seeing one cent of the recovery package are not built on intolerance alone. They are built on intolerance and questionable economics.

    Rational irresponsibility

    March 9, 2009

    Matthew Yglesias is catching flak for suggesting that “irresponsible borrowers” should not be made to suffer for defaulting on their mortgages. Yglesias writes:

    I just don’t see how more than a tiny fraction of [the blame for mortgage defaults] could possibl[y] adhere to our electrician or teacher or secretary who’s decided, basically, that the financial services professionals and government regulators know what they’re doing. Now, could she have known better? Sure. She could have been reading Dean Baker and Paul Krugman and others. The idea that this lending was all being undertaken on a false premise that a nationwide housing bust was impossible wasn’t a highly guarded secret. I was, for example, familiar with the chart above and with the analysis suggesting that a bust was, in fact, likely. And I believed that analysis. But at the same time, I write about U.S. public policy debates for a living. If there’s a dissident line of thinking that, despite its general unpopularity, is popular among left-of-center economists—well, that’s the kind of thing I know a lot about. But our nurse? Why would she know?

The conservative response has been to chide Yglesias for “discounting the common sense of borrowers” and putting it above borrowers’ ability to “understand their own situation”. By this line of reasoning, some of the defaulted mortgages were taken out by people who knew they could not pay them down. The irresponsibility of such behavior is beyond question, and thus should not be rewarded in any homeowners’ bailout.

Or so it seems. Beneath its surface of common sense, this argument amounts to stating that when it comes to homeownership, no (sizeable) risk is acceptable. One should take only a mortgage one could afford, or, better yet, afford twice over. Never mind that homeownership can mean attaining economic, social, and cultural capital – you know, capital that gives future generations a chance to climb up the SES ladder.

    In business, finance, investing, and allied pursuits, risk-taking is typically rewarded and hailed as the engine of progress, even when it involves seemingly irresponsible decisions. Recall that Warren Buffett, the living embodiment of calculated and successful risk-taking, advised Americans this past October to invest in tanking stocks. His advice is worth recounting at length:

    You might think it would have been impossible for an investor to lose money during a century marked by such an extraordinary gain. But some investors did. The hapless ones bought stocks only when they felt comfort in doing so and then proceeded to sell when the headlines made them queasy.

    Today people who hold cash equivalents feel comfortable. They shouldn’t. They have opted for a terrible long-term asset, one that pays virtually nothing and is certain to depreciate in value. Indeed, the policies that government will follow in its efforts to alleviate the current crisis will probably prove inflationary and therefore accelerate declines in the real value of cash accounts.

    Equities will almost certainly outperform cash over the next decade, probably by a substantial degree. Those investors who cling now to cash are betting they can efficiently time their move away from it later. In waiting for the comfort of good news, they are ignoring Wayne Gretzky’s advice: “I skate to where the puck is going to be, not to where it has been.”

    Buffett wasn’t talking about investing in a home, of course, but signing up for a presently-unaffordable mortgage does not seem so distant from buying up stocks in the midst of an economic downturn. In fact, less than two years ago, Peter Coy at BusinessWeek reported that the best mortgages for homeowners were the most “toxic” ones – assuming homeowners could act rationally. In at least one form, shouldering “irresponsible” risk in one’s mortgage was considered by the financial establishment a reasonable investment. Viewed through the broader prism of inequality, taking on risks to become a homeowner can be construed as perfectly rational given the immense advantages which have historically accrued to homeowners and their descendants.

There are two reasons not to punish the “irresponsible” contingent of beleaguered homeowners struggling to pay their mortgages. One is the mix of empathy and consumer-rights principles Yglesias invokes. The other reason, obscured by the first, is that the pursuit of homeownership, however risky, might be as close as real-world economic actors get to rationality in an environment where owning a home is the best and quickest path to the good life. “Rewarding irresponsibility,” in this case, becomes the much more sensible “rewarding rationality.”