Corporations, the environment, and Atlas Shrugged

Corporations are, by their nature, the enemies of sustainable environments. That’s not because corporations are from the “dark side,” or because the people in them don’t care about the environment. It’s because corporations are legally structured to meet certain goals: specifically, to produce returns to shareholders (dividends from corporate profits) and to increase shareholder value (stock price). The realities of corporate finance and economics require corporations to focus on returning dividends or growth to equity on a regular basis, and for most corporations that means quarterly.

The production of quarterly growth or profits to distribute to shareholders is incompatible, in unavoidable ways, with environmental stewardship and sustainability, and in many cases, ironically, even with long-term growth and profits. So unless there are other mechanisms in place to make sure the environment is protected and carefully used to the advantage of all the “stakeholders” in society, it won’t happen. You can’t blame corporations for failing to do something they were specifically designed not to do.

The typical corporation does not contain tools or mechanisms that allow it to focus on the long term, or on the social or natural environment, beyond the thing it was particularly chartered to do: make a specific product for a specific market. That doesn’t mean corporations are evil; it just means they’re limited. Society needs other organizations that are designed to address these issues.

Evil only comes into the picture when corporations or their champions try to prevent anyone else from speaking on behalf of these other social interests the corporations were not designed to address. It gets a little sketchy when corporations try to use government to shut down unions, or to let them drive the agenda on health, safety, and environmental issues that are clearly at odds with the short-term growth and profit goals they were designed to pursue.

But again, you can’t blame corporations for doing what they were built to do. No matter how sympathetic to workers’ needs or the environment a corporate manager may personally be, his job requires him to put growth and profits first. That’s called fiduciary duty. Managers are legally liable if they don’t choose the shareholders’ interests over all others.

So a corporation’s managers and spokespeople should be expected to object when someone proposes a union, consumer protections, or environmental regulations that will reduce their quarterly growth or profits. But this isn’t the end of the conversation; it’s just the beginning. That’s their job, and it’s the job of society to make sure that theirs isn’t the only voice in the conversation.

It should be a social dialog, not a corporate monolog.

The Ayn Randian, Atlas Shrugged assumption that there are no problems that cannot be solved by rationally self-interested individuals negotiating with each other denies the possibility of this social dialog. The Atlas Shrugged point of view assumes that if steel industrialist Hank Rearden discovered that the building of a particular factory or the siting of a particular mine would, for example, lead to the extinction of a species of owl, Hank would do the right thing. Actually, it defines “the right thing” as whatever Hank chooses to do.

If Hank happens to be a bird-lover, or values biodiversity, then he is free to (and right to) move the factory or mine somewhere else, or find a different way to get the job done. If, however, Hank doesn’t really give a crap about birds, he’s equally right to go ahead and wipe out the owls. “People” aren’t able, in the Atlas Shrugged world, to legitimately argue that Hank’s actions are detrimental to “society,” because those ideas are collectivist, and therefore off limits.

The reason this is important is that, underneath it all, a lot of the people arguing for the political right these days wish they were living in the post-strike world of Atlas Shrugged. They want to live in a world where there is no legitimate voice to speak against the voices of corporations. This is ironic, because many of these people are politicians, and they’d be the first to go if the Randian revolution ever actually took place.

The other thing that doesn’t work about the anti-collectivist Ayn Rand world-view, when you try to apply it to reality, is that concentration of wealth happens. And it happens, ironically, in a couple of very collectivist ways.

The first collective that the Atlas Shrugged world-view empowers is the family. And even Rand recognized this. James Taggart inherits the railroad empire of his grandfather, and is not worthy of it. Okay, Francisco is the “triumph of the D’Anconias,” but the contrast between James and Francisco, which Rand repeatedly calls attention to, only proves that the inheritance game is a crapshoot. Even Andrew Carnegie knew this, and recommended against leaving fortunes to your children in his “Gospel of Wealth,” although he did not take his own advice in the end.

The second, more dangerous collective that’s enabled by Atlas Shrugged involves the way real-world wealth is often accumulated. In the story-world, of course, fortunes are always and only made by creativity, discovery, or working harder than your competitors. But when you try to apply Rand’s hyper-individualism to this world, something else happens.

Imagine a world where everyone was only allowed to act in his individual, personal interest. Ignore for the moment that corporations are groups of people (collectives) that legally “pretend” to be a single individual. Imagine that you can only act for your own benefit — that society rejects group interests, and only recognizes individuals.

What would this world look like? Going back to the Hank vs. The Owls scenario: there would be no way a group of people could argue that preventing owl extinction was worth more than a factory or mine. There would also be no way to say that workers should be paid a particular wage, or that the environment should be treated in a particular way. Why not? Because the pain is distributed, and the profits are concentrated.

This is the world that Atlas Shrugged’s anti-collectivist ideology leads to. Say a corporation (a legal individual) wants to dump its chemical waste into a river, causing each of the people living downstream to have to buy bottled water instead of drinking river water. The corporation saves the millions it would have spent cleaning up after itself, and the people are slightly inconvenienced, let’s say by $25 a month each in bottled water costs. Okay, now in a world of only individual action, who’s going to step up to battle the corporation to save themselves $300 a year?
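To make the imbalance concrete, here is a minimal sketch of the arithmetic. Only the $25-a-month figure comes from the example above; the number of downstream residents and the corporation’s avoided cleanup cost are hypothetical values chosen purely for illustration.

```python
# Dispersed costs vs. concentrated gains: hypothetical numbers illustrating the
# dumping example above. Only the $25/month figure comes from the post; the
# population and cleanup-savings figures are assumptions.

monthly_cost_per_person = 25        # bottled water, dollars per month (from the post)
residents_downstream = 50_000       # assumed number of affected people
corporate_savings = 10_000_000      # assumed annual cleanup cost avoided, in dollars

annual_cost_per_person = monthly_cost_per_person * 12       # $300 per person per year
total_dispersed_cost = annual_cost_per_person * residents_downstream

print(f"Each resident loses ${annual_cost_per_person:,} per year.")
print(f"Residents collectively lose ${total_dispersed_cost:,} per year.")
print(f"The corporation keeps ${corporate_savings:,} per year.")

# With these assumptions the residents' aggregate loss ($15,000,000) exceeds the
# corporation's gain, yet no individual resident has more than a $300 incentive
# to fight back; that is the collective-action problem described above.
```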

The question for Randians is: isn’t making a lot of people’s lives a little shittier so that a few people can get extremely rich the worst form of collectivism?

Modern politicians sometimes call this “socializing the costs and privatizing the gains.” I’m not sure whether they really get it, or have a solution, though. And it happens more often than we’d like to admit. For example, studies are beginning to show that high fructose corn sweeteners are bad for people — a negative effect that is not only widely dispersed, but delayed in time and uncertain in its extent. So how do we talk about responsibility? And then, of course, there’s cigarettes.

But it’s funny, because this was also something Andrew Carnegie knew all about. He was a big benefactor of libraries and schools, but he also regularly cut his workers’ wages. Carnegie believed that cultural institutions like museums were important, and that the workers would just spend the money on something frivolous if he let them have it. So he cut their wages, because he could, and built symphony halls with the money. Was society better off? Maybe. But let’s not pretend this was democracy; it was aristocracy. And that’s in Atlas Shrugged, too. Remember, Ayn Rand was originally Alisa Rosenbaum, from czarist St. Petersburg.

More thoughts about history

Allan Kulikoff wrote about history in the academy in June’s Journal of the Historical Society. He made a lot of interesting points. Only 2% of current undergrads major in history, he says, a historic low.

Kulikoff narrates two hypothetical career tracks, one based on the way things are, the other based on changes he’d like to see made. Interestingly, the subject of the first story leaves the academy and uses her knowledge to write bestselling historical novels. The second, luckier student publishes “a biography of an early twentieth-century female French architect…to substantial acclaim.” Guess who’s seen as the winner and who’s seen as the loser in this little morality tale?

It’s ironic, because further on in the article Kulikoff suggests that departments should prepare grad students for a more useful role in a changing world by stressing public history. His public history, however, is apparently limited to museums, archives, and historic preservation. Kulikoff doesn’t seem to consider writing history for the public as something young historians ought to do, although he does suggest departments should “Encourage tenured faculty to write for the public and count those books in promotion to full professor.” But why didn’t it occur to him that a (former) historian could take insights about causality and human motivation, along with interesting historical facts, and write popular books that might achieve goals similar to those of public historians? What is it about the historians’ project that scholars like Kulikoff believe must be done by (senior) professional historians, through traditional (and especially non-fiction) writing?

Textbooks and Online Teaching

Jonathan Rees, a professor at Colorado State University, wrote an interesting post for THS about ditching the textbook for college surveys. I think that’s very timely, since I’ve been reading about high school textbooks. I’ve TAed for a couple of surveys. The one that used the textbook, I’d have to say, stuck closer to being a survey. The other one, which used a set of primary source readings posted on Blackboard, ended up being about the professor’s interests and priorities. Although these were interesting, the class was so unbalanced and so obviously not a survey of US history that I think the students were not well served. So the one argument for a textbook, I guess, is that having a document that guides the curriculum might prevent an instructor from hijacking the course.

On the other hand, textbooks are boring and no one reads them, as Rees says. And he’s probably right: they’re filled with too many facts. Maybe because, as James Loewen suggests, they shy away from causality. In that case, what’s the solution? Better textbooks?

What’s standing in the way? Pretty obviously, the textbook industry. So why not make your own?

Lulu has recently started marketing itself to educators as a source of “education on demand” self-publishing solutions (http://www.lulu.com/education/). Seems like it would be fairly easy to put together a “course-pack” for a survey, if not an actual textbook. The difference between publishing a course-specific textbook (or collection of readings) and putting course materials on an institutional Blackboard site might not be apparent to people thinking only about the students in the desks in front of them. The importance of self-publishing is that a textbook (paper or electronic) can get outside the walls of that classroom. It may even get into the hands of regular people, who can’t afford the time or money to matriculate and take the course…

Jonathan Rees runs a history blog called More or Less Bunk. Recently, he posted a piece called “Can they make you teach online?” It sparked a lively discussion in the comments, where Rees’s position was challenged by a professor at an online, for-profit institution. While I thought this was interesting, the whole argument, as far as I’m concerned, was taking place inside the box. I think the game is changing, and the question will soon be, can they stop us from teaching online? More about that very soon...

Alienation

I’m feeling a little alienated at the moment, which reminds me that I’m kind of “good at” alienation. When I wrote OTB, I set up my website based on the domain “stayoutsidethebox.com,” as a reminder to myself. As I’ve been working my way through courses, qualifying exams, and the dissertation, I’ve — maybe unavoidably — been a little more INSIDE the box than I’m completely comfortable being.

I think it’s significant that Harry Potter never has to kill anybody. But what type of myth — what type of worldview — does this give to a generation? We’re at war, after all. Unlike Rowling, history teachers are not supposed to be presenting escapist fiction. But if Jim Loewen is right, it’s not at the level of facts but at the level of myth that high school history really operates. So what about alienation?

Young adults are expected to rebel, for a little while, against the society of their parents. But typically, they’re then expected to accept (and ultimately inherit) the authority of the people and institutions they had rebelled against. This myth of cyclical youth rebellion is based on the myth of progress. What happens to it if we admit there are limits to growth? If society’s current behaviors are unsustainable, should we be reassuring ourselves and our children that their alienation is just a phase that they’ll outgrow?

Maybe Rowling wasn’t self-consciously constructing myth for contemporary kids. Tolkien probably was. Lewis definitely was. Gaiman, LeGuin, Miyazaki — yep.

This is probably where it’s at. So, what about myths in history? They’re there. Maybe we ought to be explicit about them. More on this at the THS blog.

Then as now

This is the masthead of Abner Kneeland's Boston Investigator, 1831. It's a press, with the legend, "Tyrants Foe, The Peoples Friend." Kneeland was convicted of blasphemy in 1838, served a prison sentence, and then moved away to Iowa.


Education and rights-talk

I was reading a snippet from Project Censored’s book, Censored 2011, this morning as I was waking up. It was a revisit to their story in last year’s volume, claiming that “US schools are more segregated today than in the 1950s.” The authors say that the mainstream media has picked up this story, at least in a couple of places like California. They call attention to “the inability of Americans to see the connection between racial and economic segregation,” which interests me. I’ve just finished listening to James Loewen’s Lies My Teacher Told Me, which claims there’s a taboo against mentioning class in American history textbooks. And Martin Luther King’s later speeches spring to mind. So I think they’re onto something here.

But then they make an unfortunate turn. The article concludes with the sentiment, “Sadly, the US will never be able to thoroughly discuss these issues unless government and community leaders begin a dialogue aimed at explaining that education is a right as important as any other.” I think this is a bad strategy, for a couple of reasons.

First, the issue should not be taken on by government and community leaders. It should be taken to those “leaders” by students and their parents. It might turn out that those “leaders” are part of the problem, and thus not suited to spearheading the solution.

And in any case, the government-action focus, like the rights talk, steers the reform bus right into the stone wall of conservatism. Critics of big government add their power to the critics of rights arguments, and the issue of segregation (or unequal access to educational resources and opportunity) takes a back seat.

But the question isn’t one of rights. You can believe that we all have rights to education, the means of production, adequate healthcare, a living wage, and so on, and support better education for everyone. But you can also believe we all have only the basic rights of life, liberty, and property guaranteed by a conservative interpretation of America’s founding documents, and still support better education for everyone.

The issue isn’t rights. It’s budget priorities. The question isn’t philosophical; it’s practical. Given that we live in 2011, with the government and political economy we have, what should we do? The US federal government spent $3.5 trillion in 2010, about 3% of which went to education. State and local governments spend an average of 29% of their budgets on education. Is this money being used to its best advantage? And can proponents of better and more equal education find more money in these budgets, by arguing that education is more important than many of the other things the money is being spent on?
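For a rough sense of scale, here is a minimal sketch of the arithmetic implied by those figures. The $3.5 trillion total and the roughly 3% education share come from the paragraph above; the resulting dollar figure is only an approximation, not an official number.

```python
# Rough arithmetic on the budget figures cited above. The total and the ~3%
# share come from the post; the result is an approximation.

federal_spending_2010 = 3.5e12   # total US federal spending in 2010, in dollars
education_share = 0.03           # roughly 3% of that went to education

federal_education_dollars = federal_spending_2010 * education_share
print(f"Roughly ${federal_education_dollars / 1e9:.0f} billion in federal education spending")
# -> Roughly $105 billion
```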

Personally, I believe that education is so important that it transcends even being a personal right. It’s a social responsibility, in any society that wants to survive in the long run. But that’s an ideological position that isn’t relevant to getting more money for education. We need to be attacking other spending, not trying to get conservatives (or those who espouse conservative ideology to support their policies) to have a philosophical awakening.

People claiming to be small-government conservatives often have little trouble justifying massive government spending in a whole range of areas. So rather than duke it out over whether governments should spend money, we might do more good by engaging on what the government is buying with our money. If nothing else, this would force opponents of education to actually oppose education, and advocate for whatever pork-barrel spending they think is more important.

That is, unless what we’re really looking for is an ideological “lost cause” that we can complain about without ever taking real action…

Why College?

Read Louis Menand’s New Yorker article on “Why We Have College,” after Randall mentioned it on the THS blog.

College is the place where society identifies intelligent people, says Menand. So the student’s inducement to enter the contest is presumably the prize you get if you’re selected. What is that prize? Has it changed over time? How many people are needed vs. the number that apply (what are the odds of winning)? Has this changed over time?

Or, college is a place where certain people receive a socialization that puts them “on the same page…” But on the same page with whom? For what purpose? Why do we want “a society of like-minded grownups”?

Menand says that in the 1930s, elite schools like Harvard and Yale “were largely in the business of reproducing a privileged social class.” After WWII, he says, standardized testing “insured that only people who deserved to go to college did.” In contrast to earlier periods, by 1970 Harvard’s acceptance rate was only 20%. “Last year, thirty-five thousand students applied to Harvard, and the acceptance rate was six per cent.” Menand mentions that schools like Harvard are now recruiting from an international pool, but he doesn’t mention the socio-economic backgrounds of the acceptees, so we have no way of knowing whether the changes of the 1970s he describes had an effect on outcomes, or merely on the number of applicants.

The overwhelming growth in college attendance since WWII has been at public schools. “In 1950,” Menand says, “there were about 1.14 million students in public colleges and universities…Today…almost fifteen million.” Menand goes on to say that students who are better prepared for college and take tougher programs (“courses requiring them to write more than twenty pages a semester and to read more than forty pages a week”) tend to do better. But once again, he does not specify the demographic status of these students, so it’s unclear whether they are even at the same institutions as the poorer performers. This is especially problematic when he claims that liberal arts majors performed better than technical or business majors. Are we comparing English majors at Bates with Accounting majors at Westfield State?

Maybe the best argument for liberal arts education is the one Menand doesn’t deliberately make: “liberal education is the élite type of college education: it’s the gateway to the high-status professions.” He observes that “Students at very selective colleges are still super-motivated.” Of course they are: the system is working for them and they can see a bright future. Menand warns that the flowering of inclusion that began with the GI Bill and continued through coeducation and racial integration may be coming to an end, which is a point well taken. As is his point about the growing divide between the top schools and the rest. But he never really identifies causes and effects. He doesn’t mention that the millions of middle-income jobs (white collar, in skilled trades, and in self-employment) these newly-included people looked forward to have been substantially reduced by globalization and the financial crisis. Still, if you look at the things the kids who expect to succeed in life are doing, based on the data Menand cites, there seems to be a positive correlation between working hard and doing well. The task of educators is to convince students of this.

Why PhD?

I posted a second comment on HCR’s post about successful grad students, and then deleted it because I thought it was too much of a personal observation. This was it:

I noticed the concept in a recent article on Inside Higher Ed, that colleges "buy" students with programs, amenities, prestige, etc. Does this become a self-fulfilling prophecy: if the "stars" are assumed to have gone to the "best" schools that could afford to attract them, do the other schools tend to treat their students differently based on their idea of their place in the pecking order? Regardless of why their students may actually have come to them, based on changing conditions, goals, etc.?

The basis of this comment, of course, is my own experience as a PhD candidate. And part of my concern stems from the competition between prestige and merit, especially where merit comes mixed with iconoclasm or general “otherness.” In plain English: do conventional measures of success (the name of the school on your diploma, your undergrad GPA, your GRE scores) substitute for real measures of interest and ability when people are making decisions about who to accept into a program, who to hire, and so on?

But let’s pass by that, and stipulate that although it’s an ongoing tension (and it’s certainly the job of the non-traditional candidate to point out the merit if s/he doesn’t have the prestigious credentials), it’s a problem most people are aware of and at least trying to correct for. There’s another question that I think is less personal and wider-reaching: institutional mission.

If we agree that there are too many history PhDs for the available jobs, and that this is a long-term issue, then we need to either adjust our understanding of what the PhD is for, or reduce the number of PhDs. I think it’s hard to win the argument that a history PhD in its present form is the ideal training for a wide variety of jobs. The general argument made for liberal arts undergraduate programs, that they prepare you for a wide variety of careers, does not really apply. So, where does that leave us?

Specific subject-matter expertise is one of the results of the level of scholarship generally associated with PhDs (although we should admit that highly motivated amateurs often achieve it as well). My friend TA will emerge from his dissertation process as an expert on the Civil War, and there’s always a pretty good market for Civil War history. Likewise, Scott Reynolds Nelson (who gave an interesting talk at my school this year) found his ideas in great demand in the banking community after his prophetic 2008 article on the Great Depression in the Chronicle of Higher Education. So, if you pick the right material, and understand why it might interest people and how it might be relevant to the present, there’s a chance of finding a popular audience.

The most important role of PhDs, it seems to me, is to teach people who are not going to be PhDs. Undergraduates, whose need for an understanding of their history is reflected in its inclusion in general education requirements at most schools. And Masters-level graduates, who may need an even more detailed understanding of some aspect of history in order to do their jobs. And especially people training to be secondary teachers, since they will be responsible for teaching the majority of Americans everything they’ll ever know about history.

What about the role of PhDs in teaching new PhDs? I think that’s a little more complicated at this point, partly because we don’t need as many PhDs as we’re getting. And partly because I’m not sure whether PhDs should be “taught.” Certainly there are teaching, research, and writing skills to be passed along. But how much of our PhD programs actually concentrates on this “craft” aspect of the job? And the other parts of the process (reading and processing the historiography; wrestling with the post-modern challenge; finding a research topic, applying theory to formulate it, researching it, and writing) seem to be done best when they’re done as independently as possible. The results of these processes would probably be improved by seeking candidates who could do them with little hand-holding, and then turning them loose to do their work. Perhaps this is my own bias, but I tend to associate excessive shepherding with the development of disciples rather than independent scholars.

How does this relate to grad student success? I think it means that grad students need to think carefully about their goals, and come up with their own definitions of success. History departments, I think, need to reassess their roles and missions as well. Are they training PhDs because that’s what they’ve always done? Because the Organic Chemistry Department and the Economics Department are training PhDs? Because their faculty expect to and demand to train PhDs?