technology

Filter Bubble

Eli Pariser (of MoveOn.org, which I dimly recall was a relevant site when Bush II was on the throne, but seems to have lost some luster since Obama turned out to be Bush III) writes about the way the internet is becoming personalized, and how that distorts our view of reality. He says, “In polls, a huge majority of us assume search engines are unbiased. But that may be just because they’re increasingly biased to share our own views. More and more, your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click” (p. 3). Not only do vendors like Amazon and Netflix use your purchase history to try to predict what you might want to buy next (which seems legitimate, and no more than an on-the-ball shopkeeper in the brick-and-mortar days would do), but increasingly, information is being collected about you without your knowledge and put into play in ways you don’t realize.

In commerce this info is gathered by “data companies like BlueKai and Acxiom, [which has] accumulated an average of 1,500 pieces of data on each person on its database—which includes 96 percent of Americans” (p. 7). Credit card companies have been profiling us based on what we buy for decades; now cellphones can report on where you go, too. This has some unpleasant implications in terms of direct marketing, and some possibly scary implications in terms of surveillance. But Pariser believes it also has a negative impact on social bonds and even on epistemology.

The filter bubble is Pariser’s name for the feedback loop of information that surrounds us, as even Google searches we believe to be unbiased are increasingly tailored to our personal profiles. Although he admits we’ve always selected media that appeals to our interests and preconceptions (hence the echo-chamber cable news channels), Pariser says the filter bubble is different in three ways. “First, you’re alone in it.” Each personal news feed or search is tailored specifically to you, so you’re no longer even part of a narrow affinity group. Second, it’s invisible. “Google’s agenda is opaque. Google doesn’t tell you who it thinks you are or why it’s showing you the results you’re seeing. You don’t know if its assumptions about you are right or wrong—and you might not even know it’s making assumptions about you in the first place” (p. 10). Finally, Pariser says that while viewers of politically slanted media are (presumably) aware that there is a range of options and that they’ve chosen one of them, as the filter bubble gives us more and more seemingly objective positive reinforcement in our preferences and prejudices, we begin to believe the world is like us.
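The feedback loop Pariser describes can be sketched as a toy simulation (my own illustration, not anything from the book; the topic names and weighting scheme are invented): a “personalized” feed ranks topics by your accumulated click history, you can only click on what you’re shown, and each click sharpens the ranking that produced it.

```python
import random

def personalized_feed(weights, k=3):
    """Return the top-k topics by accumulated click weight."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

def simulate(topics, rounds=20, seed=1):
    """Run the loop: show a feed, record a click, reinforce the ranking."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in topics}
    for _ in range(rounds):
        feed = personalized_feed(weights)
        clicked = rng.choice(feed)   # the user can only click what's shown
        weights[clicked] += 1.0      # the algorithm reinforces the click
    return personalized_feed(weights)

topics = ["politics", "sports", "science", "travel", "food", "art"]
print(simulate(topics))
```

Because clicks can only land on topics already in the feed, topics outside the initial ranking never gain weight: however long the simulation runs, the bubble never widens. That is the “alone in it, and invisible” dynamic in miniature.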

Because even web searches from commercial sites like Google are increasingly tailored to each user’s profile, Pariser says we are less likely to be exposed to a rich variety of ideas. Politically, this would tend to make us even more obnoxiously American than we already are. But he also claims that it will hinder creativity by promoting a more passive style of info gathering, and by narrowing what he calls “the solution horizon, the limit of the conceptual area that we’re operating in” (p. 95). It’s hard to be outside the box, Pariser believes, if the box is narrow and invisible. If innovation comes from the juxtaposition of unrelated ideas and from some type of creative cross-pollination that happens when people expose themselves to unfamiliar stimuli, then we could be headed toward a generation of accountants. And this change has been noticed even by some techies: “The shift from exploration and discovery to the intent-based search of today was inconceivable,” Pariser quotes an unnamed Yahoo editor lamenting (p. 103).

Some of the comparisons Pariser makes between Google (which profiles you based on your click history) and Facebook (which profiles you based on what you share) carry less weight than his argument above. You might even call them trivial. But the point is still worth remembering: “If you’re not paying for something, you’re not the customer; you’re the product being sold” (p. 21). Overall, though, I think Pariser overstates the danger of the filter bubble because just like the techno-evangelists he criticizes, he overestimates the importance of the technology. The box isn’t invisible – the box is the commercial internet. Creative people have no trouble seeing that. The problem, which Pariser gets close to and then misses, is that we’re training a generation of people not to be creative. The average American spends 36 hours a week watching TV. Switch that to surfing the web and you’ve still got the same problem.

It’s a good book and a quick read. Pariser asks some provocative questions. But he doesn’t offer a lot of solutions. A government regulatory agency that supervises these data collectors does not sound like a good idea to me. The only people I want having my personal info less than salesmen are bureaucrats. Pariser mentions the movie Minority Report – I’m thinking Enemy of the State. RFIDs cost about a nickel apiece, and it’s been nearly 15 years since I sat in a presentation by a semiconductor manufacturer’s rep (I think from National Semi) who talked about all the ways they were thinking of deploying them. So what are some ways of getting out of the filter bubble?

First, limit the amount of info you’re giving away. Assume you’re always being watched, and act accordingly. Don’t carry a smartphone everywhere. Use cash. Search on something other than Google. Use Tor or some other anonymizing web service. Get off Facebook. Remember that everything you post to any website you don’t personally own probably becomes someone else’s property, and that the stuff you post on your own site can be copied and saved by anybody. Forever. And from the network perspective, it’s never been easier for regular people to communicate, and it doesn’t have to be through the commercial web. WiMAX base stations are cheap, and can connect entire towns and cities into networks that don’t depend on the AT&Ts and Time Warner Cables of the world. Those networks won’t have Netflix or YouTube on them (or much porn, either), but if that’s all we’re really looking for, then it’s already too late.

Heroes or Machines?

Alf Hornborg
The Power of the Machine: Global Inequalities of Economy, Technology, and Environment
2001

“Like all power structures,” Hornborg begins, “the machine will continue to reign only as long as it is not unmasked as a species of power.” If only it were so easy. We may realize that the emperor is naked, but that doesn’t stop him from being the emperor.

Hornborg’s analysis is built on two big ideas. The first is a definition of power as “a social relation built on an asymmetrical distribution of resources and risks.” (1) When I read this today, the image that came to my mind was Beowulf (but I couldn’t find a good pic, so here’s Aragorn). Risks can either be taken or imposed. When you take a risk, you accumulate honor. When you impose a risk on someone else, you accumulate power.

The second is the idea that beyond the cultural construction of our idea of “the machine,” there are actual machines. And Hornborg says, “the actual machine contradicts our everyday image of it.” He believes “the foundation of machine technology is not primarily know-how but unequal exchange in the world system, which generates an increasing, global polarization of wealth and impoverishment.” (2) We believe machines embody progress, and an escape from Malthusian disaster. But “We do not recognize that what ultimately keep our machines running are global terms of trade. The power of the machine is not of the machine, but of the asymmetric structures of exchange of which it is an expression.” (3)

The way machines concentrate resources from the periphery into the center, while seeming to be making something out of nothing, is by keeping our attention firmly focused on that center. To prove his point, Hornborg cites the Second Law of Thermodynamics, and Ilya Prigogine’s elaboration of it in his theory of Dissipative Structures. Increases in order, which Hornborg calls negative entropy or negentropy, are only possible locally, and are taken out of the wider environment. “Any local accretion of order,” Hornborg says, “can occur only at the expense of the total sum of order in the universe.” (123) In the case of biomass, the energy to create this order is taken from sunlight by photosynthesis. This isn’t a completely efficient process, but it hardly matters on a human scale (so far). Where the entropy law becomes really important, though, is in the creation of what Hornborg calls “technomass” out of non-renewable resources. This is not only a zero-sum game, Hornborg says, but it has distributional implications that are deliberately, “systematically concealed from view by the hegemonic, economic vocabulary.” (3)
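Hornborg’s appeal to the entropy law amounts to a standard piece of thermodynamic bookkeeping (my notation, not his):

```latex
\Delta S_{\text{universe}} = \Delta S_{\text{local}} + \Delta S_{\text{surroundings}} \;\ge\; 0
\quad\Longrightarrow\quad
\Delta S_{\text{local}} < 0 \;\text{ only if }\; \Delta S_{\text{surroundings}} \;\ge\; \lvert \Delta S_{\text{local}} \rvert .
```

Negentropy in the center (factories, cities, “technomass”) is on this view a transfer rather than a creation: any local gain in order must be paid for by an at-least-equal increase in disorder somewhere else, which Hornborg locates in the periphery’s resources.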

“Industrial technology,” Hornborg says, “depends for its existence on not being accessible to everyone.” Industry presupposes cheap energy and “raw material” inputs, and high-value outputs. Entropy ensures that there isn’t enough to go around. “The idea of distributing [technology] evenly among all the peoples of the world would be as contradictory as trying to keep a beef cow alive while restoring its molecules to all the tufts of grass from which it has sprung.” (125)

What are the historical implications of this bleak argument? Well, for one thing, once machines and the exchange relationships they use and represent “assumed the appearance of natural law…the delegation of work from human bodies to machines introduced historically new possibilities for maintaining a discrepancy between exchange value and productive potential, which in other words means encouraging new strategies for underpayment and accumulation.” (13) Why? Because while it is relatively easy to recognize the basic justice that an individual owns his own work, it’s harder to say who should own the work of the machines built with (cheap) resources and (cheap) labor bought far from the high-priced central markets.

This was the thing that Marx missed, either because it was harder to see in his time, or because (as Hornborg suggests) he “fetishized” machines and expected them to solve the historic problem of the proletariat (there’s a whole chapter redefining Marx’s theory of fetishism and applying it, but I won’t go there now). At some point, Hornborg says, global growth became primarily based on “underpayment for resources, including raw materials and other forms of energy than labor.” Hornborg replaces Marx’s labor exploitation with resource exploitation as the central factor in capitalist accumulation. This change may be the bridge from a traditional Marxist critique of capitalism to a “green” critique. Money values may increase and the illusion of global economic growth may temporarily hide the zero-sum nature of the game, but in the long run “what locally appears as an expansion of resources” turns out to be “an asymmetric social transfer implying a [hidden] loss of resources elsewhere.” (59)

Another implication is that, historically and “still today, industrial capitalism is very far from the universal condition of humankind, but rather a privileged activity, the existence of which would be unthinkable without various other modes of transferring…resources from peripheral sectors to centers.” (60) This should impact discussions of the “market transition” in history just as it affects our understanding of contemporary economic development.

The other major implication, for me, is that locality is important. In nature, systems tend to regulate themselves. “As long as a unit of biomass is directly dependent on its local niche for survival, there will tend to be constraints on overexploitation and a long-term (if oscillating) balance. Industrial growth, however, entails a supra-local appropriation of negentropy.” (123) The concept of capital breaks this local ecology, and creates what Hornborg calls “a recursive (positive feedback) relationship between some kind of technological infrastructure and some kind of symbolic capacity to make claims on other people’s resources.” (61) When capital can begin to be accumulated far from its source, we’re on our way to a world where “the 225 richest individuals in the world own assets equal to the purchasing power of the 47 poorest percent of the planet’s population.”

So how should historians respond to this? One possible response might be to point out that people in the past did not necessarily know what thermodynamicists now know, and what Hornborg argues applies to society. So there was not necessarily the same sense of a conspiracy to evade this knowledge in the past that Hornborg suggests there is in the present. After all, Enlightenment rationality grows out of and responds to the longstanding medieval world view, in which people took for granted that the world had been created and peopled for a purpose.

Another reaction might be to argue that Hornborg is wrong: that in fact dissipative structures do not make the center-periphery relationship look like he describes it. Critics could argue that technology actually breaks the zero-sum nature of the game, and acts as a rising tide that lifts all ships. Different data sets could be assembled to support both sides. So if theories like these can never be conclusively proven, what’s the solution? Locality? Honor instead of power? Maybe history, seen through the lens of thermodynamics, will provide some illustrative stories…