Science

Gleick: Info and Meaning

The Information, part 2

Richard Dawkins’ fundamental contribution to science, says Gleick, is the idea that “Genes, not organisms, are the true units of natural selection” (Kindle Locations 5328-5329). He cites The Selfish Gene, which I really ought to get around to reading again soon. But then he takes it someplace I’m not sure Dawkins intended (although given Dawkins’ conclusion that memes act like genes in the real world, maybe he did…), and suggests that genes are not in fact the strings of base pairs seen under the microscope. They are ideas. After all, Gleick says, “There is no gene for long legs; there is no gene for a leg at all. To build a leg requires many genes…[and what about] more complex qualities—genes for obesity or aggression or nest building or braininess or homosexuality. Are there genes for such things? Not if a gene is a particular strand of DNA that expresses a protein. Strictly speaking, one cannot say there are genes for almost anything—not even eye color. Instead, one should say that differences in genes tend to cause differences in phenotype (the actualized organism)” (Kindle Locations 5414-5421). So what are genes? The information? Or the observed changes in phenotypes that result? Gleick concludes, “The gene is not an information-carrying macromolecule. The gene is the information” (Kindle Location 5462). But what we observe depends on our focus, our values. So once again it’s a confusion between information as signals and information as meaningful data we care about.

Aside: have memes already jumped the shark?

In his section on probability and entropy, Gleick mentions that an infinitely long random string will ultimately include every possible combination. “Given a long enough random string, every possible short-enough substring will appear somewhere. One of them will be the combination to the bank vault. Another will be the encoded complete works of Shakespeare. But they will not do any good, because no one can find them.” (Kindle Locations 5814-5816). But isn’t that the point, if we end up saying the universe is information (which is where this is going)? Because Shakespeare DID find them.

“Researchers have established that human intuition is useless both in predicting randomness and in recognizing it. Humans drift toward pattern willy-nilly” (Kindle Locations 5819-5820). See Rosencrantz and Guildenstern Are Dead. Pi is not random, because it is computable. But if you took the digits of pi between, say, positions 1,000 and 2,000,000 in the string, wouldn’t THAT be a random number? So, in the real world, where context and completeness are not always discernible, don’t we get a lot of apparent randomness that might well be orderly? And that’s not even counting the mysteriousness produced by chaos and quantum indeterminacy. You just can’t get away from mystery. “Given an arbitrary string of a million digits,” Gleick says, “a mathematician knows that it is almost certainly random, complex, and patternless—but cannot be absolutely sure.” He continues, “A chaotic stream of information may yet hide a simple algorithm. Working backward from the chaos to the algorithm may be impossible” (Kindle Locations 6070-6095). You can’t decompile the program, or unstir the coffee (also from Tom Stoppard).
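
Gleick’s “simple algorithm hiding in chaos” is easy to make concrete. The digits of pi look statistically patternless, yet the whole infinite string collapses into a few lines of code. Here is a minimal sketch in Python using Gibbons’ unbounded spigot algorithm (my illustration, not an example from the book):

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time, forever
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
print([next(digits) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A million digits from this generator would pass the usual statistical tests, but their true complexity is only the length of the program: apparent randomness, tiny algorithm.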

Gleick discusses compression, which at its heart is a process of finding patterns that can be expressed in fewer bits than the original message. But again, we’re operating on something that is already an abstraction. It’s a photograph, or a digitized sound, or a string of text. So all we’re talking about is human perception and language efficiency. Lossy compression is the key to human consciousness. We can’t deal with the reality all around us, so we filter it. This is old philosophy.
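
As a toy version of that pattern-finding (my sketch, not an example from the book): a run-length encoder shrinks its input exactly to the degree that the input is patterned, and inflates input with no runs to exploit.

```python
from itertools import groupby

def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encoding: collapse each run of repeated
    characters into a (character, count) pair."""
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(rle_encode("aaaabbbcc"))  # [('a', 4), ('b', 3), ('c', 2)]: shorter
print(rle_encode("abcabcabc"))  # nine (char, 1) pairs: no runs to exploit,
                                # so the "compressed" form is longer
```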

John Archibald Wheeler said “It from Bit”: “Every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence … from bits” (Kindle Locations 6350-6351). But the bits are answers to yes-no questions. They require the questions in order to have any meaning. So once again, we’re talking not about reality, but about human perception of reality. It’s David Hume all over again.
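
A concrete way to see this (my illustration, nothing from Wheeler or Gleick): the same bit string decodes to completely different answers depending on which yes-no question tree you run it through. The questions, not the bits, supply the meaning.

```python
def decode(bits: str, tree):
    """Walk a nested (no-branch, yes-branch) tuple tree:
    '0' takes the first branch, '1' the second."""
    node = tree
    for b in bits:
        node = node[int(b)]
    return node

# Two hypothetical question trees over eight leaves each.
animals = ((("ant", "bee"), ("cat", "dog")), (("eel", "fox"), ("gnu", "hen")))
colors = ((("red", "tan"), ("blue", "cyan")), (("gold", "gray"), ("pink", "plum")))

print(decode("101", animals))  # fox
print(decode("101", colors))   # gray: same bits, different meaning
```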

Finally, at the end of it all, Gleick admits “The birth of information theory came with its ruthless sacrifice of meaning — the very quality that gives information its value and its purpose” (Kindle Locations 7462-7463). Yes! Finally!! So the obvious thing to do at this point is to regain subjectivity. At long last we realize “words are not themselves ideas, but merely strings of ink marks; we see that sounds are nothing more than waves. In a modern age without an Author looking down on us from heaven, language is not a thing of definite certainty, but infinite possibility; without the comforting illusion of meaningful order we have no choice but to stare into the face of meaningless disorder…” (Kindle Locations 7505-7507). And make our own meaning.

Gleick: Information = Entropy

Finished The Information. Interesting, although I thought he took an awfully circuitous route to the conclusion that information is not meaning. As a result, I wrote a lot of notes along the way about questions he finally answered in the last chapter and epilogue.

A partial list of the significant passages (part 1 of 2):

“A binary choice, something or nothing: the fire signal meant something, which, just this once, meant ‘Troy has fallen.’ To transmit this one bit required immense planning, labor, watchfulness, and firewood.” (Gleick, James (2011-03-01). The Information: A History, a Theory, a Flood (Kindle Locations 290-292). Random House, Inc.. Kindle Edition.) This was the first clue that messages and meaning were related, but not identical.

Gleick writes about African talking drums, whose users add little descriptive phrases to words, creating a poetic-sounding message reminiscent of epic oral poetry like Homer’s. The point of this practice is to overcome the ambiguity of words that sound the same on the drums. Gleick calls this error-correction, an example of “redundancy overcoming ambiguity” (Kindle Location 443). In Homer’s case, the purpose was mnemonic, but also perhaps related to the ephemeral nature of the spoken word. “The sea” is over too quickly. “The wine-dark sea” hangs in the air a little longer, allowing the hearer to spend a little longer thinking about it, visualizing it, absorbing its significance. In spoken storytelling, what better way to indicate emphasis than the time devoted to a thing? Note to self: this is probably a good rule for online or even print storytelling, too.

Gleick says “John Carrington,” who wrote The Talking Drums of Africa in 1949, “came across a mathematical way to understand this point [redundancy]. A paper by a Bell Labs telephone engineer, Ralph Hartley, even had a relevant-looking formula: H = n log s, where H is the amount of information, n is the number of symbols in the message, and s is the number of symbols available in the language” (Kindle Locations 463-465). I thought it was interesting that Carrington was aware of the Bell Labs publications; it would be worth tracing the early movement of these ideas, since presumably Carrington was living in the African bush when he read Hartley’s article.
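
Hartley’s formula is simple enough to play with directly. A quick sketch (my numbers, purely for illustration): with base-2 logarithms H comes out in bits, and a smaller symbol set, like the two tones of the drums, forces longer messages to carry the same amount of information.

```python
import math

def hartley(n: int, s: int) -> float:
    """Hartley's H = n log s: n symbols from an alphabet of s symbols.
    Using log base 2 expresses the result in bits."""
    return n * math.log2(s)

print(hartley(10, 26))  # ~47 bits: ten letters of a 26-letter alphabet
print(hartley(10, 2))   # 10 bits: ten beats of a two-tone drum, so the
                        # drummers must send more symbols (redundancy)
```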

Plato objected to writing, even as he was recording the dialogues of his mentor Socrates, because he believed “this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom” (Kindle Locations 542-544). This objection, of course, has been made to every improvement in technology between writing and Twitter. Probably not without some truth, but apparently not as disastrously as predicted. But once again, the danger is Sauron’s ring and the Sandman’s ruby: too much power invested in the tool renders the user powerless if the tool is lost.

“Writing,” Gleick says, “appeared to draw knowledge away from the person, to place their memories in storage. It also separated the speaker from the listener, by so many miles or years” (Kindle Locations 548-549). It also seemed to depersonalize the information and invest it with an aura of truth, because it upset the power balance between speaker and listener. Face to face, people took turns, and power passed easily from one speaker to the next (Piggy’s got the Conch). A text creates the appearance of authority (although it’s interesting to recall how this tendency is periodically subverted throughout history, as disruptive technologies like the press and the web allow many new voices to be heard).

“The alphabet was invented only once. All known alphabets, used today or found buried on tablets and stone, descend from the same original ancestor, which arose near the eastern littoral of the Mediterranean Sea, sometime not much before 1500 BCE…” (Kindle Locations 594-596). This is so epic, so Snow Crash.

Gleick talks about how the early philosophers like Aristotle had to define everything, even things as simple as beginnings, middles, and ends. But this only seems strange until he points out that these “are statements not about experience but about the uses of language to structure experience” (Kindle Locations 645-646). In real life, we don’t experience things this way. But if we’re from a literate culture, we automatically, almost subconsciously understand things this way. This is about as close as Gleick gets to postmodernism, but the reader can easily make the jump from here.

When Plato says “The multitude cannot accept the idea of beauty in itself rather than many beautiful things, nor anything conceived in its essence instead of the many specific things. Thus the multitude cannot be philosophic,” Gleick suggests that “for ‘the multitude’ we may understand ‘the preliterate.’ They ‘lose themselves and wander amid the multiplicities of multifarious things,’ declared Plato, looking back on the oral culture that still surrounded him. They ‘have no vivid pattern in their souls’” (Kindle Locations 649-654). This is a really interesting, helpful way to understand (historicize?) Platonism. And it focuses my attention on my own writing. How much is my storytelling a straightforward process similar to what might have been done in an oral tradition? How much is it a highly structured form that depends on my culture? The post-modern challenge is inevitable, as soon as we become literate...

In retrospect, the most valuable material in The Information may be in these initial chapters that force us to reconsider how our very thinking is conditioned by literacy. For example, “Logic might be imagined to exist independent of writing—syllogisms can be spoken as well as written—but it did not. Speech is too fleeting to allow for analysis. Logic descended from the written word, in Greece as well as India and China, where it developed independently” (Kindle Locations 672-674). This is remarkable, if only for pushing us to imagine thinking in a preliterate culture (which Gleick stresses is a lot different from simply being illiterate in a literate culture). Is there a similar change (if not on such an epic scale) in thinking happening in the “omniscient” wired world Gleick takes us to in the later chapters?

“Is it a fact—or have I dreamt it—that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence! Or, shall we say, it is itself a thought, nothing but thought, and no longer the substance which we deemed it! —Nathaniel Hawthorne (1851)” (Kindle Locations 2227-2231).
1851!

I’ve already commented on the liar’s paradox and other math issues, so I’ll bypass them here, except to say that it’s a typical problem for people who equate message with meaning. “This statement is false” isn’t “meta-language”; it’s a mis-definition. It conflates (and thus confuses) the “statement” with the statement’s content. The content could be evaluated for truth or falsehood. The statement itself should only be judged on syntax.

A related problem, which I realize Gleick is trying to address slowly in order to bring his readers through a process of discovery, is the idea that “given any number, following the rules produces the corresponding formula” (Kindle Location 3257). The problem is that often processes can’t be reversed. Gleick knows this of course (see his bestseller Chaos, and the epigram heading a later chapter: “You Cannot Stir Things Apart”). But I think it’s significant that the issue is not confined to the real world. C programs cannot be easily decompiled.
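
A cryptographic hash is a handy everyday instance of this irreversibility (my analogy, not Gleick’s): trivial to run forward, no known shortcut backward.

```python
import hashlib

# Forward: instant.
digest = hashlib.sha256(b"you cannot stir things apart").hexdigest()
print(digest)

# Backward: nothing to do but guess inputs and re-run the forward map.
# The function discards exactly the structure a reversal would need,
# much as an optimizing compiler discards what a decompiler needs.
```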

Information is entropy. This is the core idea of the book. There is some type of relationship between the world of things and the world of ideas. Gleick takes us through thermodynamics (mostly pre-chaos), touches on quantum mechanics, and hammers on Wheeler’s epigram “It from Bit.” Again, many of the things I found interesting related to the difference between information and meaning: “If English is 75 percent redundant, then a thousand-letter message in English carries only 25 percent as much information as one thousand letters chosen at random. Paradoxical though it sounded, random messages carry more information” (Kindle Locations 4071-4073). This illustrates the more-or-less inverse relationship between info and meaning (which Gleick returns to at the end). Even the oracular passage from Matthew, “Let your communication be, Yea, yea; Nay, nay: for whatsoever is more than these cometh of evil” (Kindle Location 4643), assumes the presence of a questioner. The yeas and nays are only meaningful in the presence of questions, and it’s only that context that gives them meaning.
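
That redundancy claim is easy to check informally. Here is a rough sketch (my code; a zeroth-order letter-frequency estimate, which actually understates English’s redundancy because it ignores spelling and grammar):

```python
import math
import random
import string
from collections import Counter

def entropy_per_char(text: str) -> float:
    """Zeroth-order Shannon entropy in bits per character,
    estimated from single-character frequencies."""
    total = len(text)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(text).values())

english = "let your communication be yea yea nay nay " * 50
noise = "".join(random.choice(string.ascii_lowercase + " ")
                for _ in range(len(english)))

print(entropy_per_char(english))  # well under 4 bits per character
print(entropy_per_char(noise))    # near log2(27), about 4.75 bits: the
                                  # random string carries more information
```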

“Thought interferes with the probability of events, and, in the long run therefore, with entropy. —David L. Watson (1930)” (Kindle Locations 4747-4749). Another of the many quotable passages. But the interesting question is, what types of thought, if any, decrease local entropy? The Shannon-Wiener disagreement over whether information constituted entropy or neg-entropy (because it can produce order) also depends on human perspectives on “order.” In the end, entropy produces universal order, but not of a variety we’d appreciate.

End of Part One.