mardi 28 janvier 2020

Biosphere routines, feedbacks

I always feel slightly at odds with the term "climate change." Deep political stasis needed shattering with a slogan, and the phrase filled in. We had to unify and focus, pack up our smorgasbord of ecological concerns and send a single must-have item home. The old terrestrial alarms of the 19th and 20th centuries ended up putting decent people off. Numbing them. Too often our best information was seen as alarmism. It wasn't seen as fire alarms or canaries in coal mines, as notices we should at least hear and preferably investigate to see if we can help. As in all eras, some concerns were exaggerated, or turned out to be failed best guesses from partial facts, but so many weren't.

The stasis isn't unlike finding yourself saying "everything causes cancer!" to dismiss a new snippet on how you might get or avoid cancer. The rational action, of course, is to listen to the piece of information. It could save your life. (And it could save someone else's if you share.) If you believe that thinking about cancer will spoil your fun while you smoke barbecued lamb or spray paint a car door in an unventilated basement, I'll tell you what'll spoil your fun: dying unexpectedly. Talk about exaggerated concerns! It is untrue that everything causes cancer. It is true, however, that the dose makes the poison. So if you were rational, you wouldn't say "everything causes cancer!" You would listen and then try to glean what evidence there is and what dose of this factor could be threatening. Even too much water will poison you, but if a companion points out that the wild berries you're eating are poisonous, good luck with the reasoning that "everything is poisonous!"

"Everything's bad for the environment!" Are we on the same page about the difficulty we Earth aliens are having when we think about this? It's a lot easier to ruffle your flightless wings and stick your head in some mud. It's more difficult to think, "Wait a minute. Many things are bad for the environment. It isn't just one. But, hm, not everything is bad for the environment. And not everything is equally bad. Hm. So what are the worst things? And what are the best things? Where can we start today?"

It isn't actually that hard. It's just harder. And many people out there are setting the easy example of head-under-quicksand-it's-really-great-try-it. Humans love to imitate each other.

Climate change has helped focus our concerns. Most of us know it isn't just global warming from CO2 that's looming on our radar. At the same time, I worry that "climate change" is both too vague and too specific. If someone doesn't feel as if a changing climate sounds all that bad—after all, we've lived through ice ages and hot spells, and plants will enjoy some extra CO2—then "climate change" is just vague enough and just specific enough that this person may feel excused from considering any of the problems at all.

This leads to the question of whether we face one problem or many unconnected problems. And while there are many problems, I believe they are connected. These are dangers to the biosphere introduced by inefficiencies and inadequacies in the world's political and economic practices. In my opinion, we have plenty of information about how to upgrade those practices. The trouble is that it's happening so much more slowly than the speed of knowledge and understanding.

vendredi 24 janvier 2020

Sneak

Today I want to talk about Shakespeare. And not the stuff everyone always talks about—he was an actor, a playwright, a poet, likely gay, all actors at the time were men, he invented many words and phrases we still use, he was "the greatest," etc. I want to talk about three other things, one mentioned slightly less often, one mentioned rather less often than that, and one basically never mentioned. But I don't want to talk a lot. (Edit: too late.) I want to put the thoughts on the table for later.

One, mentioned slightly less often. Plays circa 1600 were still considered just about the lowest of low culture, and actors were looked down on or even despised. They were seen as riffraff. You know how some adults see young skateboarders in motley clothes as deficient human beings who should be cleared away? "NO SKATEBOARDING," don't you see the sign? Wait, why isn't there a sign! [Marches off to yell at some official.] In 1572, the English Parliament officially called actors "vagabonds and sturdy beggars." There were laws against hanging out in the streets while also having acting as a profession. Peculiarly like that sign against the skateboarders, right? There were guilds for other professions, the respectable ones, but unlike today, there was no Screen Actors Guild protecting actors' human rights and wages.

Okay, I admit I'm exaggerating slightly. It had been rough, it had been like that, but things were changing, slowly, at least for several of the very best troupes. Shakespeare's troupe would perform for King James sometimes, and this association with royalty gave them some protection and extra funding. They would constantly perform for the public in open-air theaters, people of all classes, and not just in London, but also in the countryside. We have to remember the then-recent history of theater, which was still in evidence all around the country: actors are scum, plays are trash. Today Shakespeare is often seen as high culture, such high culture that we expect to feel dumb trying to decipher his torrent of metaphors, so much that if you merely claim that your movie or novel is based on Shakespeare, that immediately makes you seem legit.

This all very loosely and associatively reminds me of "Mad King Ludwig II" of Bavaria. Ludwig was called insane by his doctors—quite suspect doctors who happened to have been bribed by his enemies to find something wrong with him. What ensued from their examinations was 19th century fake news. Their critical evaluations of his psyche were splashed over the newspaper for everyone to read. In our era, that would already be seen as a terrible breach of privacy for a psychiatrist, but worse still was the diagnosis. Actually, Ludwig was just gay—hardly a mental illness, but sadly that isn't how people saw it back then. As I mention above, Shakespeare was probably gay or bisexual, though he married and had three children. How this might have interacted with his Catholicism, which was another taboo in early Church of England times, is a fascinating question, and who knows, might have something to do with the disappearance of all his books, manuscripts, diaries, etc.

Anyway, this young monarch of Bavaria was also popularly trashed for his building projects, which almost everyone considered a colossal waste of money. To be fair, the kingdom of Bavaria was not doing so well, and more money would certainly have been appreciated. But also to be fair, these were not state funds he was exhausting, but the family's funds. Now fast-forward a bit. His castles now rank among Bavaria's biggest tourist draws, bringing enormous amounts of money into the region every year. And here's a commercial trifle many people care about: Neuschwanstein, his unfinished chef-d'œuvre on a mountaintop, gives us the image of a fairytale castle, the one lifted directly into Disney's logo. Our so-called mad king built the Disney castle before there was a Walt Disney alive to copy it. (If you're ever in that part of Bavaria and you haven't been up that mountain, I highly recommend the visit. You will not regret it.)

While King Ludwig may not have been a very good king—he wasn't even interested in being a king—his unparalleled vision probably mists into your mind when you think "fairytale castle." Today, we would likely consider Ludwig's extravagant construction projects not so much a waste of money, but rather a series of moon shot speculations that turned out unexpectedly well. Ludwig was insulted and smeared for being "insane" enough to love men, interior decorating, and co-designing beautiful architecture. Worse, he died young under circumstances that still look suspicious; many believe he was assassinated. It seems obvious that he was an artist born into the wrong profession: king.

Shakespeare's and Ludwig's biggest commonality is that the enduring value of their work was not recognized in their lifetimes, and both had to contend with fairly brutal prejudices against their sexual orientation, their talents, and (in Shakespeare's case) their religion.

Two, mentioned rather less often than that. Shakespeare and company were using a new technology. There has to be some reason we're virtually unaware of any plays after the ones by Sophocles in ancient Greece (most of us), and then suddenly we're aware of this Shakespeare bloke from 2000 years later. Does it strike you as odd, on reflection? What happened to all the other playwrights in that time? Were there any?

Yes, of course. Something new was happening that begins to explain this misalignment, and I mean something other than the printing press. Remember, there was no printing press in ancient Greece, and some of Shakespeare's plays were likely written out from memory by his fellow actors years after his death, so "lack of a printing press" doesn't explain the 2000 year gap at all.

Curious yet? Ok, what was changing was that for the first time in history, building technology was good enough that you could actually make a theater that was difficult to sneak into without paying. A few guards couldn't stop a crowd before. For the first time, you could charge everyone a price of admission, like at the movie theater. That's so standard for us today that we don't stop to wonder whether it began at some particular moment in time. It began in Shakespeare's day. Ticket prices were low—I've heard one penny—so basically anyone could attend, and yet barring people who didn't pay—to encourage payment—allowed at least the very best troupes to make a steady living. Few respected the actors or the art form, but if you were Shakespeare enough, you could be well-off from ticket sales. This here, my friendly readers, is a very early example of middle-class culture. It wasn't just kings paying for private entertainment or free religious/moral tales for everyone. Shakespeare was something like a Kickstarter baby. His generation of playwrights was more crowd-funded than any of its predecessors. (At least, this is what I read in a scholarly introduction to Antony and Cleopatra written by the editor, A. R. Braunmuller. It's called "The Theatrical World," and it appears at the beginning of every book in the Pelican Shakespeare series.)

One other shift was that though the Catholic Church had long suppressed theater, which probably explains some of the big gap, England had split from the Catholic Church in 1534 with the foundation of the Church of England. I have no idea whether that was a factor, but it seems possible.

Three, one basically never mentioned. We give credit where it's due for words and phrases that come to us from Shakespeare. There are hundreds, thousands. But you know what's strange? When you compare Shakespeare to other top playwrights from his time, he actually invented (or recorded before anyone else) fewer words than they did. Not bad for a buncha riffraff, huh? It's just that his plays were so impactful that the words and phrases he did invent changed the whole language. Many consider his writing the beginning of modern English.

So—we especially remembered his inventions, even though there were fewer of them. Why?

There's a distant parallel in the works of the "Italians" Petrarch and Dante. I use quotation marks, because there was no Italian nation at the time, and not even an Italian language. Those two poets spliced together the local languages, the bits they liked—kind of a "greatest hits" of ways people spoke around the peninsula—in their writings, synthesizing what eventually became modern Italian. They weren't the only good writers. But what they wrote stuck so much that it defined a language, this time a language where no single shared language had existed before their publications. This all circa 1300, when, as I say, there was no such language as "Italian." In 1850, there still was no such language as "Italian," but Petrarch and Dante spoke up from 500 years before. Their works had gotten so much traction over the centuries that they were instrumental in standardizing a language for the new nation.

Incidentally, almost a total tangent. This "invention of a national language" is why Italian spelling is so consistent and intuitive that Italians don't even really use a word for "spelling," they just say "writing." The language "began" recently enough that the spellings haven't accumulated baggage from old pronunciation patterns. Like, "through" and "who" have baggage from ancient pronunciation patterns. Same with French words like "ancêtres," where the accent on ê denotes that the word used to be "ancestres," but the middle s is no longer pronounced. Also the final e is mostly silent, the s at the end is silent, and the n is not really pronounced anymore either, though it influences the a somewhat. None of that guff in Italian. To a native English speaker, spelling in Italian feels like cheating.

We had Chaucer and Shakespeare and then centuries of language evolution, and only then some modest attempts at standardizing and simplifying spelling, which had to honor the old and the new and countless works in between. With the new country of Italy, they just updated Petrarch and Dante and taught that to everyone. ("Just" is an atrocious disservice to how long this took and how much effort, given that only 2% of the peninsula spoke the language when the nation formed in 1861. But it sounds smoother, ha!)

Another similarly tongue-defining pair of works is the Tanakh/Old Testament (for Hebrew) and the King James Bible (for English, in fact contemporary with Shakespeare, and some believe he was on the writing team). Both have been widely read for so many generations that they've played a major role in defining and preserving the respective languages. We would have a language, but without a big culture of widely admired examples of how to express ourselves, the language would have drifted faster, and "English" words from 1600 would probably be totally incomprehensible by now. In a sense, the best writing almost conspires to keep itself remembered by sticking pins in the language. It says, "Slow down! Keep speaking my language my way so you can remember me!"

Think about this: if a video game console came out without any games, or with only completely bug-ridden, slapdash, unplayable games, then no one would bother to write any emulators for the console. No one would bootleg the games. No one would write games in the same style. Soon, no one would remember the console or the games. If there were any unique interaction idioms in those games, they'd be forgotten, and if you were to encounter them, they'd make no sense to you. This is something I like to investigate... I've toyed with many old classics whose interaction innovations simply didn't catch on, and though I find the awkwardness lovable, sometimes it's even more incomprehensible today than it was back then.

Do you remember VTech Socrates? You don't. It was my first and only game console as a kid. And it was decently educational, and the music settled so deep in my brain that when I finally heard it again recently on YouTube, for the first time since I was probably 8, it was as if I'd heard it a week ago. But there were almost no releases for Socrates, just the math/spelling/music skills cartridge that came with it and one other that could be bought. Yours truly should be the prime candidate for a person who would know what that other game was, yet I have no clue. The company went out of business quickly. You've never heard of it. Right? There is no VTech Socrates emulator. The games don't look like games from any other system.

Meanwhile so many people love Super Mario Bros (I'm not particularly a fan but I respect it), it's easy to get a Nintendo emulator. Because it's easy to get a Nintendo emulator, it's also easy to run any of the other games that came out for that system. Other games were very popular too, and keeping a clutch of these popular games alive means that less successful ones for the same console stay more hydrated and whole than they would have been otherwise. Because the "language" of that console, its style, its controllers, its interaction idioms, remains a prevalent influence, no one is going to forget how to run, play, or make any of those games very soon. That means someone who grows up playing only the newest games today will still be able to understand the old Super Mario Bros easily. The influence on the present keeps the past available. Looked at a certain way, it almost feels like a conspiracy. With religion, it feels even closer to conspiracy. But I'm just being colorful.

So yes, Shakespeare was a wordsmith and a gifted one, but his language is not more complicated than the language of his rivals (less complicated, actually), and he invented fewer words. But I suspect we remember his words (or the words of his troupe if you see the authorship as communal, and it was—plays were often written collaboratively in taverns) simply because the plays are better overall. Then again, I've never read or seen one of these rival plays by the likes of Christopher Marlowe and Ben Jonson, so I don't have more than a suspicion. Some of the rivals and rival works were more popular than Shakespeare at the time. Would I recognize the words they coined? I don't know.

For one argument that could back the "better overall" claim up, it's often said that Hamlet was the first "modern" character, the first character with so complex a personality, with realistic enough introspection and inner conflict that we don't see him as phony, abstract, stilted. The earliest high-water mark for lifelike character writing that's agreed on is Odysseus. But there's a big jump, both in time and in realism. Hamlet is kind of the Mona Lisa moment in character creation and development. All other characters before that just seem less real. Or so it's said in English departments! If that isn't a misconception, I can believe that this special incantation—the lifelike figment, the entirely believable ghost of a human machine—would be more memorable than inventing words or writing rococo sentences. Maybe "good characters" is the best explanation.

jeudi 23 janvier 2020

Certain questions

Most people judge a person's claims and opinions by that person's confidence. We don't always realize we are doing this. We may explain it to ourselves another way; it has the status of an unconscious bias.

For perspective, early experiments with electrical stimulation of living brain tissue during surgery began to reveal how we rationalize processes inside us that we don't understand. An electric current sent to one part of a person's brain would reliably make that person laugh. When asked why, the person would say the doctor's face was funny. Clearly that was not the actual reason.

It's the same with other, more naturally present brain processes. Since these experiments, researchers have found much evidence that the reasons we give for our opinions and actions are unreliable. We aren't being dishonest. (There can be a difference between honesty and truth.) We explain why we think we like this product or distrust this person, but often the evidence strongly suggests that our minds are fabricating an explanation so that we can feel whole and coherent. Often the actual reasons are much more basic.

This is why a little subtle dishonesty can be very persuasive. Our minds are already at a disadvantage in that our beliefs about why we choose this or that are often mistaken or inaccurate. Malicious actors can exploit this.

The phrase "con artist," after all, comes from "confidence man," someone who would use trickery to get taken into your confidence. Confidence is a key weapon in the confidence man's arsenal. Our country is presently run by a confidence man. His tactics aren't even a secret. Yet they still work on tens of millions of people who have heard that he's a habitual liar and a con artist. They believe that they're laughing because the doctor is making a funny face, not because of the brain surgery they know they're in, but their minds are being hacked.

For the reasons I've just outlined, we should guard not only against knowingly accepting confidence as the reason we believe something—or someone—but also against unknowingly accepting it as our reason for believing.

I'll give you a better way.

First, when a person speaks, listen not for what they believe or how strongly, but for why. Why does the person say this? If they are biased or trying to trick you, there will be problems with their reasoning. Whether they know there are problems with their reasoning or not, they will try to distract you from talking about those problems openly and civilly. They will try to make this about you and them, for example, by guilt-tripping you or making you feel proud, instead of giving bulletproof reasoning or listening to questions with a good attitude, thinking about them, and being grateful to the people who bring them up.

It's like isolating two suspects at the station and questioning them separately until one says something inconsistent with the other's assertions. Here, though, you are not looking to make sure every thought coming from a person is consistent with every other. That worry is a red herring, one good liars know how to exploit. "A foolish consistency is the hobgoblin of little minds," remarked Ralph Waldo Emerson in his landmark essay, "Self-Reliance." Don't be afraid to disagree with what you've said before, or to observe two facts that seem contradictory, or even a hundred facts that seem contradictory. And let others do the same. It's smarter! It's kinder! It leads to discoveries! Eventually, even to agreement and consistency! See, you can find possible inconsistencies even in math proofs that are valid and known to work. Preoccupation with enforcing this unrealistic standard outside of math, or else with attaining an appearance of perfection outside of art, is following or planting a red herring. Let people be inconsistent; you are.

Try this instead. You're looking for how they support their beliefs and claims with evidence and logic. We may have a veil over our mind's inner workings and be deluded as to why we choose A over B, but logic either works or doesn't. Evidence is either accurate or inaccurate. There are shades of evidence from "none" to "extensive," yes, but the physical world does not change to match a wrong view; it stays the same. It's the view's responsibility to match the physical world and its potentials, not the world's responsibility to be how a confident person says. This is how we begin to extract ourselves from the problems I've outlined above. Don't make it personal, just look for why. What is the person's reasoning? What is the evidence?

Now comes the most important part.

How does the person handle being disagreed with?

To the trained eye, if you have no better source of information right now, this can sometimes tell you everything you need to know.

It's a window on how they reached their opinion in the first place.

If their first response is to undermine the person disagreeing by trying to make them look incompetent, stupid, or just plain bad, then that's a giant red flag. They are not relying on reason. They are making this personal. They are focusing on optics and its cousin side-taking rather than on facts and the tissue of reality and uncertainty that connects them.

A mostly accurate view can reflect reality quite well despite containing inconsistencies and paradoxes. The entire field of science is not invalidated every time a discovery resolves a point of confusion or rewrites part of a textbook. What picture in a few words can capture all the nuances and smooth out all the sharp edges? I know of no such picture, do you? A con artist will try to erase or deny these. An intellectually honest person, by contrast, will want to think about potential paradoxes and data gaps and talk about them and find out more.

It is not confidence to look for, but this kind of adventurousness. That is, if you really lack time, energy, or skill for a better, more thorough, more evidence-based approach.

A person who cannot admit uncertainty is less reliable than anyone else, when it comes to ultimate believability. No one is truly certain, 100%, about anything. Whoever claims to be is lying, often unknowingly. Knowledge in large part comes down to how we find, respond to, and manage uncertainty. If we decline to look for it, deny it when it's mentioned, and manage it by believing it doesn't exist, then we are con artists. Perhaps we are not malicious. Very likely we are not. Perhaps we do not even know we are con artists. Perhaps we are unknowingly conning ourselves for lack of training in a better way. Perhaps it would be more accurate and kinder to say we are unwittingly allowing our instincts and biases to con us. But if we are allowing our instincts and biases to con us, then we are opening ourselves up to attack by intentional con artists. And we are also magnets for all kinds of wrong ideas that people naively hold with great confidence, great, infectious, influential confidence.

Confidence is not bad. Do not take me the wrong way. Having an even keel of self-confidence and self-efficacy allows us to hear criticism and accept that we are imperfect and try again and improve. Healthy confidence is critical to living, learning, even listening. No one is always at exactly the same level of confidence, let alone entirely confident or entirely unconfident. It is a useful, an unavoidable and necessary, inner emotional signal, in its shades of presence and absence. But it isn't a reason to believe something. Confidence isn't logic. Confidence isn't evidence. And it isn't reliable.

Think of confidence as a quick sketch—a one-line drawing that may have artistic merit—rather than as the thing that's being drawn. You can admire that one-line drawing. You can feel it was made by a master. You can wish you had that vivacious boldness. You can feel inspired, comforted. You can imagine what the one-line drawing represents, even in great detail, if you want to make enough effort. But now do something for me, and more importantly for you and for people you know who listen to you. Put down the one-line drawing you've admired, and take a long, inquisitive, skeptical, deep look at the thing the sketch was a sketch of.

Forget the one-line drawing. What do you see?

Congratulations. Now you know how to do better.

dimanche 12 janvier 2020

Inner musical rhyme

It's funny, for some reason Western music mostly shuns the core Arabic-sounding (Gypsy/Jewish/Byzantine/Indian/etc) scale. All these different cultures around the world have found it. Occasionally classical or pop music deigns to find a chair for it, but we treat it as somehow pathological. Somehow we ought not to use it, for reasons I've never understood.

And I partly understand. It's one of the most difficult common scales to put in harmony. It just doesn't work with classic notions of chord and cadence. And that's probably related to the way traditions that prefer this scale tend not to be big on chords. Though I haven't done any sort of objective analysis, it seems that in monophonic, multitimbral traditions (one note at a time but multiple, possibly different instruments or singers sharing that note), it's often preferred. It makes for exciting melodies. If you want polyphonic harmony, in other words chords that move, you'll probably have to break out of the scale often.

But I think if you look around the world, there are 3 most common scales: major, pentatonic‡, and the Byzantine/Jewish/Arabian/etc one I'm lavishing with attention today. Cultures have discovered and rediscovered these scales ad infinitum because there are fundamental reasons the ear is drawn to them most.

‡ (The major pentatonic most of all for pentatonics, pentatonic meaning 5 notes in the scale, but I think there are reasons to emphasize the set of notes as one united scale even more than it is five modes, given that you basically can't play anything that sounds bad or off—try it! Sit at a piano and play the black keys, and tell me if you can really go wrong. It doesn't matter where your bass note is. Plonk on the black keys, and it'll sound good. This is the most universal core of music around the world. Major pentatonic, minor pentatonic, mixolydian position pentatonic, etc, whichever note in the pattern you start on, people around the world love it.)

What I've just realized about the Byzantine scale (in Western music it's called the Double Harmonic Major scale, but who cares, that makes it sound so specialized when it isn't at all) is that it's not only symmetric (I did know that before), the only perfectly symmetric scale in 7 notes (hm, I should check this claim for its conditions), but it's actually symmetric within its two halves. It's the most fractal-looking scale I've ever seen.

You can see what I mean... if 1 means the note is on, and 0 means it's off, here is the pattern of every regular Major key:

101011 101011

(If we're in C Major, this means there's a C in our scale, there isn't a C#, there is a D, there isn't a D#, there is an E, etc.)

You can see it's got a nice sort of symmetry, not a mirror, but repetition.

101011 <-> 101011

Here's Phrygian, the¶ Flamenco scale:

110101 110101

¶ (Not exactly true. Flamenco music uses other scales also. But I think Phrygian most immediately harkens to what's different about Flamenco and Spanish music overall, for most people, in the lowered second note. It's a convenient and practical jumping-off point for Minor, Phrygian Dominant, Double Harmonic Major, and from there even the latter's popular mode Double Harmonic Minor. Also, I'm not concerned about whether we call Phrygian a scale or a mode, for sticklers out there, as we're comparing chromatic patterns with fixed beginnings and endings.)

It's the Major scale, flipped. Neato, right?

101011 101011 Major

110101 110101 Phrygian

Major sounds like it wants to rise and keep rising. Phrygian sounds like it wants to fall and stop. They're perfect opposites.

-

If you're unfamiliar with scales and modes, these 2 patterns live in a family of 7 connected musical patterns called the modes of the diatonic scale (diatonic meaning 7 notes). They are also called the Church modes, because they go back past Renaissance music to Gregorian chant. And even earlier than that: they go beyond the first musical notation, so we don't know when they started. (If familiar with modes, you can skip to the next section.) What I'm calling a "family" of modes (such a family is called a scale, and this 7-note/diatonic scale is the most standard scale around the world, at that), is kind of like a large piece of fabric, and the modes are like sections of it. Maybe we can settle on wallpaper—imagine some kind of tartan wallpaper in a big roll, if you like. This big piece of fabric, the common musical scale, is based on the idea of harmonics, which are whole-number multiples of a core frequency. It isn't critical here, but harmonics are essentially multipliers for sounds. Anyway, to make a very long story short, this scale, this big "fabric" or "tartan wallpaper roll" is just the white keys on a piano.

In your mind, picture a rectangle of that striped wallpaper, and focus on the edges, which will look different depending on where you cut. If a scale is a big roll of wallpaper, a mode is a specific section cut from it, the stripes in the pattern situated wherever they are as a result of the cut at the beginning and the cut at the end of the section. Actually, the cut at the end occurs at the same point in the pattern as the cut at the beginning, the first time the pattern begins to repeat. That's always true with modes. And to go back and use my other metaphor, if the scale is a landscape, its 7 modes are like 7 big, overlapping photos of that landscape, or better still, 7 full panoramas with different center points. All this is most easily understood with your ears, by just playing the regular letter notes on a piano (white keys) or any other instrument: A, B, C, D, E, F, G. Hey, no sharps or flats yet! Now for the exciting bit. Starting on each letter and playing notes around it, you'll get a slightly different vibe. Actually, these vibes can feel very different from each other. There are 7 of those modes, one centered on each letter. They're like 7 windows looking out of the same castle on the same heathery, windswept valley, but focused on different parts of it with different emotions coming in through each window. There's the A window, the B window, the C window...

Major is the vibe you get when you start on C and play the white keys up or down to the next C. To be candid, just play anywhere, but always return to a C as the central station, the "home base" sound. Keep hitting C a lot, and also explore and play anything else, but just the white keys for now. So, whether it sounds lovely or excruciating, that's Major (C Major in particular, but I'll get to that). Phrygian, meanwhile, is the vibe you get from starting on E and playing the white keys up or down to the next E. And just like before, play all around, but only the white keys for now and keep returning to E as "home." Wherever you go every day, you start at home and end up at home, right? Travel the same way with music. If you don't return home, you'll feel lost or strange. Music uses that trick also, but most of the time, most of us return home at the end of the day, and the same is true in music. (Also, I know "only white keys" really sounds like some kind of racism, ha, and I'm sorry about that. There are pianos available that reverse the colors... though they tend to be more expensive because they look super rad and are in demand. Whichever palette your keyboard uses, interesting music tends to break out of this "only white keys" or "only black keys" thing.) Now, so far our keyboard strolling has demonstrated 2 of the 7 modes. The other 5 modes come from starting on F (Lydian), G (Mixolydian), A (Minor), B (Locrian), and D (Dorian).

There's much more, though, and this is just the tip of the iceberg of scale patterns. You can slide any pattern to start on any key of a piano, or on any frequency in the whole sound spectrum, including frequencies too high or low for a piano, or else drifting somewhere between the keys of a piano. For all starting points other than the mode's home letter on a piano, maintaining that mode's pattern or vibe will require black keys; or if we're between keys on the piano, we'll need more between-keys; if we're dealing with a musical staff and written notes, it'll require some sharps or flats. While C Major is all the white keys from C to C, F Major is almost all the letter notes from F to F, except not quite, because there will be no B key, but instead the B flat key to the left of it. And while E Phrygian is all the white keys from E to E, C Phrygian will have 4 black keys and only 3 white keys. The same goes for starting any of the modes on another letter. We break out of the white keys somewhere (or the black keys if the piano has reversed colors).
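
If you like seeing this sort of thing spelled out, here's a tiny Python sketch of the same transposition idea. The note names, the step lists, and the little spell() helper are my own shorthand (and the spelling is simplified to flats only; real notation would respell some notes), but it reproduces the two examples above.

NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR    = [2, 2, 1, 2, 2, 2, 1]   # the Major mode as semitone steps
PHRYGIAN = [1, 2, 2, 2, 1, 2, 2]   # the same diatonic pattern, started from E's spot

def spell(root, steps, names=NAMES):
    # Walk up from the root by the given steps and read off the note names.
    i = names.index(root)
    out = [root]
    for s in steps:
        i = (i + s) % 12
        out.append(names[i])
    return out

print(spell("F", MAJOR))      # ['F', 'G', 'A', 'Bb', 'C', 'D', 'E', 'F']
print(spell("C", PHRYGIAN))   # ['C', 'Db', 'Eb', 'F', 'G', 'Ab', 'Bb', 'C']

Four black keys in C Phrygian, none in E Phrygian, exactly as described.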

Note this is an artifact of piano design, not of sound itself. If you learn to sing and never look at a piano, there's no difference between E Phrygian and C Phrygian except where your voice is. The audio spectrum does not "think" or "feel" in terms of black keys or white keys, sharps or flats. Those are human constructs. On a guitar, sharp notes, flat notes, and natural notes look identical, because they are. A note is a frequency. Two notes are a pair of frequencies. Whether that pair sounds sharp, flat, or in tune depends on what you're used to hearing and what you're expecting, not actually on the frequencies themselves, not in any way that systematically matches the whole "sharps and flats" thing. Which isn't to say there's no reason for it, but sometimes we need to step back and look at ground truth.

One more point about naming. Often modes themselves are just called "scales," which is confusing. Basically all these patterns are scales, but "mode" is a special meaning of "scale" that refers to the (sibling) patterns you get from starting at different points in the same (parent) pattern. Let's pretend for a second, for illustration, that music is just numbers, not notes, not even sound. You could say the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2 ... are a scale, because, as you'll notice, it's a repeating pattern. If you started say on 4, that would create a different mode of the same scale: 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7 ... But since it's also a repeating pattern, it's also correct to call it a scale. It's just a scale that we're thinking of as a mode of another scale.

Minor mode (also called Aeolian):

A, B, C, D, E, F, G, A

Major mode (also called Ionian):

C, D, E, F, G, A, B, C

Phrygian mode:

E, F, G, A, B, C, D, E

And so on, for all 7 letters. The letters are shared, so it's all the same larger scale.

The spacing between the letters is not the same, so these modes have different sounds, feelings, atmospheres.
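
For the programmers in the room, here's a minimal sketch of that "same pattern, different starting point" idea. The step list and mode names are standard; the function name is just mine.

DIATONIC_STEPS = [2, 2, 1, 2, 2, 2, 1]   # semitone gaps between the white keys, C to C
LETTERS = ["C", "D", "E", "F", "G", "A", "B"]
MODE_NAMES = ["Ionian (Major)", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian (Minor)", "Locrian"]

def mode(start):
    # Rotate the one repeating pattern so it starts on a different letter.
    return DIATONIC_STEPS[start:] + DIATONIC_STEPS[:start]

for i, name in enumerate(MODE_NAMES):
    print(LETTERS[i], name, mode(i))

Same wallpaper, seven different places to make the cut.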

-

Back to the relationship between these two modes I was showing you, now that we've got a little background.

101011 101011 Major

110101 110101 Phrygian

Major is the second-brightest of the 7 diatonic modes (just below Lydian, the pattern starting on F). Phrygian is the second-darkest of them (just above Locrian, the pattern starting on B). You can see there's a kind of mirror of symmetry in between them, I hope. If so, that isn't your imagination. That isn't just a human construct. That's a real symmetry.

But here's the Byzantine scale:

110011 110011

It's symmetric, then it's symmetric inside each half. That's got to be related to why it's so goddamn compelling.

You can't get Byzantine by playing only white keys on a piano, unlike the others I've been talking about.

Also notice that you can get to it in two steps from the previous scale above it, Phrygian, and those two steps are actually the same step reflected in each half of the scale:

110101 110101

110011 110101

110011 110011

Traditional Spanish music just loves to navigate these three scales. They are called the Phrygian mode of the diatonic scale, the Phrygian dominant scale, and the Double Harmonic major scale, respectively, if we want to be "correct" (whatever that really means) and use the conventional names in English, down to which letters are capitalized. Personally, I call them Phrygian, Spanish Phrygian, and Byzantine. Works for me.

110101 110101 (Phrygian mode of the diatonic scale / Phrygian)

110011 110101 (Phrygian dominant / Spanish Phrygian)

110011 110011 (Double Harmonic major / Byzantine)

The last one is most recognizable as Middle Eastern or Gypsy or just downright foreign and exotic (though, believe me, there are much more foreign sounds out there).

Just in case you didn't believe anything eldritch was going on, let's visit the Hirajōshi scale, a family of pentatonic modes played on the koto, the zither-like national instrument of Japan. This classic Japanese sound is what you get by removing those two notes instead of moving them.

110001 110001 (Hirajōshi scale's corresponding mode, showing the alignment of patterns)

Detail: between the two halves in all the scales above, there's a note that isn't on. I'm using a tradition going back to ancient Greece of dividing scales into two "tetrachords," two groups of four notes, stacked one on the other. For most scales we use, there's one note missing between the two tetrachords. Hence the gap above. As the ancients would see it, we play a tetrachord, a little jump, and then another tetrachord. The jumped-over note forms a harsh interval with the root, the same sort of interval an ambulance siren plays. It doesn't sound nice in many circumstances. (It's often left out in various ways, even though it's actually critical to the push and pull of melodic and harmonic development, as for example in the Tristan chord, but that's another story.)

So the Byzantine scale actually looks like this:

1100110110011

And the first 1 and the last 1 are the same letter, the last being exactly one octave higher. For example, if we're in the key of E Byzantine, then the notes will be:

E F G# A B C D# E
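
Here's a quick sketch that checks the symmetry claims, using the 13-character tetrachord notation from this post (six chromatic positions, the skipped gap note, six more, the last position being the octave). The helper names are my own.

CHROMATIC_FROM_E = ["E", "F", "F#", "G", "G#", "A", "A#",
                    "B", "C", "C#", "D", "D#", "E"]

def notes(pattern13):
    # Translate a 13-character on/off pattern into note names, starting from E.
    return [name for bit, name in zip(pattern13, CHROMATIC_FROM_E) if bit == "1"]

def is_mirror(s):
    return s == s[::-1]

byzantine = "1100110110011"            # Double Harmonic major, a.k.a. Byzantine
print(notes(byzantine))                 # ['E', 'F', 'G#', 'A', 'B', 'C', 'D#', 'E']
print(is_mirror(byzantine))             # True: the whole pattern mirrors around the gap
print(is_mirror(byzantine[:6]), is_mirror(byzantine[7:]))   # True True: each half mirrors too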

By the way, the note skipped over in the middle of E Byzantine is an A#. It isn't part of the scale, but as I said, we shouldn't entirely ignore these skipped notes, each of which sits a tritone away from the root (strictly the tritone is the interval, but the note itself gets called that too). A#, the tritone, is the line of symmetry for this octave. It's the harsh center. Blues loves tritones! And going back some centuries, baroque composers figured out they could use them to change keys seamlessly. (Remember C Major versus F Major? Or E Phrygian versus C Phrygian? That's changing key. It makes an immediately audible difference, and if you aren't careful, it can sound totally wrong and incompetent, an obvious mistake. Then again, are mistakes necessarily bad? Anyway, crafty use of these ignored, underdog, siren-ugly tritones makes this a moot point, because with them you can change to any other key and have it sound right and pretty amazing.) You can grimace at the tritone, you can pretend the tritone doesn't exist, but it's still right there in the middle, and it still has soul.

There's an Indian tradition called Carnatic, which is probably the most complex, nuanced scale system in popular use. It has about 22 notes available (give or take depending on the particular area or tuning approach) instead of 12. Singers work within that system with hundreds of different scales, literally, and they know how to work them accurately.

The first song that's usually learned in the Carnatic tradition uses the Byzantine scale. Of all the hundreds! That strongly suggests it's one of the most compelling scales to the ear. And that's obvious. It just is. But it's maybe the most "scientific" test a culture has done, given that Carnatic has such a high-resolution tuning system, and they could have picked a zillion other places to start.

I would say maybe it's the visual symmetry appealing to new Carnatic practitioners as a mnemonic device, but I doubt that applies much when singing. Maybe? Maybe the scale is only so popular around the world because it's somehow easier to remember. But I can also say I've always been very drawn to this scale, and I didn't know about the symmetry for years. And the symmetry within the symmetry, the fractal quality? I didn't know that at all, yet I suspect my ears did all along.

jeudi 9 janvier 2020

Exclusive selection

What do Geiger counters have to do with what makes human brains unique? There's a much more specific answer than "human brains came up with Geiger counters, and crow brains procrastinated for too long and didn't."

In my post-before-last I made passing mention of a board game called Nim and its role in the history of computing. Actually, Nim is more of a matchstick game than a board game, but that doesn't matter! Nim is normally played with 4 rows of matchsticks, arranged in counts of 1, 3, 5, and 7 (that's a total of 16 matchsticks). Other counts work. And you could use tallies in orange chalk, pennies, donkeys, or whatever. What matters is that we have several different groups of items. Each row is a group. Mathematically, each row is actually a set, in our case a set of distinct matchsticks sharing the same row. Synoptically, Nim is an ancient Chinese game about sets and how many items are in them. But it's more fun to pick up matchsticks than to think of mathematical sets, sizes of sets, etc. (Play it here. The player who picks up the last matchstick loses.)

Ok, but what about the history of computing? On September 24, 1940, Edward Condon, a 38-year-old physicist, patented a machine called the Nimatron that he'd recently presented at the 1939-1940 World's Fair in New York. The showcase at the fair had taken place on May 11, and he'd filed for a patent a couple weeks earlier, on April 26. This machine could play Nim better than most humans. Indeed, it won 9 out of every 10 games.

The idea of robots was not new to fairgoers. One of the attractions was a man dressed up in a metallic (bronze? tough to tell in black and white) suit, talking in staccato fashion about having a mind. (Correction: that "man dressed up" was actually the robot Elektro. It had me fooled!) Other attractions prompted audience discussions about the new, advanced mechanization in factories putting workers out of jobs. At least one person took the other side of the old fence and said what AI researchers like to say today: that the widely feared loss of jobs had not actually happened, because mechanization changes the nature of our human roles and therefore of our jobs. The debate continues more than 200 years after it started. But all that aside, it's something quite different to walk up to a machine that promptly outsmarts you.

That's what happened for about 90,000 visitors at the fair, out of perhaps 100,000 who played the Nimatron. What's particularly interesting about the patent on this machine is that it's earlier than the first general computer, yet it uses a similar process. Somewhere in the back of Condon's mind, he was aware that more could be done with the gadgetry he was assembling. It could do more than just beat fairgoers at matchsticks (represented by lightbulbs).

A year later, in May 1941, a machine developed by Konrad Zuse, the Z3, became the first general, digital computer, a contraption broadly equivalent to whatever you're reading this post on, just slower and with less memory. The principle had been worked out, the invention built. And it worked. (I'm simplifying here. Yes it worked, but if Condon didn't know the full capabilities of the process, neither did Zuse. As far as the latter was concerned, this was a very effective calculator. No one knew his Z3 was a computer in the proper sense until work in the 90s proved it. However, it launched Zuse on a lifelong career in computer science, engineering, and manufacturing. He did knowingly build computers later. He was the first. He cracked it.) Let's return to Zuse's computer in a moment, though. After all, the Nimatron had in some ways beaten it to the punch.

One thing that made Nim particularly suited to such early development is that the winning strategy relies on binary numbers. It's actually quite easy, if you know how to count in binary.

1) Convert every row's matchstick count into the equivalent binary number.

2) Cross off pairs of identical digits. That is, with these binary numbers stacked up and aligned on their right sides, just as in grade-school addition, go down each column and cross off any pairs of 1s that you find. For example, if there are three 1s in the rightmost column, cross off two of them, and bring the last down, writing 1 under the column. If there were five, you'd cross off two pairs, or four of them. You'd also write 1 underneath. If there are four, cross off all four. Write 0 underneath. Everything canceled this time. If there's one, don't cross off any. Write 1 underneath. This process will result in a new binary number made of the straggler 1s for columns that weren't completely canceled out. At most a single 1 will be left for each column. Like with grade-school addition, each column represents a specific place value, a specific location in the final answer. You get a new binary number, but the value of that number isn't important. What we care about is whether it's 0 or not. When you receive a board position that produces 0, that's bad.

3) For your next move, make the sum 0. If at all possible, that's what you want to do. And while you're at it, remove as many matchsticks as possible to do so. Making a move in Nim (I should have said this before, but there's no time like the present) means choosing a row and choosing a number of matchsticks to remove from it. You can remove any number of matchsticks, but only from one row, and you have to remove at least one matchstick. Do this so that the process in step 2 passes a 0 to your enemy to deal with, every turn (except near the very end: when your move would leave nothing but one-matchstick rows, leave an odd number of them instead, so your opponent is stuck taking the last one), and you will definitely win.

Step 2 actually amounts to performing the XOR (exclusive or) operation on the numbers, something very easy to accomplish with gizmotry, and a core operation in computing. It's also very easy for humans when we line the numbers up vertically on paper, as in normal addition. Step 2 may sound complicated, but I could teach you to do it in about one minute, even if the words above make no sense.
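
If it helps, here's the whole strategy as a little Python sketch, written for the "last matchstick loses" rules described here. The function name and structure are mine; the endgame branch handles the final-matchstick twist mentioned in step 3.

from functools import reduce
from operator import xor

def winning_move(rows):
    # Return (row_index, how_many_to_take) that hands the opponent a losing
    # position under "last matchstick loses" rules, or None if we're the one stuck.
    big = [r for r in rows if r > 1]
    if len(big) <= 1:
        # Endgame: at most one row has more than one stick. Leave an odd
        # number of single-stick rows so the opponent takes the last one.
        ones = sum(1 for r in rows if r == 1)
        if big:
            i = rows.index(big[0])
            leave = 0 if ones % 2 == 1 else 1
            return (i, rows[i] - leave)
        if ones > 0 and ones % 2 == 0:
            return (rows.index(1), 1)
        return None
    # Normal phase: make the XOR of all the row counts zero (steps 1-3 above).
    nim_sum = reduce(xor, rows)
    if nim_sum == 0:
        return None                      # balanced already: the opponent has the edge
    for i, r in enumerate(rows):
        target = r ^ nim_sum             # what this row would need to become
        if target < r:
            return (i, r - target)

print(winning_move([1, 3, 5, 7]))   # None: 1^3^5^7 is 0, so the first player is stuck
print(winning_move([1, 3, 5, 6]))   # (0, 1): taking the lone stick rebalances the board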

Following this strategy assures a win if you are the second player. If you are the first, it assures a win if your partner (opponent) makes a mistake. Otherwise, they have the advantage and can always win by following the strategy exactly. Summary: be the second player. Other summary: if you're the first player, you may be out of luck, but try to distract your opponent. Skeleton key: use the strategy.

What about Geiger counters? I was getting to that.

Ok, the Nimatron was born in a moment of inspiration. Edward Condon was working with Geiger counters, and he suddenly realized the internal mechanics could do something more than just counting. (I don't understand the details, as I've yet to research the claim "he realized that the same calibration circuits used in Geiger counters (although built with ordinary electromagnetic relays, not by valves), can be used to represent the numbers defining the state of a game.") Fast forward a bit, and he's built this clever party trick that's perplexing people at a New York City fair.

What must have killed Edward, in hindsight, was that he could have gone about a million steps further, without too much additional effort. The same principle and the same components are what got Konrad Zuse's contraption calculating, well, uh, in theory, anything that could ever be calculated. That's more than a game of Nim. No matter how much you like Nim, you have to admit that "anything that could ever be calculated" is a rather larger prize.

You may think I've now connected Geiger counters and what makes human minds special, right? But there's something I've kept up my sleeve, and if you've been reading the science news lately, you may have noticed.

Breaking news, then. Some neuroscientists have been examining layers two and three of the human cerebral cortex. These are the layers of cells that seem to be most unique to us. They're much thicker in humans than in our closest animal relatives. The cerebral cortex itself, of course, is also much bigger, but just blowing up the square meterage isn't necessarily going to make for qualitatively different experiences and powers. Something about layers two and three, though, stood out. What stands out on closer examination is that the cells in these layers can compute XOR.

This may not seem impressive. After all, I promised I could teach you to do an XOR on several binary numbers in one minute. And it seems to be the easiest thing a computer can do. Why is this impressive at all?

I know, it doesn't seem impressive, right?

Let's talk about neural networks for a minute. The earliest attempts to model human brain tissue resulted in what are now called, depending on context, finite-state automata, regular expressions, or regular grammars. To put it simply, when you sign up for a new account online and it gives you some rules for what your password should be like, the code that checks your password against those rules is a finite-state automaton. More to the point, the rules themselves are represented as a finite-state automaton. It's sort of like a mini, prototype brain that can only say whether your new password follows the rules given. You can actually think of the rules as the finite-state automaton (or regular expression, or regular grammar—the analogy with grammar, rules about letters, may possibly make more sense now). It's primitive, but that was our first semi-successful attempt to model brain tissue. We now use it all over the world trillions of times a second, if not more, as these conceptual rule widgets are at the heart of all coding languages in practical use. These are not neural networks, mind you. That was the next innovation.
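
Before moving on to the neural networks, here's the flavor of that password-rule idea in a couple of lines of Python. The rule itself is made up for illustration; the point is that a pattern like this gets checked by marching through the string one character at a time, which is exactly the finite-state style of computation I'm describing.

import re

# Made-up rule: 8 to 16 characters, letters and digits only.
RULE = re.compile(r"^[A-Za-z0-9]{8,16}$")

print(bool(RULE.match("hunter2hunter2")))   # True
print(bool(RULE.match("short1")))           # False: too short
print(bool(RULE.match("pass word 123")))    # False: spaces aren't allowed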

Neural networks, to make a long story short, and I mean the artificial kind (ANNs, for artificial neural networks), use matrix math to try to match human pattern recognition and pattern creation. These networks of matrices can approximate any information process you want, using calculus behind the scenes to learn the right numbers to put in the matrices. This is much like, though not identical to, the way your brain can learn a vast array of different processes just by tweaking some connections with experience. And that is no coincidence. We were trying to mimic human learning, and, though it took a couple generations to get things moving at a good clip, today I think we can say we've succeeded far better than most people believed we would, even many who were working in the field.

"XOR, you promised XOR," I hear you saying. The most famous result early in research on artificial neural networks and their properties was that these networks, though they could learn many processes, could not learn XOR. This one little observation put a chill on the field for decades, because people who didn't entirely know what they were saying went around repeating the finding to each other. It became fashionable to make fun of ANNs for being overhyped, so overhyped they couldn't even do XOR. Ha ha. How silly those ANNs are.

ANNs can do XOR, actually. It just takes more layers of artificial neurons. While it is true that, using the basic matrix method we knew about decades ago, a single neuron (or even a single layer of neurons) cannot do XOR alone, just by looking at its inputs, it is also true that groups of neurons working together can do the same work as a single XOR operation. It's a group effort, if you will.

This may sound like a limitation of artificial neural networks, but actually all biological neurons examined until perhaps this last year—whenever the new finding occurred—had the same limitation as the simplified, mathematical, artificial neurons I've been talking about. There are no chimpanzee neurons that are known to be able to do XOR. Not without teaming up.

Yet the "most different" part of our brains has just been found to contain neurons that, completely on their own, can compute XOR.

So when I said I could teach you XOR in a minute, that's a funny statement. You have many cells that are doing XOR independently. I'd just be showing you how to do what a single one of your brain cells already knows how to do!

***

Maybe you'd use this capacity intuitively if you were playing Nim by the winning strategy. Instead of tallying things up, as in the 3 steps I give above, you could use your eyes.

As we scan the rows of matchsticks, we're looking for powers of 2. That's 1, 2, 4, 8, 16, 32, 64... but the only ones that show up for 16 matchsticks arranged into 4 rows are 1, 2, and 4. The setup of the game shows you: a row with 1 matchstick, a row with 1 matchstick and then 2 matchsticks, a row with 1 and 4 matchsticks, and a row with 1 and 2 and 4 matchsticks. That's the result of grouping into powers of two.

1
1 + 2
1 + 4
1 + 2 + 4

Which on the board looks like

|
| | |
| | | | |
| | | | | | |

Personally, I find it easier to think in big numbers first here, like this:

1
2 + 1
4 + 1
4 + 2 + 1

Mathematically it's identical either way, for the purposes of Nim (and addition, and XOR).

So you could see this:

|
| |   |
| | | |   |
| |   | | | |   |

Or this:

|
|   | |
|   | | | |
|   | |   | | | |

Or lining things up:

|
|   | |
|         | | | |
|   | |   | | | |

It's all equivalent.

You want there to be an even number of every group.

Wherever you see one of the clusters, look for another cluster just like it to cancel it out. If you don't find one, or if the other one has already been used to cancel another cluster, then the board is unbalanced. Every time you move, you want to make the board visually "balanced" in this way. That's it. That's basically the entire strategy. With only a few minutes of practice, you can "see" it. In this first board position, everything cancels out with another group.

I
I   | |
|         | | | |
|   | |   | | | |

|
|   | |
I         | | | |
I   | |   | | | |

|
|   I I
|         | | | |
|   I I   | | | |

|
|   | |
|         I I I I
|   | |   I I I I
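
If you'd rather let code do the eyeballing, here's a sketch of the same idea: split each row into its power-of-two clusters and report any cluster size that shows up an odd number of times. (It's the same arithmetic as the XOR in step 2; the function names are mine.)

from collections import Counter

def clusters(n):
    # Powers of two that add up to n, e.g. 7 -> [1, 2, 4].
    return [1 << i for i in range(n.bit_length()) if n & (1 << i)]

def unbalanced(rows):
    counts = Counter(c for row in rows for c in clusters(row))
    return sorted(size for size, k in counts.items() if k % 2 == 1)

print(unbalanced([1, 3, 5, 7]))   # []  : everything cancels, the board is balanced
print(unbalanced([1, 3, 5, 6]))   # [1] : a lone 1-cluster is the odd one out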

Let's revisit XOR now. XOR in general means: "We're good as long as exactly one of the options passes. No more, no fewer. One."

Either you eat an apple, or you eat an orange, or you eat a grape.

Feel free to skip the next paragraph if this is crystal clear.

This rule is being followed if you eat an apple. It's being followed if you eat a grape. It's being followed if you eat an orange. It is not being followed if you eat nothing. It is not being followed if you eat an apple and a grape, or an apple and an orange, or a grape and an orange, or an apple and a grape and an orange. Just want to be complete about this XOR example! Many things are obvious until they aren't.

Apple XOR Orange XOR Grape. Pick a fruit and stop, and you're following my rule. Take exactly one.

The player who picks up the last matchstick loses, so perhaps you're thinking ahead and you see how that implies XOR. I'll give you a hint: the last. But it doesn't work how you'd expect. It works in the opposite way.

For our Nim strategy, the XOR we use means something like: "If there's exactly one uncanceled group for a given group size, then that's good for Satan." (I mean our opponent, sorry. Just trying not to mince words.) Basically, we want to break XOR every time it's our turn. We don't want to hand the matchsticks over in an XOR-abiding situation. We want to keep the rule broken for our opponent every time they see the board on their turn. There should be no lonely "exactly one" cluster. All the groups should be balanced, fully canceled out. It's a little counterintuitive, because at the very end, the moment our opponent loses, they'll be looking at nothing but an uncanceled group of 1 matchstick. So here's the one flip the ending forces on you: keep handing over balanced boards until the turn when your move would leave nothing but single matchsticks on the table. On that turn, leave an odd number of them rather than an even number. Just this once, you hand over an XOR-friendly board, and now your opponent is the one stuck with the rule: the player who picks up the last matchstick loses.
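Here's the whole recipe as a sketch in Python, with function names of my own invention and rows numbered from 0: hand back balanced boards while two or more rows still hold two or more sticks, and make the odd-number-of-singles flip at the end.

```python
from functools import reduce
from operator import xor

def misere_move(piles):
    """Return (row, number_to_take) leaving the opponent in a losing spot,
    or None if no winning move exists from this position."""
    piles = list(piles)
    big = [i for i, p in enumerate(piles) if p >= 2]

    if len(big) >= 2:
        # Plenty of game left: hand over a balanced board (XOR of row sizes = 0).
        unbalance = reduce(xor, piles, 0)
        if unbalance == 0:
            return None          # already balanced: whoever moves now is in trouble
        for i, p in enumerate(piles):
            if (p ^ unbalance) < p:
                return (i, p - (p ^ unbalance))
        # (a balancing move always exists when unbalance != 0, so we never fall through)

    if len(big) == 1:
        # The flip: shrink the last big row so the opponent faces an ODD
        # number of single-stick rows.
        i = big[0]
        singles = sum(1 for p in piles if p == 1)
        return (i, piles[i] - 1) if singles % 2 == 0 else (i, piles[i])

    # Only single sticks remain: take one if that leaves an odd number behind.
    ones = [i for i, p in enumerate(piles) if p == 1]
    if ones and len(ones) % 2 == 0:
        return (ones[0], 1)
    return None

print(misere_move([1, 3, 5, 7]))   # None: the opening board is balanced
print(misere_move([1, 3, 5, 5]))   # (1, 2): take 2 from the row of 3
```

Asked about the 1-3-5-7 opening it shrugs with None, the polite way of saying that whoever moves first from a balanced board is already losing to perfect play; after your opponent disturbs the balance, it tells you how to restore it.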

This was the first computerized game because the strategy's logic is so simple to apply with XOR. And the game is still kind of fun, because most people won't figure out the strategy on their own. (I didn't. In a few dozen games trying to figure out how the computer kept winning, I worked out that it was probably related to even versus odd numbers. That's as far as I got.)

Maybe detailing XOR for Nim hasn't helped anything. But I will claim that it quickly becomes possible to see the strategy. And after all, maybe that suggests individual neurons are handling the XOR. Is that necessarily true? No, I see no reason to believe that. But when you watch your own mind grasp these either/or calculations visually, and looking for the odd group out becomes effortless, you get a more visceral appreciation of what those newly discovered individual neurons are doing.

mardi 7 janvier 2020

Too specific

One of the toughest things about being introverted, in my experience, is that you develop relatively uncommon passions. By spending a lot of time alone, you can discover the most intriguing things in the world, to you. The trouble is, you've spent a lot of time alone (not meeting people), and you've discovered something rarely appreciated. Now you realize that no one you know gives a shit, at all, about this thing.

For example, I absolutely love microtuning. And saying that, I realize that probably most people don't even know what that is. And I'm not trying to be weird. Remember? I discovered my fascination with microtuning by spending a lot of time alone, specifically not interested in what anyone else found interesting, really, but in what I found interesting. I feel that many people have this sort of assumption that your interests are pitched to impress them. And if they aren't impressed, they resent that, or you. So I don't really talk about microtuning, even though it's one of the greatest things ever. And just now, the prompt for this screed, I was looking regretfully at a Wendy Carlos video on YouTube whose tab was still open, because I loved the video. What could I do with it? Who could I send it to? Who would like it? Sadly, I don't know anyone who would like it, probably. If I do, I can't think who it would be. Oh, I can drag up a few people who would semi care. But it would be a case of them trying to humor me; I thought they'd like it, and they kind of find something to say, or they try but don't. (Here's the video. It isn't actually about microtuning, it's about synthesizers, but Wendy Carlos is a big pioneer of microtuning. Here's a great article she wrote on one small aspect of microtuning, one of her own discoveries, actually three of them.)

The television series Atypical shows this pretty well, in a way I find extremely relatable as an introvert. The main character is absolutely obsessed with penguins. He loves all the details. He's an encyclopedia on penguins. He wants to share his love for penguins. No one gives a shit at all. They look at him like he's an idiotic nuisance. He has autism. But I feel like he's me, just with a layer of social anxiety removed. If I were on the screen as him, I'd talk about penguins 1/5 as much and feel really anxious about how everyone thought I was boring and annoying and oafish and interested in penguins for some sort of dark reason. Either way, the fact no one seems to share our interest is this big social/emotional problem, in the end.

I guess maybe part of the difference is I try to get people interested? When tutoring, a lot of what I'm doing is trying to make my enthusiasm for a topic infectious. In my eyes, it isn't just "I like imaginary numbers." It's "imaginary numbers are just objectively incredibly awesome." And sometimes I do succeed. Students are often happily telling me, "Wow, I never thought of it that way before." Friends tell me that I have all this energy and enthusiasm for neat stuff, stuff no one else talks about, and they're always learning stuff from me, and they love it. At least, they say that when they're being nice to me ;)

lundi 6 janvier 2020

Start

During a transcontinental video chat with my brother, we talked about The Oregon Trail for a few minutes. We are technically Millennials—after all, the word describes children set to graduate from high school in and around 2000, that is, youths expected to leave home for college at the turn of the millennium. The word was coined for, or first spotted in, a newspaper article in the mid-80s. A class of 2000 kid, I'm as originally Millennial as you can get. And I like that. Some people ask me why I like that.

We first-issue, Ice Age Millennials, we're a little different from the canonical Millennial born near 2000, and we're sometimes not included in the category. Historians seem to consider us distinct, or distinct enough: a "micro-generation." And my favorite name for this micro-generation, bar zero, is "The Oregon Trail Generation." We did. We did, in fact, as far as I've ever witnessed, all play The Oregon Trail in school. We have that in common. We also saw the internet appear under our noses and bike helmets. Most of us remember when no one had a computer in their home, and remember the transition to a few, and then a few more, and then many more, and then everyone having computers, and then computers on the internet and the same adjustment with phones. We grew up right when that change was happening. We saw the tsunami hit. We remember hearing about the tsunami on its way. We remember never having heard of the tsunami.

Computers, of course, had existed for a long time, and even existed in homes when we were born, just not in most homes. Computing was a fringe hobby or a thing to do at work for a small part of the population. You could comfortably go through your first several years of life and have no idea. Even though both of my parents worked in telecommunications, teaching IT and the fundamentals of the internet to professionals around the country (we often stayed with the neighbors upstairs when both parents were off in other states giving seminars), I never saw a computer until I was in Kindergarten. Even though they had met each other at work when they both coded in assembly language (my mom was this amazing engineer called in to fix a router no one else had been able to fix, and she did fix it, which apparently took electrical meters and solder), I never saw a computer until I was in Kindergarten. Alphabet soup about "o-s-i" and "t-c-p-i-p" and "packetswitching" and "networkprotocols" and "itsoverethernet" at the dinner table aside, I didn't even know these gadgets existed.

The Oregon Trail is a surprisingly old game. We think of it as a cutesy educational sim about dysentery and fording rivers, as edutainment dull enough to be vetted by our teachers, ratified as a classroom pacifier. But actually it's one of the earliest of all computer games. It was drafted from paper prototypes in 1971 by a college student teaching his first class, 8th grade history. To help his students understand Westward Expansion (for foreigners, that mid-1800s swarm of pioneers setting out across North America in wagons), he came up with a game idea, a game that could be played with cards. History might easily have swerved down the normal road, but the school had access to a time-share teletype computer, the HP 2100. Wait a second. Most younger people don't know what teletype is, so I'll give you the rundown: you sit at a typewriter, and you type on paper, directly onto paper, as you normally do on a typewriter. But magically, in the background, somewhere, somewhere way out of sight, perhaps in the basement, perhaps not in the same building even, a computer catches drift of your typing, and it grabs control of the typewriter and types responses back to you. It's like using a command line, only you type on paper with a typewriter, and sometimes it springs to life and types back to you. This leaves a paper record of your entire "conversation" with the machine. Get it? (I've never used teletype. I hope this is accurate.)

That's what the original version of The Oregon Trail (called, at the time, just OREGON) was like when it ran in class on December 3, 1971. For a little perspective, many people consider PONG the first video game. PONG came out in 1972.

The truth is, PONG was not the first digital game at all. But for many, many people, it was the first one they ever saw or played. That's because it was the game that made it into arcades and living rooms. It was the first blockbuster. In fact, arcades were practically thrown together from scrap metal around PONG. Well, arcades as you think of them, as places you meander and play video games in cabinets and people scoot in next to you and join in your personal fray with a coin, sometimes asking, sometimes not, just interrupting mischievously. You find yourself two inches away from a stranger—a new friend?—who is there specifically to trounce your best efforts, sometimes giving you a tip or two, a kind word, a boast. Reaching deep into mist: I remember arcades—mostly at the mall, malls would have them. Now they don't. It isn't something I think of often, but I remember asking my parents for change and going off into a bewildering dark room full of strangers and spots of bright light and all kinds of sound and shouting. It was scary and thrilling. It was an arcade. And these dark-lit rooms were everywhere, but I remember when the entire concept and its peculiar outline struck me: that's what this was, I'd seen it around and never fully taken notice, this was an arcade. This was where bigger kids went to play and socialize. How many of them in here knew each other? Who was I and what was I doing? It was a little intimidating.

The Oregon Trail predates that entire culture. Amazing, isn't it?

Mostly, I missed that culture—it was a little before my time, you see—but OREGON threw a curveball around the decade. Its name clung like a burr to a generation. The world changed shape just so, and an idea stuck in all our minds.

Now, here I have to admit that by saying The Oregon Trail predates arcade culture, I can only mean the first version of what would, in wanderings round the sun and release cycles, become The Oregon Trail of 1985, a game with animated graphics and awful music that had gotten all the important parts of its design finally right, and was adopted everywhere and widely copied, both by its publisher MECC (The Amazon Trail, anyone? Africa Trail? The Yukon Trail? me neither, but they exist) and by others scratching their chins thoughtfully. Nevertheless, the session that ran on December 3, 1971 was the same game at heart. Anyone who played the breakthrough version would recognize it.

The version we played was the 1985 "breakthrough" one on Apple ][, or maybe the 1990 one on PC, or both. (They seem equally familiar, and they're almost identical.) Funnily enough, I played very, very little of it. But not all experiences must be plural. If you met an astronaut as a child, you wouldn't hedge and say, oh, you only met the astronaut briefly, only a handful of times. YOU MET AN ASTRONAUT. We arrived in Oregon once or twice at least. It worked on me, somehow.

In 2014, TIME gave its game of the year award to 80 Days. At first I didn't make an association with OREGON, but foreshadowing there thickly is. Instead of a rattlesnake-infested continent in 1848, you plot around the globe of an alternate, steampunk 1872, using boats and cabs and zeppelins and other vectors to infect another country with your... wait, wrong game. The tally of all possible itineraries is more than half a million words, written predominantly by British-Indian Meg Jayanth, who works in subtle comments on gender and colonialism. It's the best interactive story-game-travel-thing I've ever played. Interactive travelogue? Travel emulator? Yes, the best of those, and ultra-impressive as an example of interactive storytelling that is also undeniably a game. The blend is unusually natural. A year later TIME named The Oregon Trail the 9th best video game ever. We can, of course, take everything they say with a grain of salt, because they named Tetris the best video game ever.

The mini-lecture I gave my poor brother over Messenger video chat led to his asking what actually was the first digital game. And I didn't know off the top of my head, partly because it really depends how you define "the first digital game." There are a number of answers, depending on the definition you choose. So, climbing a few ladders through my psychic library of images from "History and Future of Immersive and Interactive Media," a class I took a few years ago in graduate school, I drew out Spacewar! as an example of just about the first, the first true video game, something everyone would agree was exactly that. But I wasn't quite right, and I knew I was dropping my notes off the ladder. If you want live, animated graphics to interact with, the honor goes to a game that was created about four years before Spacewar! (1958 versus 1962). That game is Tennis for Two.

Time for a deep breath. It's nice to pause.

Both of those games, both of the two earliest definitely-video games, are actually quite pretty to watch. You may be surprised to hear that Spacewar! has 1024x1024 graphics, and that doesn't do it justice at all, really. The game ran on an air-traffic control screen, which is why the resolution was so high for 1962 (Jiminy, boatswain, even for 1992). The phosphors in the screen produce a dazzling, scintillating, cloudy contrail effect that just firing up the code won't show you. And it ran on the DEC PDP-1, which was the state of the art in computing. Three people created this game at MIT specifically to take the PDP-1 to its absolute limit. It uses every feature and every bit of sail in the new flagship. It was applied not just for fun but also as a "smoke test," a way to make sure the machine was in perfect working order. Spacewar! is the absolute unblinking state of the art in computing for 1962. And maybe that's why it's still incredible. Here, watch this video if you don't believe me.

Tennis for Two is also entrancing. We didn't have a bad beginning. You can watch it here.

All right, but I was getting at something under the surface. Computer games aren't all graphics, you know? We were just talking about The Oregon Trail, a teletype game from 1971, and we were happy to consider it a game as much as its 1985-and-later descendants, which eventually sported lessons on flora, ancient photographs, recorded voices, and actors in clothes from the era. The Oregon Trail of 1971 could only type on paper. Yet it was clearly a digital game. So what was the first? Where do digital games begin?

Well, this is where I also mentioned the Geniac and Brainiac, two home hobby kits for children who wanted to build computers in the mid & late 1950s (or, more likely, for their parents who wanted to give them a head start and take a crack at this themselves). Designed by Edmund Berkeley, who had created Simon, often called the first personal computer, several years before, these were not "real" computers in the sense that they didn't have a CPU or transistors or even vacuum tubes. They were little more than circular breadboards made of sawdust. Figuring anything out with one of these was like stringing together an electrical abacus from spare Christmas lights. But they could compute, surprisingly, and a book of activities came with them, games for hobbyists to manually "program" into the computer with wires. These lessons were presented as experiments, with names like "EXPERIMENT 38: THE FARNSWORTH CAR POOL" and "EXPERIMENT 11: THE MANGO BLOSSOM SPECIAL." One of the games, "EXPERIMENT 49: THE URANIUM SHIPMENT" (see page 50 here and better yet this video), was recently uncovered as the earliest known digital interactive story. Actually, most people would call it analog, not digital, though it operated on bits, so I'm unclear on whether it should be called both analog and digital, or just one. (Update: technically, it's digital. By its appearance and operation by manual rotation, I think we can be forgiven for thinking it's analog.) Either way, it was the first changing story mediated by a computer. Certainly it didn't have graphics, but also, certainly, we've got to agree it's more primordial than The Oregon Trail.

What is the first code ever written for a game? Now that's a story.

It was the Ur computer opponent. Before anyone knew a computer could play chess, someone was working intensely on making that happen. Doing fundamental research in machine learning about a decade before "AI" was coined for this pursuit, Alan Turing gradually wrote a program called Turochamp with another mathematician, David Champernowne. You may recognize Alan Turing as the protagonist of The Imitation Game.

Turochamp was so experimental it couldn't handle all the rules of chess, just a subset. It could play this watered-down version of chess by looking two moves ahead at all possibilities. But it was so demanding compared to 1948-1952 computers that Turing simply couldn't get it to run. He kept trying but never succeeded.

Not to be defeated, he challenged his friend to a game of chess. And he pulled out some paper. He had brought his code, a stack of pages sitting next to him. They started playing. When it was his turn, he would trace through the code by hand to find what the algorithm said to do. It was a bit like an old mechanical Turk, those contraptions that supposedly could play chess and defeat royalty and statesmen and master players in the 1800s, always turning out to be a hoax with a guy sweating inside the robot (or "automaton," as the word "robot" comes from a 1920 Czech science-fiction play called R.U.R., short for Rossum's Universal Robots, or Rossumovi Univerzální Roboti before translation) pulling cables, only Turing wasn't hiding. He was sitting across the table from his friend, moving the pieces himself once he knew where to move them. That is, once he'd determined what the code said to do.

It would take him half an hour or more to calculate every move, whenever it was his turn.

His friend won in 29 moves.

Tragically, Alan Turing died in 1954, about two years later, still never having gotten the code to work on any of the sorcerous machines he had helped invent. The code itself appears to have disappeared, though the basic approach it took is well-known. Not incidental to our larger story, Turing died by suicide, cyanide poisoning, reportedly delivered by a half-eaten apple found at his bedside. This could be interpreted as a reference to Snow White or to Adam and Eve and the fruit—that is, perhaps to the religion whose presiding morals led to his mistreatment. Many say, but I cannot confirm, that Turing's apple, bitten and laced with knowledge, is the origin and meaning of the Apple logo. (Hi there, cute little Apple ][ running The Oregon Trail.) In 2013, the Queen of England officially pardoned him for his criminal conviction for homosexuality, which had resulted in his chemical castration and probably also his chemical suicide.

In 2014, The Imitation Game, about Turing's life and what he did for the WWII war effort, saving countless lives by code-breaking Germany's Enigma cipher, won many rounds of acclaim internationally. In terms of his personality, it is not accurate, but it's a great and broadly true story. The title refers to his famous Turing test, a baseline concept for roboticists and lifelike AI developers ever since. The test, in short: What percentage of participants would believe an AI meant to seem human was human? Can we tell the difference? Can we be fooled by an imitator? This idea is made extremely vivid in the movie Ex Machina, which has an ending that can be read as misogynist, but I think that particular interpretation misses the mark, though it's interesting and very much worth discussing (some of why we have fiction). Anyway, the resident math consultant on the crew of The Imitation Game was a well-known game designer who also happens to be a mathematician and cryptographer: Jon Ingold. In fact, the world is an exceedingly small place. Ingold was instrumental in creating 80 Days, specifically as one of the project's two directors and the inventor of the story system it runs on.

Curiously, he had created a little experimental game for that system called The Intercept in 2012, either before or during his collaboration on The Imitation Game. Both are about Alan Turing's work at Bletchley Park. There's so much overlap between The Intercept and The Imitation Game that when I saw the latter, I felt certain that the teacup in the interview scene was a direct reference to the game. It was only when the credits rolled that I saw Jon Ingold's name there and couldn't believe my eyes. But it's true. Same guy. And that was a real reference in The Imitation Game to a tiny, obscure, but rather brilliant bit of interactive storytelling (or vice-versa?). Check it out. You can play it in five minutes, and the meanings of moments along with the paths to them change when you replay and choose differently. It's surprisingly clever.

In 2012, about when The Intercept was spun off and The Imitation Game was in development, a historically reconstructed version of Turochamp (remember Turing's and humanity's first code for an AI opponent and arguably the first computer game?) played Garry Kasparov, this time running correctly on an actual computer. Kasparov won in 16 moves. The win can't have surprised him much! He'd gone from the top of a mountain, his chess matches casually and regularly mentioned on the evening news around the world, to defeat by one of these highly suspect mechanical Turks: IBM's Deep Blue took a game off him under tournament conditions in 1996, then won their rematch outright in 1997. That success had taken nearly fifty years of AI development after Turing's and Champernowne's first draft. Presumably, Kasparov had beaten every computer nemesis he'd faced before then. But on those days in 1996 and 1997, the great imitator, the computer, had prevailed. Kasparov, the world champion at the game, once refused to believe it would ever happen, famously. Then he had to believe it when it happened. And there was no going back from that shift—in a symbolic sense losing chess for the entire human team. And so this mirror image of a rematch in 2012, a piece of time travel like a Terminator appearing before it was created, was a unique moment. He gave a speech afterwards, saying, among other things, "I suppose you might call it primitive, but I would compare it to an early car—you might laugh at them but it is still an incredible achievement."

So that's the origin story of the computer game. It finally came out in 2012, and the world's greatest (and also most eternally defeated) chess player beat it, but didn't really. Whether you would call the world's very first example a video game or not (I would say clearly not, as no graphics logic was ever involved), and whether you care that a curious gonkulator preceded it in 1947 as the first "interactive electronic game" (this really was an analog mechanism, not a digital processor, and rather more tinker toy than game) or a custom-built Nim player in 1939 (no code and not a general computer, but electromechanics that could do just one thing, as if "EXPERIMENT 49: THE URANIUM SHIPMENT" were burned into a Brainiac that could never change its tune; still, technically a digital computational game with light bulbs instead of graphics, built from relays mere months before the first real computer), Turochamp is today indisputably the first code of any kind written for a game. Because it was never produced, we could also call it the first example of game vaporware.

Let's take stock of the panorama, then. There were various efforts to run simple board games electronically—Nim, tic-tac-toe, checkers—using displays made of bulbs (starting in 1939). Turing and Champernowne wrote the first game code, also for a board game, this time a simplified chess (1948-1952), kicking off a strange and lengthy saga. And Tennis for Two (1958) and Spacewar! (1962) ushered in what we'd all recognize as video games. In terms of creative expression written for a participative machine that could do other things, the common ancestor of The Oregon Trail, PONG, The Uranium Shipment, SimEarth, Grand Theft Auto V, Braid, Undertale, Soma, Five Nights at Freddy's, 80 Days, Sunless Sea, Divinity 2, Surviving Mars, and Where the Water Tastes Like Wine, to name just a very few examples, sprang from the restless mind widely regarded as having created the computer and the field of AI. Maybe that isn't a coincidence after all.

Myself, I suspect that games began—before any system of writing, as we know from artifacts—like the instinct of play itself, as a way for us to learn about what's outside of us. And I suspect they will end as a way for computers to learn from what's inside of us. Instead of playing just to learn, we eventually play to teach. But that's a story for another day.

Have you gathered up and pieced together, with just a few touches of superglue, why I'm proud to be called one of "The Oregon Trail Generation" and why I prefer to be included as a Millennial?

See, Alan Turing was right about several things. He was right in calculations that broke a secret code and shortened a world war. He was right about AI before there was a name for it. He was right to be gay when it wasn't allowed. He was right about the deeply universal nature of computers before any computer even existed. And he was right about this: games are important. Not "important" as some fool's stray pomposity, though; "important" means nothing by itself. The form is endlessly ready to represent ideas and feelings. It gives us voice to simulate and explore and tinker with natural and artificial systems, real and imaginary, past, present, and future. It teaches and re-educates us about working together, about improving patterns that aren't good enough. No high count of skeptical parents, time-sink releases, derogatory references to wasting life playing games, or starched suits to impress will reduce that essence. I'm proud to be in The Oregon Trail Generation because I understood. The more of us who understand, the better, and better loved, our Earth can become.

samedi 4 janvier 2020

Are you magnetized?

I've had so many conversations with friends and family and students and colleagues about political polarization, and I find it a fascinating topic. What's essentially happening, we wind up agreeing, is cognitive dissonance in a roiling stew of all the human biases we all inherit.

Cognitive dissonance is a close cousin of an area of psychology called "attribution theory." When Star Wars Episode I came out in 1999, many people saw it and said they loved it. For example, I'm not a huge fan, but I saw it a second time and even a third time in the theater, and that wasn't strange and I didn't have trouble finding people interested in seeing it again. After Episode II (I insist it's by far the worst), a tide had turned and most people said they hated Episode I. Both of these extremes resulted from many people unconsciously adjusting their opinions to track the way people they cared about felt. If attribution theory sounds at all interesting, I highly recommend reading more about it. But in short, it's about how we explain to ourselves how we feel about third parties (neighbors, clothes, theories, cities, countries, sports teams, foods, all the "nouny" stuff), and how we influence each other's evaluations without even realizing this is happening. On that note, attribution theory is even able to predict social polarization and illuminate why it happens using math (look up "Heider balance," well ok here). There's a natural underpinning to how, in countries whose constitutions do not recognize political parties, two big parties will form, rather than, say, three big parties, or four big parties. It's a reliable pattern. Attribution theory, cognitive dissonance, and Heider balance together explain why.
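For the mathematically curious, here's a toy sketch in Python of the structure theorem behind Heider balance; the code and names are mine, not anything out of an attribution-theory textbook. The theorem says a balanced network of likes and dislikes can always be split into at most two internally friendly, mutually hostile camps, which is the formal version of the two-big-parties pull.

```python
from collections import deque

def split_into_camps(people, ties):
    """ties maps a pair of people to +1 (friendly) or -1 (hostile).
    Returns a camp assignment (0 or 1 per person) if the network is
    balanced in Heider's sense, or None if no two-camp split exists."""
    graph = {p: [] for p in people}
    for (a, b), sign in ties.items():
        graph[a].append((b, sign))
        graph[b].append((a, sign))

    camp = {}
    for start in people:
        if start in camp:
            continue
        camp[start] = 0
        queue = deque([start])
        while queue:
            a = queue.popleft()
            for b, sign in graph[a]:
                want = camp[a] if sign > 0 else 1 - camp[a]
                if b not in camp:
                    camp[b] = want
                    queue.append(b)
                elif camp[b] != want:
                    return None   # unbalanced: no clean two-camp split
    return camp

ties = {("ann", "bob"): +1, ("bob", "cat"): -1, ("ann", "cat"): -1}
print(split_into_camps(["ann", "bob", "cat"], ties))
# {'ann': 0, 'bob': 0, 'cat': 1} -- two camps, never three
```

Three mutually hostile camps can't survive this kind of balance: a triangle of three negative ties is unbalanced, so one feud tends to collapse into an alliance, and you're back to two sides.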

A while ago my dad casually asked what cognitive dissonance really means, anyway. He thought he knew what it meant, but he wanted my psych major take on it. So here you go: strictly, cognitive dissonance is the discomfort of holding two clashing ideas at once; in practice, it's the way you reject ideas you disagree with before even realizing that you're rationalizing instead of processing the facts well. Instead of getting curious, which would be the most appropriate response to someone challenging what you believe, you go into defense mode like a box turtle. This thing you hear clashes with what you believe for sure must be true, and rather than experience this clash inside yourself, rather than feel shaken up, rather than letting that sit and percolate, rather than (with practice you can do this) enjoying the intensity of uncertainty and confusion, rather than exploring with questions or searching for evidence against your own view, you take a potshot at the source of this discomfort. You say something that makes the person look stupid, or like a bad person. You make fun of them, you make fun of the idea, you point out some detail you perceive to be a fatal flaw, and then you go about your day as if you just won. And as far as the dissonance is concerned, you did: you just avoided thinking, and you did it in a way that made you feel clever and right. Voilà, the very real and ubiquitous problem of cognitive dissonance. It's the uneasy feeling that prompts you to shoot the messenger, metaphorically or (God forbid in this day and age, but it's happened countless times in history) actually. I hope you can immediately see how this would lead to political polarization. We go all-or-nothing. It's easier. Still, it's nice to know that science has been examining this for decades. We know things about it. Knotty as it may be, the problem is—I am totally convinced—possible to solve.

I explained to my dad that, in my mind, cognitive dissonance is actually pretty relatable to musical dissonance. He loves music, so I thought it would help to add this. We use "dissonance" as a metaphor outside the world of music. When you're trying to sell someone on the need to upgrade equipment in the lab, you want to avoid "dissonance" in the message you give them. Your let's-upgrade-this pitch should be self-consistent. It should make sense. You don't want to tell them you really need this new microscope, but there's a better microscope coming out in two years, but you need it sooner than that. That's dissonant. In music, we have different preferences about dissonance. Some hate it and want all the notes and chords to be consonant, to be sweet-sounding, to avoid clashes. But without conflict, a story is boring. Songs and symphonies and so on, they use this idea of conflict as well: they will introduce dissonance, in other words conflict, and this will make you feel tense. There will be chords that sound tortured, or there'll be a lot of distortion on the instrument, or the regular beat will break down into chaos. And then the song will resolve the tension. Structure will rebuild itself. It'll be beautiful. Some people get used to a lot of dissonance. Some people wouldn't like music without it. For example, I actually kind of like the sound of a cat on a piano; my ear grabs bits of it that I like. I can hit any two keys on a piano and it sounds good to me, or any three or four, and I can add more random notes than that, and even if I don't think it's great, I can appreciate what I'm hearing, and perhaps tweak it until I really love it. So I have quite a strong taste for dissonance in music. It can be an acquired taste, but maybe for some it's natural. Anyway, to return to the topic at hand, when discussing ideas, it's actually very similar. We have different amounts of taste for, and tolerance of, "dissonance," that is, ideas that clash with each other and with our own beliefs.

But with experience, or maybe also by nature depending on your personality, you can appreciate the adventure of that conflict. You can enjoy it even when it hurts, like very spicy food. When you get to that point in discussing ideas, you're handling cognitive dissonance well. You can hear something you totally disagree with and find totally unacceptable, hear it out, discuss it, ask questions to learn more, etc, and you won't lose your mind or hate the person you're talking to. (Note: this is unrealistic and profoundly unfair to expect of someone who is being abused or oppressed, especially on the topic of their own oppression.) You will discover that this does not make your brain leak out of your ears, nor does it make you morally worse as a person. You can listen to the "Devil," get how the Devil is confused, and walk away not as a bad person, or a person convinced by vile views, but as a person who can understand how a human would think this way. People who are wrong tend to think they're right, just like you do. And by the same token, with such strong practice becoming a routine, you'll also be open to new ideas that seem astoundingly wrong at first yet are actually right, and you'll be able to hear criticism that's overly harsh and still derive usable information from it.

Hopefully this description of cognitive dissonance in action (and the possibility of growing through it) is more helpful than a textbook definition that you'll simply forget after the test, as it were. Specifically, to recap, cognitive dissonance is that natural discomfort that leads to arguments and unwillingness to understand another position. It's the entire reason we have the phrase "don't shoot the messenger." It's why we seemingly miraculously ignore evidence and logic that don't support our preexisting notions (when they're important to us). Rather than reevaluate anything or admit to a sliver of ignorance (a good practice in a forward-moving debate), we lob verbal Molotov cocktails, either disguised as rational objections, or as direct "gotchas" about the disagreeable messenger.

So let me talk about how to use this idea constructively.

For example, let's say you're talking to someone who opposes the expansion of rights and protections for a minority group. For fun, let's pick first-generation immigrants. (I am one, though most people don't know that on meeting me. It isn't something I'm sensitive about at all, I'm just vaguely proud of it. But I'm admitting which side I'm on: I'm pretty much anti-borders and pro-kindness.) Rather than getting into a distracting confrontation with your conversation partner about whether a foreign minority is actually a group of real people (word to the wise: the person you're confronting will inevitably deny that this criticism of prejudice applies to them), I've found it's better to focus on less barbed ways of seeing. For example, the way I think about minority struggles myself is in terms of "being welcoming." It isn't just that we don't want to treat someone like they don't exist or aren't human. In the conversation you can start there if you want, at that low bar, but talk about yourself and your experiences and preferences, rather than accusing the person you're addressing. Know that if you accuse a xenophobe/racist of treating people like they aren't human, they'll usually deny it, and some of them will even totally believe what they're saying. Curiously, people have a hard time asking themselves whether they're treating someone as if they're human. Frustrations can lead to resentment, and resentment can lead to disregarding someone's subjectivity to the point of actively hurting that person without really recognizing it or caring. Wherever it was that I first heard the treatment of racial and other minorities framed as a question of "being welcoming," I'm grateful for hearing it that way, and I'll try to share that way of seeing. Whether you think someone deserves XYZ or not, the question of whether we're being welcoming to people not like us is actually pretty easy for most people to understand. It raises the bar and it's easier to think about at the same time.

By talking about "being welcoming to people who are not in the majority," we avoid harshness that would heighten the cognitive dissonance someone is already feeling and fleeing instinctively. It comes down to seeing the existence and normality of cognitive dissonance, knowing for sure it will arise (with some appreciation of why), and then using that understanding to communicate with someone who likely wouldn't listen to you if you just gave it to them in plain words without any icing. You may not agree here, but I'll make the claim: if you swapped brains with each other, you wouldn't listen either. While directness is great, and I'm a big fan of directness, there are times when directness will totally fail. Usually that's closely related to cognitive dissonance. Don't thank me now. Just use this truth as well as you possibly can.

It'll make a difference. It can change minds in ways yelling never will. People who are experts in getting through to racists and religious extremists know this. Compassion, openly discussing what you otherwise find to be a hateful view and trying not to judge it, may seem totally misplaced, but it works much better than what you want to say. If you want to say something, tell a personal story and admit that it's just your experience. That's how you can share how you feel, and why, without putting someone's defenses up so high that cognitive dissonance closes their ears and eyes. If you disagree with someone intensely, provided of course that you can manage this, what you need is to show that 1) you care about what they think and feel, 2) you want to understand them and their view and why the world seems that way, and 3) you aren't judging them harshly. Ask questions and follow-up questions. Don't expect a sudden change, or even any change at all. Don't crowd potential like that. You may be surprised to find, though you shouldn't expect it, that this person cares about your stories and possibly even your opinions and your reasons for holding those opinions. They may listen without objecting and seem interested. After you care about their views and see them as fully human, see? Not before. And the good thing is, relating to people is much more pleasant for everyone involved than a shouting match followed by two people canceling each other.

This is in the spirit of what's been called "nudge theory," which is admirably descriptive. But it's really just understanding cognitive dissonance and applying that understanding. Often the people who most need to hear your words will hear them least, unless you take steps.

Everyone's in some kind of majority and some kind of minority. Everyone. We do not have equal experiences of this at all, but we all have some experience. Just about everyone knows what it's like to feel everyone is against them on something. For that matter, the person widely accused of racism or sexism or nationalism or classism or nepotism or despotism or any bad -ism, that person will probably be feeling singled out, will be feeling they're in a position of, well, misunderstood and unappreciated minority. Odd as it is to think of it like this, they may feel like they're standing up valiantly for the truth, even against the whole world. And if they don't feel that way, they can probably remember at least one time in life so far when they did feel embattled like that. If you are ever in the situation of talking to this person, telling them about a moment you have experienced or witnessed could bring them an opportunity to make the connection between the way they feel (misunderstood and judged and treated like they aren't fully human), and the way others feel (misunderstood and judged and treated like they aren't fully human). It's just... there are ways to bring an opportunity that have a good chance of working, and there are ways that have almost no chance whatsoever of working.

I hope that isn't too barbed a way to say it, "almost no chance whatsoever." I hope I haven't given you too much cognitive dissonance—leading to defensiveness—about this ineffective approach we all tend to use before understanding, but I'll leave that concern in as a question for you to consider. Maybe I have stirred up too much dissonance. We need to ask ourselves these questions. And other questions, too. Our best weapon against cognitive dissonance is simply asking questions.

Which way do you choose?

Why?

What is your evidence that it works?

And how are you looking for a better way?