Whither consciousness?

We’ve been able to manipulate our consciousness since the beginning of human existence. It’s always been easy and natural. I’d argue that eating and drinking change the state of our consciousness. So do having sex, fatiguing ourselves, and eliminating bodily waste. As we do these things, our consciousness goes through a cycle of tension and relief.1 Anxiety, fear, and pain also change the state of our consciousness, but in ways that can permanently affect our perception of the world and hence our behavior. Unfortunately, that behavior often includes consuming alcohol, narcotics, and other drugs that again change our consciousness, at times with disastrous consequences.

In the modern world, bodily functions and negative stimuli still push our consciousness into different states, but they reveal next to nothing about the future of consciousness. It’s technology that most often drives changes in how we see ourselves and the world around us. It’s technology that tells us where consciousness is going. We see the truth of this observation when we consider the consequences of the Industrial Revolution and a succession of innovations — electrification, telephones, automobiles, flight, radio, television, online computing, life-extending medicine, and life-extinguishing weaponry. Each of these innovations forced an adaptation that created a novel consciousness. It’s certain that the intertwining of technology and consciousness will continue and be amplified until we reach human extinction.

Artificial intelligence is the technology destined to have the greatest impact on our consciousness. We already see its power. Last April, Chicago held a runoff election to determine its next mayor. The Democratic candidate was favored but narrowly lost. Days before the vote, an audio recording circulated in which a voice, presumably the candidate’s, could be heard condoning police violence. The voice was a deepfake created by AI. Undoubtedly, a significant number of Democratic voters were disaffected. And earlier this year, sexually explicit deepfakes of Taylor Swift were posted on X (Twitter). For a short time, millions of her fans and detractors were manipulated by a lie. The science of disinformation is still in its infancy. When it matures, we’ll need a countervailing science to give truth a foothold in our lives.

Three years ago, I wrote about how AI, coupled with robotics, would cause so many displacements in the world’s labor force that all the political and economic models in use today would crash. What I want to emphasize in this post is the impact these displacements will have on our consciousness. Most of us will do little or no work. Labor has always been the chief yardstick for measuring self-worth, but no more. Our lives will be supported by the productivity of AI robots that plant, grow, reap, manufacture, package, deliver, cook, serve, clean, teach, advise, design, engineer, build, install, maintain, repair, diagnose, treat, and entertain. The search for meaning and self-respect will be more urgent than at any time in history. What will we do to escape a state of consciousness fraught with tedium and self-loathing?

It wouldn’t be surprising if religion and mysticism had a new awakening, but I think it’s equally likely that a fraction of humanity declares their desire to forge a new civilization alongside AI robots. This minority will demand to be genetically retooled to become the sensory and cognitive equals of the robots. For good measure, they’ll ask surgeons to implant neural chips to boost cognition further.2 It would make sense for the chips to empower people to communicate with their AI “colleagues” telepathically. In this brave new world, human consciousness would only distantly resemble what it is today. In fact, we probably could no longer call it “human” consciousness.3

________________________

1. I expect some disagreement on my claim that these are states of consciousness. But how else should we regard a stress or tension that rises to the level of awareness and occupies our minds until relief comes? Yes, I also think that sitting in a dentist’s chair and getting a tooth drilled is a state of consciousness.

2. Neuralink, a company run by Elon Musk, implanted a BCI (brain-computer interface) into a person’s brain last January. It’s fully within the brain, not on the skull, and wireless. These features reduce the risk of infection and make the implant practical for real-world use.

3. My friend Gary pointed out that the advent of telepathy would create a “hive mind,” giving the enhanced humans and AI robots a collective intelligence. He thought of the Borg, the oppressive collective portrayed in Star Trek: The Next Generation. If this state of consciousness is a consequence of telepathy, it would be essential to turn it off at times. Otherwise, the pronoun “I” would disappear and with it any possibility of privacy.

A change of perspective

It looks like my father was right about UFOs. Back in the Fifties, the decade of my adolescence, a spike in UFO sightings made him a believer. “I think visitors from another planet are having a look at us,” he said with enthusiasm. Not dread or concern — enthusiasm.

As it happened, the school I attended was introducing me to science, the hardest of hard-nosed disciplines. I said, “Dad, that’s just not possible.”

My certainty startled him. “What? You doubt there’s other intelligent life in the universe?”

“No, just the opposite. The universe is so vast that it’s mathematically impossible for us to be alone.” He seemed relieved that I wasn’t an ignoramus.

“Good. And it also stands to reason that some of the sentient beings out there are below our level of development and some are beyond it. In fact, some must have technologies that far exceed ours.” I agreed, of course.

“So, their visits are possible,” he said with satisfaction.

Now I had him. “Dad, it doesn’t matter how advanced they are. There are statements in physics called laws. They are the rules that all the matter and energy in the universe obey. Einstein showed that nothing can travel faster than the speed of light — no atom, no comet, no spaceship. And if any spaceship approached the speed of light, its mass would grow to the point that any life aboard would die. But suppose a UFO somehow could travel near the speed of light. Its voyage would be doomed because the planets that orbit distant stars are so far away that the journey to Earth could take thousands or millions of years, even at that extreme speed. And how would they know the Earth was worth visiting? Even if they had a miraculous telescope that could show closeup TV pictures, the pictures would be of Earth as it was before any human had been born. They’d have no way of knowing we were here!”

Surely he could have no more to say, but he did. “What’s a law today may not be a law tomorrow. Remember, we’re imagining a civilization that may have practiced science much longer than we have. Say, a million years longer. It’s hard to believe that what limits us today will still limit us in a million years.”

He didn’t shake my skepticism, which persisted for decades. I did get a bit rattled when string theorists started talking about 11 dimensions of reality and when I began learning about the claims of quantum physics. I was fine with molecules and atoms, and even with quarks and gluons, but superposition and entanglement seemed a bit much. Next the Higgs boson came along, a particle that imparts mass to other particles. Modern physics was clearly in post-Einsteinian territory. Nevertheless, in all that new science, there was nothing to give hope to UFO enthusiasts. Not until October 19, 2017 came along.

That was an exciting day for astronomers at the Pan-STARRS1 telescope on Maui. Their mission is to track comets and asteroids in Earth’s vicinity. They thought they’d caught sight of a comet, but one unlike any that had ever been seen. It wasn’t outgassing — leaving an evaporation trail — as comets do. On the other hand, it wasn’t an asteroid because it was accelerating on its course out of our solar system. Its shape was elongated, about a quarter-mile long and a tenth as wide. No one had ever observed a celestial object with a similar aspect ratio. Its trajectory showed it had been traveling for hundreds of millions of years from one of four possible star systems. Maybe it was something manufactured, like a space probe. Holy crap! That would mean an extraterrestrial intelligence existed! In this spirit, it was named ‘Oumuamua, Hawaiian for “scout.”

Ever since, cosmologists have been torn between two camps. One holds that ‘Oumuamua is very rare but nevertheless an object found in nature. The other holds that it is alien-made. Of the explanations offered by the former camp, all but one have been discarded. The survivor holds that ‘Oumuamua is a glacier of frozen nitrogen that was flung into space when a planetoid was destroyed. Supporters point to the fact that nitrogen glaciers have been observed on Pluto. Hmm… Maybe I missed something when I read this. A dwarf planet explodes and shoots debris into the cosmos. One piece is a huge chunk of frozen nitrogen. An eon later, that improbable chunk enters our solar system and accelerates on its way out. Oddly, there’s no observable nitrogen outgassing. In fact, there’s no observable nitrogen! Laughable.

Even though ‘Oumuamua’s passage is a jaw-dropping event, reported UFOs presumably don’t arrive here after an eon of travel. I don’t claim to be a reader of alien minds, but it seems unlikely that an intelligent creature would sign up for such a voyage. So the problem of the transit time still needs to be addressed. It happens that I found a news story that does exactly that.

Einstein showed us that space and time — spacetime, as he called it — are curved by gravity. Substitute other verbs if you like — warped, bent, folded, compressed, dilated — all of them are apt descriptions of what can be done to spacetime. With this as their inspiration, a team of NASA engineers and physicists has demonstrated mathematically that if a bubble of negative vacuum energy could be formed, its attraction to the surrounding positive vacuum energy would create a means of propulsion by compressing the spacetime that lies ahead. Potentially, enormous distances could be traversed in very little time. The bubble would not move faster than the speed of light, but it would cause the distance between here and there to shrink by orders of magnitude. The NASA team plans to create such a bubble, beginning on an extremely small scale.
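
The post doesn’t show the NASA team’s math, but the description matches the “warp bubble” spacetime that Miguel Alcubierre proposed in 1994 — my reference point here, offered as background rather than as the team’s actual formulation. For a bubble moving along the x-axis, Alcubierre’s line element is:

\[ ds^2 = -c^2\,dt^2 + \bigl(dx - v_s f(r_s)\,dt\bigr)^2 + dy^2 + dz^2 \]

where v_s is the bubble’s speed and f(r_s) is a smooth function equal to 1 inside the bubble and 0 far outside it. Spacetime contracts ahead of the bubble and expands behind it, while a ship inside the bubble feels no acceleration at all.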

So I can now imagine an advanced alien civilization becoming the masters of spacetime and journeying to our world and back in much less than a lifetime. This leaves the question of how they would know we’re here. It turns out to be easy to answer. We’ve just deployed the James Webb Space Telescope (JWST), a device that can capture images of distant objects as they were when the universe was very young. The data it collects will lead to discoveries that were never before possible. For example, we’ll be able to determine whether any of a planet’s reflected light is artificial. Artificial light is a technosignature — a phenomenon that indicates the presence of intelligent life. We’ll also be able to detect chirality. This is the property of a molecule whose mirror image is not the same as itself. Chiral molecules, like proteins and nucleic acids, are the building blocks of life. A civilization capable of interstellar travel would surely be able to launch telescopes toward dozens of star systems. The telescopes would be superior to the JWST, probably more compact, and programmed to detect life.

The clinchers for me were an article published in The New York Times in 2017 and a “60 Minutes” segment broadcast last year. I came upon both a few weeks ago. The Times article, “Glowing Auras and ‘Black Money’: The Pentagon’s Mysterious UFO Program,” describes how credible people have battled with the Pentagon to conduct an earnest, open, and ongoing investigation of UFO sightings. The “60 Minutes” segment contains riveting interviews and videos that lead to inescapable conclusions about UFOs. I challenge any UFO doubter to look at these and come away with their convictions intact. It seems certain to me that we’re being visited from afar. My only uncertainty is whether there are living beings aboard. It may be that the objects are probes loaded with artificial intelligence. For the most part, that is how we’ve chosen to explore our solar system.

The discoveries we’re making about extraterrestrial life will accelerate in the next decade and seem likely to grow geometrically thereafter. We know what the consequences of these discoveries will be because we’ve seen them before. Once we thought Earth was at the center of everything. Then we learned that the Sun was the center of everything, and “everything” was our solar system. For a long time, espousing this belief was heresy. Then we learned that many of the objects we had taken for nebulae were in fact other galaxies, and the Sun was a relatively minor star in an outer arm of our galaxy, the Milky Way. Then we learned that, by the latest estimates, there are 2 trillion galaxies in the universe. Now there are mathematical models that predict the existence of multiple universes. Now astronomers tell us they’ve discovered nearly 5,000 exoplanets, and the counting has barely begun. A very small percentage of exoplanets resemble the Earth in size, mass, atmosphere, temperature, and the presence of chiral molecules. An actual twin hasn’t been found. Since evolution anywhere must track its local conditions, it follows that an intelligent life form on another planet will not look like a human being.

Throughout history, anything thought to be at the center of existence has been an ignorant guess. When it eventually sinks in that neither God nor humanity is at the center, it will be a bitch to stay grounded. To say these are unpleasant prospects is a grim understatement. The only remedy I see is to collectively forge a value system whose logic and appeal will form a new, durable center. That is likely the greatest challenge humanity will ever face — greater than controlling the climate, greater than extending longevity, greater than reaching the unreachable stars.

The magician

My instincts to the contrary, I’m becoming an optimist! What could have wrought this transformation? A person, as it happens. The name of this magician is David Deutsch. He’s a philosopher, writer, and Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation in the Clarendon Laboratory of the University of Oxford.

It’s not his thoughts on quantum physics that fascinate me. I have yet to understand them. Rather, it’s his enshrinement of knowledge as one of the most potent forces in the universe. Most scientists and laymen hold knowledge in much lower regard. Gravity, electromagnetism, radioactive decay, the binding force of subatomic particles — these are forces to reckon with! Ever since the Big Bang, they have dictated the course of the cosmos. Deutsch scoffs and says they’ve done no more than perpetuate 14 billion years of “The Great Monotony.” He says the cosmos didn’t get interesting until the genus Homo came along. It has been the only lifeform to concoct explanations for the world it experiences. The explanations, regardless of their correctness, constitute knowledge in Deutsch’s view. (Two years ago, I added the post “Stories” to this blog. “Stories” is Yuval Harari’s term for incorrect explanations. Deutsch would call them knowledge nonetheless.)

Deutsch assumes that in all explanations there will be a degree of incorrectness. We can think of them as occupying a gradient that goes from total bullshit to extreme accuracy. The explanation for any given phenomenon can move ever closer toward accuracy and presumably grow in its utility. For this to happen, an explanation must be buffeted continually by inspection and criticism. It must stand up to critical thinking. This rough treatment will result in rejection, replacement, or refinement. It will not result in final confirmation because the gauntlet never ends.

It follows that the accuracy of knowledge grows where civil liberty flourishes. The contrary is also true; if freedom of speech and press are threatened or suppressed, if dogmatism becomes virtue, accuracy regresses and darkness closes in. Will we ever reach a watershed moment when humanity invariably recognizes regression in the making and short-circuits it? I think so, but that moment is at best one to two centuries away. Until then, our grip on what is factual will remain tenuous, and the threat of extinction will hover over us.

It also follows that democratic societies have a survival advantage over autocratic ones. When critical thinking and novel ideas are prevalent, useful knowledge aggregates faster. Consequently, power does too. Unfortunately, no democratic society offers unequivocal equality of personhood and opportunity — at least none I know of. This is their Achilles heel, and they pay a high price for it.

One of Deutsch’s best observations is about memes and the role they play in Homo society. He uses “meme” in the sense that Richard Dawkins, the creator of the term, intended. That is, a transmissible bit of cultural information, in the same sense that “gene” is a transmissible unit of biological information. Memes have a role in the life of any animal capable of learning. For example, young chimps learn from older ones that they can get a snack by poking a grass stem into a termites’ nest. Similarly, human children learn that they can start a fire by rubbing two sticks together. It’s the “start a fire” meme in action. What makes humans special is their proclivity to focus on the results of a meme, ask what caused the results, guess at answers (pose explanations), and try to create the same results by other means. If successful, voila, a new meme is born! The progeny of the original stick rubbers realized they could make fire by creating heat, and there were all sorts of ways to do that. With millennia of persistence, they eventually discovered a way to burn hydrogen.

Knowledge has made us the consummate adapter. We live everywhere. We don’t need nests or burrows or caves or any other natural shelter. Our shelters are fabricated with heating and cooling systems, ready energy, running hot and cold water, entertainment centers, and waste disposal. Our existence and our knowledge are now inseparable, and they always will be. But suppose there were some way to decouple ourselves from rationally derived knowledge. We would have then discarded our sole defense against crap knowledge. It would be the start of a slow and horrible extinction.

Deutsch’s vision isn’t merely of a world where our knowledge ends disease, feeds billions, controls the climate, and finds a modus vivendi for world peace. No, these are paltry challenges. Why shouldn’t we also be extraterrestrial adapters? Why shouldn’t we terraform planets and moons, deflect asteroids and mine them, colonize the galaxy, and harness energy in ways that are as yet unfathomable? That very well could be our destiny. It’s where our alliance with knowledge points.

A penny for your thoughts

Millions of English speakers, I imagine, have made that offer at one time or another. Sometimes the person on the receiving end isn’t thinking at all. They’re just parked in neutral. They blink and say, “Oh, no … keep your penny; I’ve got nothing.” That seems plausible enough to me. I often find that my brain is uncommitted and humming idly. But I think you’ll agree, it would be quite a stretch to assert that many of us are brain-parked for most of our waking lives! That is exactly what’s claimed in some of my recent readings.

Just to be clear, this wasn’t an assertion that many spend their lives in a zombie-like trance. I don’t know anyone of that opinion, entertaining though it is. Rather, the assertion was that many of us have never developed an awareness of an inner monolog.

That idea had never crossed my mind. I’ve always assumed that everyone engaged in “self-talk.” It’s a reflex, I thought — like breathing. I was fascinated. I wondered how common it was to live with a quiet mind. And how do people with this affliction navigate life’s gauntlet without the power of reason? (Isn’t that what self-talk amounts to?) I wanted research — toot sweet.

One of the articles was a great letdown. After making the bold claim that self-talk was not universal, it backpedaled. It was as though the writer had, in mid-paragraph, imagined a mob of angry readers ready to lynch him. Psychologists were called forth to give contrary testimony. They explained that self-talk was indeed universal. It was just that many forms of it weren’t recognized as such.

For example, one psychologist explained that all memory was self-talk, as in going to the store and remembering what groceries to buy. So if Bert recalls Marge’s instructions to pick up a head of lettuce, a 6-pack of Bud, and several rolls of toilet paper, that’s somehow an inner monolog. Hardly. That’s simply an inert thought. But what if Bert had thought this: Marge wants even more toilet paper? When we have jumbo packs of it in the garage? My God, I married a hoarder. That’s genuine self-talk; it’s reflective. The tipoff is the interrogative pieces. A question puts the mind to work. Whenever you ask someone, “Why?” the mind snaps to attention; it notices a challenge has been flung.

The grandest form of self-talk is when it is externalized as a soliloquy. It would be amusing to hear Bert soliloquize about the state of his marriage, but that would amount to nothing alongside Hamlet’s contemplation of suicide or the bitter confession that begins Richard III. I wonder what people with quiet minds make of these performances. Perhaps they think Shakespeare has shown us madmen who speechify in solitude just for dramatic effect.

Another psychologist held that sharing one’s thoughts with others was self-talk. At most, that’s merely a recitation of prior self-talk, not a second instance of it. Again, it’s simply memory at work. But suppose you say something like, “Damn it, Jim, the Lakers lost again! They never win when I have tickets to a game.” That’s verbalizing something that might never have had a self-talk antecedent.

Visualizations, visceral sensations, and flares of emotion are also mistaken for self-talk. Our laws contradict the psychologists who think they qualify. If these behaviors lead to a homicide, it’s a case of second-degree murder, a “crime of passion.” First-degree murder is “premeditated,” the sole domain of self-talkers.

Happily, there are self-talkers of various stripes who live constructively. Psychotherapists, for example. I’ve personally witnessed their skill at getting clients to verbalize their thoughts. The onus of thinking about thinking becomes the client’s principal task. The success of the therapy is directly connected to breakthroughs in self-talk.

Actors are another group of self-talk experts. The best ones have a genius for probing the mechanics of people’s minds, even those who exist only in fiction. In doing so, an actor is said to “inhabit” another personality. I think this can only be done by first inhabiting a person’s self-talk, truly a miracle of empathy.

That brings us to writers. Their craft is essentially a focused application of self-talk. Only consider that self-editing is the better part of writing, and self-editing requires a perceptive and persistent inner critic, a self-talker. Have you ever wondered why school kids have so much difficulty writing a simple narrative or essay? They probably haven’t grown an inner critic and may never grow one. The main task of composition teachers is to elicit this voice. Strange though it sounds, the objective of a composition teacher and a psychotherapist is much the same.

I hypothesize that if self-talk appears, it does so at about the same time as the ability to reason, say, around the age of 7. If it doesn’t appear, why not? What would thwart its emergence? By 7, children have tried out dozens of behaviors, acceptable and unacceptable. They have blurted out dozens of thoughts, charming and rude. Adults react accordingly, and children begin to understand the value of censorship. They may make friends with censorship and come to regard the inner voice as risky or even dangerous. If so, the question flips — why wouldn’t every child choose censorship? That calls for more speculation. What if the dearest adults react to a child’s experimentation with reproaches that wound the tender ego? Perhaps the ego recoils and cries, “Hey, I’m innocent! I want an advocate, a mouthpiece, someone to plead my case,” and proceeds to generate an interior voice.

The decisive paths taken in developing brains can only be guessed at. We’ll have to wait for the distant day when research at last reveals the wellsprings of consciousness.

The immovable object and the irresistible force

Collisions are fascinating. Who knows what we might find in the fallout? At bottom, that may be why we build supercolliders.

That thought led me to wonder about collisions of a different kind — collisions between ideas. Would they be as fascinating to contemplate? Ideas have an uncanny way of leading to changes in the material world, so yes, I’d say that smash-ups of abstractions can be just as mesmerizing as a demolition derby. Take, for example, the abstraction we call culture. Though not a concrete thing, it’s as mighty and immovable as Everest. True, a culture can shift in the course of millennia — perhaps even in centuries — but from the perspective of a lifetime or several generations, no. It is a wall of granite.

Now, take as the irresistible force the abstraction we call technology. In fact, let’s make a refinement. Let’s consider only two areas of technology, artificial intelligence and robotics. These two go hand in hand, so I’ll give them a name that denotes their synthesis, AIR. I contend that when AIR smashes into culture, the shock is so violent it forces culture to begin decomposing. But culture is resilient. In time, it recomposes into an entirely different form. To put it simply, AIR has the power to transform culture.

To understand the dynamics of the collision, it’s necessary to look at the constituents of culture. There is art, music, literature, and spiritualism, of course, but these are secondary characteristics. AIR affects them only indirectly. Most affected are the primary characteristics — labor, ownership, and government — and the cultural values that determine how each of them expresses itself.

In every culture, labor is and always has been the dominant institution. Disrupt labor with, say, social unrest, disease, famine, financial collapse, or natural disaster, and culture is jolted. Money for food, shelter, utilities, and mobility dries up. Life coarsens. An economic vortex emerges, with the power to bring everything to ruin. This is why most cultures have a strong work ethic. Work is revered, and the willingness to embrace it is a great virtue.

The value of ownership is almost as fundamental as the value of labor. We all become owners at birth; our bodies and our minds are ours, as is a vague potential to increase our sphere of ownership. We may discover that we own innate talents and comeliness, perhaps to a great degree. We learn that pride and self-affirmation are likely to grow as our potential swells our sphere of ownership. What a heady elixir it is to own property, a large bank account, beautiful objects, and, dare I say it, other people! Luckily, owning people pretty much solves the question of how to keep the engine of labor purring. The trouble is, there is a popular notion that slavery is morally reprehensible. So owners must settle for quasi-slavery, paying as little as the market will bear.

The value of government lies in its ability to lubricate the friction caused by the owners’ profit motive and the laborers’ grievances. When a government uses the right amount of grease, it earns the goodwill of labor (votes) and the thanks of ownership (money) to perpetuate its power. The greatest danger for government is its tendency to ally itself corruptly with ownership. This is a certain recipe for dystopia. The value of governmental detachment is paramount.

Now enter AIR. Owners will catch its scent first. They will quickly grasp the glory of owning intelligent robots. No labor costs other than routine maintenance. No unions, no work breaks, no absenteeism, no health insurance, no workplace safety suits, no sexual harassment suits. No complaints, period. Production costs will plummet; profit margins will soar. The incentive to reduce labor costs is an intoxicant that no earthly power can resist.

Labor will wake up to the power of AIR as its workplace invasion becomes ever more intrusive. I expect the AIR takeover of labor to proceed in four stages, as shown in the following table. It isn’t meant to be exhaustive but just indicative of the levels of job replaceability.

Stage 1 is already underway. AIR prognosticators think the first two stages will be complete by 2030. By then, panic will have spread throughout the labor force, and talk of economic revolution will be common. The crisis will end only when people accept the necessity of an economy that’s managed in all its aspects. Goodbye capitalism.

The transitions in Stages 3 and 4 will be less straightforward. In many cases, the first AIR machines will be assistants, not replacements. With time and considerable revision, trust in the machines will grow. Eventually, they will take over or become full partners. Partnerships are most likely with astronauts, engineers, lawyers, nurses, and teachers in Stage 3, and with artists, chemists, composers, designers, politicians, surgeons, and writers in Stage 4.

Many vocations will transition into avocations. That’s what I foresee for musicians, pilots, artists, chefs, composers, and writers. The fate of musicians may be unique. They are trained to manipulate objects that produce a range of mechanical sounds; most contemporary music is confined to these objects and human manipulation. But what of music that’s induced through electrically vibrated media? The far greater potential of this music can be realized only if machines are the musicians. Today’s trained musicians will be among tomorrow’s hobbyists.

The future of politicians is problematic. Traditionally, they have not only welcomed recognition but sought vigorously to create it. The debates that precede primary elections are a great political sideshow. Each candidate strains to create the impression that they are special. I’d prefer it if all politicians were self-effacing civil servants, but how could that ever come to pass? Voters don’t vote for people they don’t notice. What a sublime day it would be if voters voted only for high-functioning automatons. The meaning of voting would change entirely because a vote for an automaton is a vote for its programming! The values encoded in that programming would be a matter of certified public record. It’s the only way to make elections about issues rather than personalities.

The future of sex and marriage will be much more disruptive than today’s rages about homosexual marriage. What will a Southern Baptist have to say about sex with a lifeless human form? Surely that would be the vilest conceivable form of masturbation. Yet it is inevitable. The only barrier is the quality of the illusion. Once a certain threshold is surpassed, androids will take their place as prostitutes and companions. Obviously, this will have a notable effect on the birthrate.

The greatest resistance to the AIR revolution will come from pride. How humiliating to be alive and have no role in securing food, shelter, or niceties for ourselves and our loved ones. Or is it? The aristocrats of history weren’t a bit troubled by the thought. And in most cases, they had no particular ability to amuse, entertain, create, or lead. That will not be our fate when our culture is transformed. The technologies of gene editing and cybernetics will offer us endowments to amplify our experience of life and our joy in it. Before long, Homo sapiens will be gone, superseded by Homo cybernus.

A theme for humankind, Part 2

This is the second part of an unusual timeline. It records the moments when humankind realizes that its notion of reality is shockingly mistaken. The first part of the timeline took us to the beginning of the 20th century. Now the pace of disillusionment quickens.

1905: Bern, Switzerland

Albert Einstein was working as a young examiner at the patent office, but his life’s preoccupation was theoretical physics. With a brilliant mind and an education in math and physics, he published speculative papers that would change the world forever.

The Special Theory of Relativity
He realized what the Michelson-Morley experiment meant: the ether, an invisible medium said to pervade all of space, did not exist and therefore could not affect the speed of light. Furthermore, light must travel through the vacuum of space at a constant speed that nothing in the universe could exceed!

He saw the consequences of putting an absolute limit on speed. For instance, if a moving train shines a light ahead, the speed of the light is not increased by the speed of the train. He did “thought experiments” like this one: imagine the train passing point A, and at that moment two clocks start, one on the train and one beside an observer on an embankment across from A. If we arbitrarily pick a point B farther down the tracks and check both clocks as the train passes it, we’ll see that less time will have elapsed on the train’s clock than on the observer’s clock! This is known as time dilation. The discrepancy gets larger as the speed of an object (like our train) increases. In our daily lives, the discrepancy is negligible, but to subatomic particles and bodies in space, the difference is considerable. For example, time passes differently on Mars than on Earth.

Einstein would say the observer and the train are in different reference frames. That is, they have different coordinates in spacetime, the 4-dimensional medium in which everything exists. The passage of time in one reference frame is relative to the passage of time in the other. The differentiating factor is speed, or velocity, as physicists call it. And there’s yet more to it. Velocity affects size as well. Specifically, if Star Ship 1 is moving faster than Star Ship 2, an observer in Star Ship 2 will see that Star Ship 1 has become somewhat shorter along its direction of motion! This is known as length contraction.
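
For readers who want the numbers, the standard formulas — a textbook addition of mine, not the post’s — cover both effects at once:

\[ \Delta t_{\text{moving}} = \Delta t_{\text{rest}} \sqrt{1 - v^2/c^2}, \qquad L_{\text{moving}} = L_{\text{rest}} \sqrt{1 - v^2/c^2} \]

At everyday speeds the factor √(1 − v²/c²) is indistinguishable from 1. At 87% of the speed of light it is about 0.5, so the moving clock records half the elapsed time and the moving ship appears half as long.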

These speculations, taken together, are Einstein’s Special Theory of Relativity. “Special” because objects are assumed to move in a straight line at a constant velocity, which is a special case of motion. He would have a great deal more to say about relativity later.

Energy and mass
It was well known that if you applied energy to a moving object, its velocity would increase. However, continuing to add energy became a less and less effective way to produce this effect. Einstein realized that as energy was added, some of it converted into mass; that is, the moving object became heavier! Energy became matter! He calculated that the conversion could be expressed as m = e/c², where m is the added mass, e is the added energy, and c² is the speed of light squared. (Commonly, the equation is written as e = mc², of course.)
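
A quick worked example (my arithmetic, not the post’s): run the famous equation forward with c ≈ 3 × 10⁸ m/s, and one kilogram of matter is equivalent to

\[ e = mc^2 = 1\,\text{kg} \times (3 \times 10^8\,\text{m/s})^2 = 9 \times 10^{16}\,\text{J} \]

about the energy of a 20-megaton explosion. Read as m = e/c², the equation also explains why the mass gain goes unnoticed in daily life: any everyday amount of added energy is divided by the enormous number c².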

1915: Berlin, Germany

Einstein had long been uneasy about Newton’s use of gravitational acceleration in calculating the orbits of the planets. Newton’s equations didn’t work for Mercury, the closest planet to the Sun. What’s more, Newton claimed that if the Sun vaporized, all the planets would immediately fly off into space. But “immediately” is impossible because the gravitational waves that brought news of this catastrophe would have to travel faster than light! Worst of all, Newton had no idea what caused the phenomenon of gravity. Why did it weaken as the mass of the respective bodies decreased or as the distance between them increased? He asked his readers to conjecture for themselves.

Einstein theorized that mass distorts spacetime, much as a bowling ball would bow the surface of a rubber sheet. The effect is what we call gravity. In a gravitational field, the spatial dimensions of spacetime warp and time dilates. To illustrate, suppose we visit the Empire State Building with clocks set to the same time. If I stay on the ground floor and you take the elevator to the top, where the Earth’s gravity is weaker, your clock will show more time on it than mine when you return.
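
To put a rough number on the skyscraper example — my estimate from the standard weak-field formula, not a figure in the post — clocks at height h in Earth’s gravity g run fast by a fraction of about

\[ \frac{\Delta t}{t} \approx \frac{gh}{c^2} = \frac{9.8 \times 381}{9 \times 10^{16}} \approx 4 \times 10^{-14} \]

for the Empire State Building’s 381 meters. That works out to roughly a microsecond gained per year at the top — tiny, but well within the reach of modern atomic clocks.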

Gravity also bends the path of light, even though light has no mass. It must be so because light travels through spacetime, and spacetime itself is bent. This was confirmed by the total solar eclipse of 1919, and Einstein became world famous.

1923: Mount Wilson Observatory, California

Edwin Hubble, an American astronomer, discovered that many objects thought to be nebulae, clouds of gas and dust, were in fact galaxies beyond the Milky Way. So our grandparents and great-grandparents were the first humans to experience the shock that the Milky Way was not the entire universe — not by a long shot.

Hubble went on to show that galaxies were not only moving farther away from Earth, but were doing so at velocities that increased in proportion to their distance. This phenomenon came to be called “Hubble’s Law.”
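
Written as a formula — the standard form, which the post doesn’t display — Hubble’s Law says the recession velocity v of a galaxy at distance d is

\[ v = H_0 \, d \]

where H₀, the Hubble constant, is currently measured at roughly 70 kilometers per second per megaparsec. A galaxy 100 megaparsecs away (about 326 million light-years) therefore recedes at about 7,000 km/s.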

A picture taken 67 years later by the Hubble Space Telescope shows a field in which every object is a galaxy! Today, astrophysicists estimate there are at least 2 trillion galaxies in the observable universe.

1925: Leuven, Belgium

Georges Lemaître was a Catholic priest, astronomer, and professor of physics rolled into one. In the magazine Nature, he proposed that the universe had been created by an exploding “Primeval Atom,” which he later referred to as the “Cosmic Egg.” This is what we now call the “Big Bang Theory.” The term was coined by another astronomer, Fred Hoyle, who used it derisively.

Lemaître was the first to calculate the universe’s rate of expansion, a number that unfairly came to be known as “Hubble’s Constant.”

1927: Copenhagen, Denmark

For nearly 30 years, a new kind of physics had been developing. Its focus was the properties of subatomic particles, which, it turned out, could take only discrete values — they were quantized. The physics of these quantized properties was called quantum mechanics or quantum physics.

Quantum physicists were active experimenters. They soon discovered oddities that were a surprising departure from what classical physics had led them to expect. The double-slit experiment is a famous example of their work. It’s explained in Part 1 and Part 2 of a helpful video.

From this and other experiments, Werner Heisenberg concluded that certain pairs of subatomic properties — for example, the position and momentum of an electron — could not both be known precisely at the same time; the more precisely one is measured, the less precisely the other can be known. This observation is known as the Heisenberg Uncertainty Principle. It tells me something unsettling: I can describe a chair or a lamp to a fare-thee-well, but when I try to describe their subatomic components, I can only make probabilistic guesses — even if I’m a Nobelist in physics. That turns the foundations of reality into fuzz.
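
The principle has a precise form — the standard textbook inequality, my addition here. The uncertainties in position and momentum must satisfy

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} \]

where ħ, the reduced Planck constant, is about 1.05 × 10⁻³⁴ joule-seconds. Because ħ is so small, the fuzziness is invisible for chairs and lamps and unavoidable for electrons.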

Note: For more about the oddities of quantum physics, see my post “The Final Frontier.”

1928: London, England

Paul Dirac was interested in combining quantum physics with special relativity. To that end, he published an equation that described the behavior of an electron traveling in a relativistic context. To his surprise, the equation had two solutions. One assigned a positive energy to the electron; the other assigned a negative energy to it!
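
For reference — the post doesn’t display it — the equation in question is the Dirac equation, written here in modern notation and natural units:

\[ (i\gamma^\mu \partial_\mu - m)\psi = 0 \]

where ψ is the electron’s wave function, m its mass, and the γ^μ are 4 × 4 matrices Dirac introduced to make the equation consistent with special relativity. The four components of ψ are exactly what produce the two energy solutions, each with two spin states.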

Dirac proposed a way both results could be true. For every kind of particle with positive energy, there must exist a corresponding particle with negative energy, and vice versa. These opposite counterparts of ordinary particles came to be known as antimatter. His conclusion was confirmed four years later when Carl David Anderson discovered the positron, the positive counterpart of the electron, in cosmic rays.

When matter and antimatter meet, they annihilate each other and produce gamma rays. If 1 kilogram of one should meet with 1 kilogram of the other, the gamma radiation released would melt most of Mount Everest. We needn’t worry because not long after the Big Bang, antimatter became rare. Nevertheless, physicists don’t rule out the possibility that antimatter might be more common in other galaxies.
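
The Everest claim is easy to sanity-check with e = mc² (my arithmetic, not the post’s). Annihilating 1 kilogram of matter with 1 kilogram of antimatter converts all 2 kilograms into energy:

\[ e = 2\,\text{kg} \times (3 \times 10^8\,\text{m/s})^2 = 1.8 \times 10^{17}\,\text{J} \]

That’s about 43 megatons of TNT — on the order of the largest nuclear device ever detonated.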

1938: Berlin, Germany

For many decades, physicists had known about nuclear decay, the tendency of unstable nuclei to spontaneously emit particles. In doing so, an atom of one element could become an atom of a lighter element. However, no one had thought that a nucleus might be split and thereby produce nuclei of much lighter elements. To do so would be a case of nuclear fission.

Otto Hahn repeatedly bombarded uranium atoms with neutrons and examined the results. By doing so, he created transuranium elements — elements with a greater atomic number than uranium. Then a colleague suggested that by modifying the experiment, he might be able to split uranium atoms. He took her advice and discovered, to his astonishment, that he could produce barium, cerium, and lanthanum from uranium. Modern-day alchemy!

The upshot was profoundly consequential in opposite ways: the building of horrific nuclear weapons and the building of reactors that produce unlimited cheap energy.

1948: Ithaca, New York

Richard Feynman and his colleagues, while working at Cornell, developed the theory of quantum electrodynamics. They argued that the notion of an electromagnetic field wasn’t needed to explain electromagnetic force. Instead, the force could be propagated by a virtual photon — a short-lived particle of light — being emitted by an electron, then absorbed by another electron, then emitted again, and so on ad infinitum. (Warning: Paranoids should read no further!)

The virtual photon is a messenger particle. It transfers momentum to any electron that absorbs it and so changes the electron’s direction. It belongs to a class of particles called gauge bosons. A fascinating question is, what else, besides momentum, do gauge bosons communicate from particle to particle? Do they function as a kind of subatomic hotline? The question seems pertinent given the mystery of quantum entanglement.

1957: Urbana-Champaign, Illinois

A team of physicists — John Bardeen, Leon Cooper, and John Schrieffer — investigated superconductivity: when an electrical conductor is cooled, its resistance to an electrical current decreases, and at a temperature somewhat above absolute zero, the resistance disappears. The conductor becomes a superconductor, which means no electrical energy is lost as light or heat. Superconductors also expel magnetic fields — the Meissner effect — which makes the phenomenon of quantum locking possible. Namely, if a superconductor is placed somewhat above or below a magnet, it will hover in space as if locked above or below the magnet. If a superconductor is started forward along a closed magnetic track, it will continually travel around the track without any loss of energy!

The BCS team — the initials of the physicists’ last names — theorized that superconductivity was caused by the way pairs of electrons interacted with the lattice of positively charged atoms in the frozen conductor.

A new age of super-efficient electricity awaits the discovery of materials that can become superconductors at warmer temperatures. Room-temperature superconductivity is now a kind of Holy Grail.

1965: London, England

After earning a Ph.D. in algebraic geometry from Cambridge, Roger Penrose began to turn his attention to astrophysics. He was drawn to a possibility that Einstein foresaw: bodies of enormous density that were formed by the collapse of massive stars. Their gravitation would be so powerful that not even light could escape, making the bodies invisible. Their presence could only be inferred from a whirlpool of visible matter spinning around them. They came to be known as black holes.

Penrose was soon joined in his studies by his friend Stephen Hawking, a theoretical physicist from Cambridge. Hawking theorized that at the center of a black hole was a singularity, an infinitely small space of infinite density, infinite gravity, and infinite spacetime curvature. At a singularity, the known laws of physics would cease to operate.

A black hole is bounded by its event horizon. Anything inside it is inevitably drawn in and never seen again. Anything outside it can escape, given a sufficient velocity.

At an event horizon, just as anywhere else in spacetime, pairs of virtual particles, matter and antimatter, pop into existence. They usually annihilate each other in short order, but not at an event horizon. The antimatter particle falls into the black hole and annihilates a minuscule part of the mass there; the other particle escapes as radiation. As eons pass, the antiparticles take a critical toll. The black hole, very much reduced in mass, explodes.

Note: Black holes are critical to the stability of galaxies. A giant one is likely to be at a galaxy’s center. When it deteriorates, what happens to the galaxy? I haven’t found a speculation about this question.

1979: Ithaca, New York

Alan Guth theorized that an instant after the Big Bang, the infant universe expanded at an extraordinary rate — space itself stretching apart far faster than light could cross it — for less than a trillionth of a second. This was sufficient time for its size to grow from infinitesimal to the size of a grapefruit. Theoretical physicists call this episode inflation. The universe has continued to expand, but at a considerably slower rate.
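
To attach rough textbook numbers (not the post’s): inflation is usually placed between about 10⁻³⁶ and 10⁻³² seconds after the Big Bang, during which distances grew by a factor of at least 10²⁶ —

\[ 10^{-9}\,\text{m} \times 10^{26} = 10^{17}\,\text{m} \approx 10\ \text{light-years} \]

the equivalent of a nanometer-sized speck stretching to about ten light-years in a cosmic eyeblink.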

A minority of physicists disbelieve Guth’s theory, but there are three strong reasons to think it’s true:

  • If we look at distant regions of the universe that lie in different directions, we see their properties are largely homogeneous. For example, their temperatures are practically the same. This could only be true if the regions were in a single homogeneous space immediately after the Big Bang and then accelerated apart so quickly that gravity had very little chance to affect them.
  • By combining astronomical observations with advanced geometry, we know the universe is practically flat — not flat like a discus, but flat in the geometric sense that parallel lines stay parallel. This would occur if the universe began that way and then expanded quickly. A slower expansion would have left the universe with a measurable curvature.
  • Particle physics predicts that the universe should contain magnetic particles with only one pole — magnetic monopoles. However, no such particle has ever been observed. Physicists who endorse the inflation theory answer that inflation diluted these particles so drastically that we should never expect to encounter one.

1980: Washington, D.C.

At the Carnegie Institution for Science, Vera Rubin was trying to solve the galaxy rotation problem. Astronomers had earlier discovered that spiral galaxies, given their mass distributions, were rotating much faster than gravity should allow. As one writer put it, they should be “flying apart like a smoothie in a lidless blender.” Rubin offered a plausible explanation for the stability of the galaxies: they must contain a great deal of extra mass that doesn’t emit or reflect light or produce detectable particles. The extra mass came to be called dark matter.

According to recent calculations, dark matter makes up almost 27% of all the matter and energy in the universe. Only 5% is visible matter. Imagine, what we see through our most powerful telescopes is only 1/20 of the pie! Until we know what dark matter is, we can regard our knowledge of the universe as minuscule.

1995: Los Angeles, California

Somewhat less than a century ago, physicists discovered that relativity and quantum physics didn’t harmonize — the latter could not account for Einstein’s notion of gravity. They set about to find an idea that would reconcile the two. The idea would triumphantly be called “The Theory of Everything.”

In the late 60s, physicists thought they were on to something. They said subatomic particles were actually one-dimensional, vibrating strings. Particles of the same kind were simply strings vibrating at the same frequency. A string could look like a line (open-ended) or a loop (closed-ended). Naturally, the physicists called these ideas string theory.

This was the first incarnation of the theory, and its problems were soon recognized. One was its requirement for a particle that had no mass and two units of spin (a property no physicist can coherently explain). Such a particle had never been identified. This difficulty was handled neatly in the second incarnation called superstring theory. Some physicists claimed boldly that the massless, two-spin particle was a graviton, a quantum of gravity! In a stroke, relativity and quantum physics were joined under one roof.

By 1995, there were five different mathematical models of superstring theory. How could anyone choose one over another? A way out was proposed by Edward Witten, a mathematical physicist. He pointed out many similarities in the theories and described a way to collapse them into a single theory. Three ideas had to be agreed to, however. One, spacetime had to consist of 11 dimensions because the math said so. (This was one dimension more than superstring theory allowed for.) Two, besides strings, there were multidimensional structures called membranes, or branes for short. Our universe was bounded by a 3-dimensional brane. Three, our universe was one of many universes (a multiverse), in which the laws of physics could differ from one universe to another. Witten called his proposal M-theory. Ever since, M-theory has dominated theoretical physics.

You ask, how can reality possibly be 11-dimensional? Where are the other 7 dimensions? The answer is bizarre. The other seven are so small they’re not directly detectable, and they’re folded together into a strange shape called a Calabi-Yau manifold! It’s been proven mathematically that such a figure could exist.

But what would these dimensions be for? Well, maybe one for time backwards and one for time forwards. Maybe another for time travel to other universes. Maybe another relates to wormholes, theoretical shortcuts for traveling within our universe and between universes.

Why don’t humans have the sensory equipment to be aware of the extra dimensions? Because extra senses were unnecessary. Darwin showed us that life develops whatever senses it needs to exploit a given ecological opportunity. Perhaps Earth provides no ecological niche that requires more senses. Here’s a video that’s probably more convincing than I am.

Note: I believe that dark matter, introduced earlier, is actually hiding in one of the M-theory dimensions. One day, this conjecture will earn me a Nobel Prize. If I’m honored posthumously, I hereby appoint my son, Joshua, to accept the prize on my behalf.

1998: California, Massachusetts, and Northern Chile

Three teams of astrophysicists coordinated an investigation of the expansion of the universe and its causes. They looked for a special type of supernova known to be a standard candle — an object whose intrinsic luminosity, the amount of energy it emits per unit of time, is known. By comparing that luminosity to the apparent brightness seen from Earth, they were able to compute how far away each supernova was. To measure how fast it was receding from us, they measured how much its light was shifted toward the red end of the spectrum. To their surprise, they found that the visible universe was not expanding at a fixed rate but at an accelerating rate!
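
The distance step rests on the inverse-square law — standard astronomy, my gloss rather than the teams’ published method. An object of known luminosity L appears with a flux (apparent brightness) F that falls off with distance d, so

\[ F = \frac{L}{4\pi d^2} \quad\Longrightarrow\quad d = \sqrt{\frac{L}{4\pi F}} \]

Knowing L for the supernova and measuring F gives the distance; the redshift, measured separately, gives the recession velocity. Plotting velocity against distance for many supernovae is what exposed the acceleration.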

But why was expansion accelerating? Where did the energy come from to cause this phenomenon? Unlike heat or light or any other kind of energy, it was undetectable. So it was given the name dark energy.

Eventually, quantum physicists identified the most plausible source of dark energy: the vacuum of space! It seems the vacuum of space isn’t really a vacuum. In any cubic centimeter of this space, you’ll find pairs of virtual particles, matter and antimatter, continually popping into existence and then annihilating each other. (You’ve met virtual particles before in the discussion of black holes.)

The here-and-gone character of virtual particles creates energy called vacuum energy. This is the mysterious dark energy! It doesn’t amount to much in one cubic centimeter, but spacetime is incredibly vast, so the accumulation of dark energy becomes a remarkably powerful force. Powerful enough to drive the expansion of the universe.

As the universe expands, the vacuum of space gets bigger. Consequently, dark energy increases, and naturally, the expansion keeps accelerating. Dark energy is currently calculated to be 68% of all the matter and energy in the universe. At 5%, the stars and everything star-made are trivial by comparison.
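
Putting the post’s numbers together — the standard cosmic energy budget from recent satellite measurements — the pie comes out whole:

\[ 68\%\ \text{dark energy} + 27\%\ \text{dark matter} + 5\%\ \text{ordinary matter} = 100\% \]

Everything we have ever seen, through every telescope, lives inside that last 5%.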

2012: near Geneva, Switzerland

Three years earlier, the Large Hadron Collider went into operation. It’s the most powerful particle collider in the world, and the largest machine in the world. Its first mission was to search for the Higgs boson, a hypothetical particle that Peter Higgs and his colleagues described in 1964. Their hypothesis depended on the existence of quantum fields.

Quantum field theory holds that every kind of particle has a corresponding field, a ubiquitous part of spacetime in which excitations of energy appear to us as the particle; for example, the electron field contains excitations we perceive as electrons. Higgs believed the Higgs field was home to Higgs bosons.

We know that some particles have mass (“stuff”) — electrons, for instance. Some particles have no mass — photons, for instance. How do the particles with mass acquire their mass? Higgs said it happens when the field for those particles interacts with the Higgs field and its bosons. Metaphorically, it was like God touching Adam’s finger. Thus the Higgs boson got the nickname “God particle.”

When the Large Hadron Collider had been fine-tuned and fully powered, its physicists found a particle that had all the characteristics a Higgs boson would have. Today, physicists are in accord that the Higgs boson has been discovered.

2015: Washington, Louisiana, and Pisa, Italy

Of the forces we know about — the strong nuclear force (it binds nuclei), the weak nuclear force (it causes particle decay in nuclei), the electromagnetic force, and gravity — gravity is by far the weakest. Try to pick up a paper clip with a cheap magnet. The magnet will easily overcome Earth’s gravitational field. No contest. But even so, gravity has the power to shape spacetime.

In his General Theory of Relativity, Einstein predicted the existence of gravitational waves that ripple through spacetime much as a backwash ripples through water in the wake of a boat. Gravitational waves, theoretically, would travel at the speed of light and exert a push-pull on any objects they passed. But because they are weak and invisible, they would be extremely hard to find.

A century after Einstein’s prediction, astrophysicists working at three different observatories set out to discover gravitational waves. Most observatories look for light waves that have traveled from afar. But gravitational-wave observatories are built to detect the push-pull of gravitational waves traveling from afar. In 2015, gravitational waves from the collision of two black holes 1.3 billion years ago finally reached Earth. The sensitive instruments at the observatories independently detected the push-pull exactly as predicted.

Scientists are hopeful that gravitational waves will become, as light waves have been, a rich source of information about the universe.

Adults living in the year 1900 had a vastly different notion of reality than we do today. Atoms were solid, like tiny muffins with embedded fruit. Time passed at the same rate everywhere. An object always had the same apparent size, regardless of its motion relative to an observer. There were no more than three dimensions, period. And gravity had no effect on them. People didn’t age more slowly in a valley than on a mountain top. We lived in a star collection called the Milky Way, which was also the entire universe. The universe began when God created it and has remained unchanged. Nothing about the subatomic world would be surprising; matter was matter. Subatomic particles didn’t have dangerous twins that could cause mutual annihilation; God would never permit His creation to be destroyed. One element could never be turned into another; that was a discredited medieval belief. Subatomic particles couldn’t communicate across distances; could a paperweight in London talk to a paperweight in Hong Kong? The frightening possibility of dark, powerful sinkholes in the heavens was undreamt of. Dropped objects always fell to the ground; they never hovered over it. Anything in nature could be detected by our senses alone or with the assistance of a device. There was only one Creation; imagining an infinite number of them would be heretical. God created the universe, meaning light and all the material things; only God could make “stuff” out of nothing. Gravity couldn’t travel billions of miles through space like a light beam; all the stuff in the universe would get smushed together.

That was a time of certainty. Today is different. We try to learn about fundamental bits of nature by examining them with light or with some other form of energy. This very act changes the properties of those bits. Our best current guess is that reality isn’t composed of bits at all, but of energy excitations in quantum fields.

One more thing. You’ve made it all the way through a long, difficult post. You deserve a reward. For your titillation, here’s my favorite science video. Enjoy!

A theme for humankind, Part 1

Here’s a challenge: Think about the literature you know and look for a quote you’d nominate as a theme for humankind. No rush, take your time.

The quote I find most apt for this assignment is Hamlet’s assertion at the end of Act I: There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.

Hamlet had just had a chilling conversation with his father’s ghost. The shocking cause of his father’s death had been revealed. The horrible cost paid by sinners had been glimpsed. Horatio, a scholar and Hamlet’s closest friend, knew only what an educated Dane of Shakespeare’s day would know of the afterlife. Hence Hamlet’s assertion.

To me, Horatio is everyman, at any point in human history. Pick any time and place, and the people there will claim they know the cosmos in two different ways, from experience and from intuition. Of course, intuitive knowledge is no knowledge at all. In every age, intuitive knowledge is no more than the dreams in the philosophies of its Horatios.

Every so often, a fever of rationalism arises. The truth of what we intuit is challenged. My favorite example of this is Plato’s Allegory of the Cave. In it, people (humankind) live in a cave, in shackles. They can face only the back wall of the cave. Light comes from a fire behind them and from the mouth of the cave, so shadows constantly play on the wall they face. They take the shadows to be reality. One day, one of them breaks free of his shackles. With difficulty, he stumbles toward the cave opening and into the intense light. He gradually deduces that the colorful 3-dimensional objects outside the cave are the causes of the 2-dimensional shadows within. He has discovered a greater reality! He goes back in to relate these marvels to his colleagues, and they scorn him as a madman.

To my knowledge, this story is the first nonreligious account of a reality beyond our senses. Plato believed that even 3-dimensional objects were merely representations of ideal “forms” of the objects. He thought he had reasoned his way toward ultimate truth. Of course, he was wrong, but what is worse, he misled Aristotle, his pupil, about the reliability of reason alone. So it was that a curse fell over Western civilization: the curse of Aristotelian knowledge and holy scripture.

For the next fever of rationalism, let’s jump forward to the Renaissance — mainly because European history is the history I know best. The Renaissance encompasses much of the Age of Exploration, a foreshadowing of the Scientific Revolution, and an efflorescence in the visual arts and literature. That brings us back to Shakespeare, an irresistible reference point.

Within Shakespeare’s lifetime:

  • The approximate shapes and sizes of the continents, save for Antarctica, had been mapped.
  • Galileo saw four of Jupiter’s moons with a new tool, the telescope, and glimpsed the strange bulges around Saturn that would later prove to be its rings. His observations supported Copernicus’ heliocentric model of the “universe.”

In the century of Shakespeare’s death:

  • Newton described the laws of motion and a force of attraction he called “gravity.” He explained that the tangential velocity of the planets prevents gravity from plunging them into the Sun. In essence, the planets fall around it, as does the Moon vis-à-vis Earth. (A back-of-the-envelope check follows this list.)
  • Leeuwenhoek observed and described microscopic protozoa and bacteria.
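
Newton’s “falling around” picture can be checked with a few lines of arithmetic. Here’s a minimal Python sketch of my own; the function name is mine, and the figures are standard textbook values rather than anything from Newton:

```python
import math

# Newton's insight in numbers: a body in a circular orbit "falls around"
# its primary at the speed where gravity exactly supplies the needed
# centripetal pull: v = sqrt(G * M / r).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def orbital_speed(central_mass_kg, radius_m):
    """Circular-orbit speed (m/s) around a body of the given mass."""
    return math.sqrt(G * central_mass_kg / radius_m)

# Earth around the Sun, and the Moon around Earth (illustrative values):
print(orbital_speed(1.989e30, 1.496e11))  # ~29,780 m/s -- Earth's orbit
print(orbital_speed(5.972e24, 3.844e8))   # ~1,018 m/s -- the Moon's orbit
```

The first figure matches Earth’s familiar orbital speed of about 30 kilometers per second: fast enough, as Newton saw, to keep falling around the Sun forever without falling in.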

The stage was set. We at last had a scientific discipline and the tools to pursue things profoundly large and profoundly small.

The first steps in the pursuit of the profoundly small began with two naturally occurring forces that had been known since ancient times: magnetism and electricity. What were they, really? In the case of magnetism, certain metals were drawn toward lodestone, or magnetite. Electricity behaved similarly. Lightning bolts were drawn from the heavens toward Earth, and sparks could jump from one surface to another when there was friction between them.

In 1820, Hans Christian Ørsted showed that an electric current created a magnetic field around the conductor. A decade later, Michael Faraday showed that moving a magnetic field inside a coil of wire created an electric current. Electricity and magnetism were two faces of the same force, electromagnetism. Once we understood how to use a magnet to induce an electric current, it was just a matter of time before the arrival of power plants and generated electricity.
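
Faraday’s discovery is usually stated as a simple law: the voltage induced in a coil is proportional to how quickly the magnetic flux through it changes, multiplied by the number of turns of wire. Here’s a hedged sketch; the numbers are made up for illustration, not drawn from any experiment:

```python
# Faraday's law of induction: the EMF induced in a coil equals the
# negative rate of change of magnetic flux through it, times the turns.

def induced_emf(turns, delta_flux_wb, delta_t_s):
    """EMF (volts) in a coil of `turns` loops when the flux through it
    changes by `delta_flux_wb` webers over `delta_t_s` seconds."""
    return -turns * delta_flux_wb / delta_t_s

# A magnet thrust into a 100-turn coil, changing the flux by 0.002 Wb
# in a tenth of a second, induces about 2 volts:
print(induced_emf(turns=100, delta_flux_wb=0.002, delta_t_s=0.1))  # -2.0 V
```

Move the magnet faster or add more turns and the voltage rises, which is exactly the lever that power plants pull.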

Questions remained, of course. If electricity is a flow of electrons (negatively charged particles), what sorts of objects held them to begin with? The answer of choice was atoms, the fundamental particles that Democritus had imagined two millennia before. It was the choice of J. J. Thomson, the English physicist who believed that electrons were like raisins in a plum pudding. The “pudding” part of the atom had to be positively charged to hold the electrons. Overall, atoms would have a neutral charge.

But all this seemed a bit fantastic. We could imagine a flow of electrons in a lightning bolt and in a wire, but what of atoms? They offered no visible evidence of themselves.

Enter Einstein, though not in the way you might imagine. He was familiar with the phenomenon called Brownian motion, the behavior of visible particles — say, pollen grains — immersed in a liquid. They moved constantly and randomly; therefore, something invisible must be buffeting them constantly and randomly. With some elegant math, Einstein predicted exactly how far, on average, such a particle should wander in a given time if invisible molecules were doing the buffeting. A few years later, Jean Perrin confirmed the prediction with a microscope and a stopwatch, pinning down the size and number of the colliding atoms (or molecules). Voila! — the existence of invisible atoms and molecules was proven.
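
Einstein’s prediction can be mimicked with a toy random walk: if the kicks are truly random, a particle’s mean squared displacement should grow in direct proportion to elapsed time. This little simulation is my own illustration, not Einstein’s calculation:

```python
import random

# Einstein showed that a randomly buffeted particle's mean squared
# displacement grows linearly with time: <x^2> = 2*D*t. A 1-D walk of
# unit steps has D = 1/2, so <x^2> should track t itself.

def mean_squared_displacement(steps, walkers=5000):
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))  # one invisible molecular kick
        total += x * x
    return total / walkers

for t in (10, 100, 1000):
    print(t, round(mean_squared_displacement(t)))  # ~10, ~100, ~1000
```

The proportionality constant is what let Perrin work backward from visible wandering to the invisible molecules causing it.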

So far, we’ve just pierced the 20th century, a good place to take a breather. The discoveries I’ve reviewed have long lost their power to shock. Today, only children are astonished. But in posts to come, we’ll look at the remaining century plus, and unless I miss my guess, many of you will be as stunned as Hamlet was when he encountered his father’s ghost.

Start with stars

The study of science should begin with the study of stars. They are the magical progenitors of everything we know, the first in a sequence that begins with physics and leads to chemistry, biology, evolution, and civilization. At the same time, they are a metaphor for deep mystery, for the spellbinding unknown.

In a star, the outward energy released by fusion in its core is pitted against the inward force of gravity. So long as the core is fusing hydrogen into helium, the star is said to be in the main sequence. What happens when all the hydrogen is fused depends on how massive the star is.

We didn’t understand fusion in stars—the cause of starshine—until the 1930’s. Only then did we resolve questions about the sources of light and heat that were first posed by the ancient Greeks. (They also speculated about star clusters, particularly the Milky Way. The Greek for “milky” is galaxias.)

The past hundred years have seen giant strides in our knowledge of stars. We’ve learned:

  • The more massive a star is, the more luminous (intrinsically bright) it is, and vice versa.
  • The distance of a star can be calculated by measuring its parallax as the Earth orbits, by the clock-like brightness pulsations of Cepheid variables, and by the standard peak brightness of certain supernovas. (A quick parallax calculation follows this list.)
  • There are galaxies other than ours—Hubble’s stellar discovery. We now estimate their number at 100 to 200 billion in the observable universe.
  • Stars have a life cycle. The more massive the star, the shorter its time in the main sequence.
  • Stars of small or average mass expand into red giants when they exit the main sequence. They envelop nearby planets, as our sun will in 5 billion years or so. Finally, they shrink into white dwarfs.
  • More massive stars may become red supergiants, then supernovas. The outer layers of supernovas give birth to new stars; the remains are almost entirely neutrons—namely, neutron stars.
  • The most massive stars implode and become black holes. Most galaxies have a supermassive black hole at their centers.
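
As promised, here’s the first of those distance yardsticks in action. The geometry of parallax reduces to one line: a star whose apparent position shifts by one arcsecond as Earth crosses its orbit sits one parsec away. A minimal sketch, using Proxima Centauri’s published parallax (the function name is mine):

```python
# Parallax in one line: d (parsecs) = 1 / p (arcseconds), where p is
# half the star's apparent annual shift against the background sky.
PARSEC_IN_LIGHT_YEARS = 3.2616

def distance_light_years(parallax_arcsec):
    return (1.0 / parallax_arcsec) * PARSEC_IN_LIGHT_YEARS

# Proxima Centauri, our nearest stellar neighbor, shows a parallax of
# about 0.769 arcseconds:
print(round(distance_light_years(0.769), 2))  # ~4.24 light-years
```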

When I think about this list—and it’s far from exhaustive—I imagine the night sky as one colossal crime scene investigation. What is visible, and what leaves invisible traces, is the ever-shifting evidence of an almost 14-billion-year-old “crime,” the Big Bang. (It would be thrilling to view a TV series titled “CSI: Stars.”) Yet despite our accumulation of knowledge, despite all the evidence and retrograde analysis, the most profound mysteries remain. For example, the inescapable gravity of a black hole must mean it contains no atoms. There must be nothing inside but a stew of subatomic particles. But if a cosmic “seed,” something unimaginably compact, produced the entire universe, what kind of matter could it have possibly contained? And why did it go bang? Black holes never go bang, even though their gravitational strength must be inestimably weaker than that seed’s. Clearly, we’re dealing with a form of matter/energy that has no counterpart in what we’ve detected in the universe, not even in our most advanced supercolliders.

Another example: Galaxies rotate at a much faster rate than should be produced by the collective mass of visible gas and stars. To account for the actual rotational rate, we’d have to multiply the collective visible mass by 10! Most scientists conclude the missing 90% is hidden in dark matter. You might ask, “If galaxies have all that extra mass, why are they flying apart from one another at an accelerating rate?” The answer, I’ve read, is dark energy. What do you think of the practice of labeling something “dark” to balance the books?
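
That factor of 10 isn’t hand-waving; it falls out of the same orbital formula Newton gave us. Orbital speed scales with the square root of the enclosed mass, so the mass needed to explain an observed speed scales with the speed squared. A sketch with illustrative round numbers of my own, not measured data:

```python
# For a circular orbit, v = sqrt(G*M/r), so the enclosed mass M scales
# as v^2. If outer stars orbit ~3x faster than the visible matter alone
# predicts, the galaxy must contain ~10x the visible mass.

def required_mass_ratio(v_observed, v_from_visible_mass):
    """How many times the visible mass is needed to produce v_observed."""
    return (v_observed / v_from_visible_mass) ** 2

# E.g., ~220 km/s observed where visible stars and gas predict ~70 km/s:
print(round(required_mass_ratio(220.0, 70.0), 1))  # ~9.9
```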

There is a great deal more in this crime scene to investigate, and we may never crack the case. But that’s another reason why science must start with stars: they engage our intellect and wonderment more than any other branch of scientific inquiry.

Change

I recently came across this provocative quote by Mary Shelley:

Nothing is so painful to the human mind as a great and sudden change.

The word “sudden” tells me that Ms. Shelley was probably thinking of an immediate blow, like divorce, financial ruin, or the death of a loved one. These are terribly painful, of course, but a change doesn’t have to be sudden to cause great anguish. One that plays out over decades, gradually disrupting behavioral norms and eroding social institutions, is painful in a more subtle way, in an unrelentingly stressful way.

A little more than two centuries ago, a novel kind of change—slow-acting and stressful—began to draw notice. Unlike the misfortunes that had befallen people for millennia, it wasn’t periodic. It didn’t come and go; it simply lingered and grew in strength. More and more, it became the fabric of our experience. I’m thinking of technology, the umbrella term that now encompasses aeronautics, nuclear energy, renewable energy, computing, computer networking, microelectronics, wireless communication, digital photography, space travel, biotechnology, genetic engineering, life extension, smart homes, robotics, and many other applied sciences that don’t come to mind just now.

To be clear, I’m not saying that technology is unwelcome or inherently harmful. For one thing, it may one day completely eliminate drudgery. It already extends lives and will extend them further. It ensures billions are fed who would otherwise starve. It lets us communicate around the world in seconds, with sound, video, and text. It daily widens our knowledge base. It keeps us entertained. It opens dazzling insights into our existence that surpass anything ever imagined by religious zealots.

But… technology is disruptive. It creates social upheaval; it puts our lives on a low boil. For example, enhancements in communication and transportation have made it feasible to globalize manufacturing and business services, at the cost of millions of jobs. Robotics will cost tens of millions more, and no monetary policy or fiscal stimulus will make the least difference.

Our population is overweighted toward seniors, with an ever-sparser population of workers supporting an ever-denser population of retirees. As longevity increases, the tension between the young and old will turn into outright belligerence.

The rich have far better access to medical care than those of modest means. Imagine the outrage when “miracle” therapies become the exclusive property of people with deep pockets.

People are dividing into hostile political factions as social media erode journalistic standards. Our understanding of what is fact will soon be anchored to nothing.

Every breakthrough in technology is weaponized. New WMDs are always on the horizon. Warfare has to be redefined with each passing decade. “Containment” is now an essential branch of foreign policy, and preemptive strikes look more and more attractive.

People respond to change in one of two ways. Some, whom I’ll call “reactors,” look to authority for guidance. In religion, they are fundamentalists: a holy book shows them the right path, now and forever. In American politics, they are strict constructionists: the Constitution, if interpreted narrowly, shows politicians and jurists the right path, now and forever. Never mind that the roots of Judeo-Christian scripture go back 3,000 years. Never mind that the vision of our founders was codified in the early years of the Industrial Revolution. Reactors are, well… reactionary. Their model of society is based on something in the past that they imagine to be forever durable and true. “America First” and “Make America Great Again” are slogans that speak to their hearts.

Other people, whom I’ll call “pragmatists,” respond to change by looking for ways to manage it. They are willing to bend in the direction the wind is blowing and reshape institutions to accommodate new realities. For example, with the workforce growing smaller and smaller, they might favor a guaranteed minimum income for everyone. It would be an affordable solution because the productivity of automated labor would be exceptionally high. Income would have no bearing on the access to leading-edge medical therapies because universal medical care would finally be recognized as a right. Fact-checking would no longer be a sometimes thing but become an actual profession; funding could come from the media being policed. Existential threats wouldn’t be answered by calls to patriotism but by a consensus among peoples that collective security is a greater, more rational imperative than love of country.

Which of these responses will predominate? I think neither. Rather, I see them as alternating waves, one waxing and then waning as the other becomes popular. Currently, the reactors are in charge, but in the foreseeable future, the pragmatists will have their time, and so on. Of course, this oscillation can’t go on indefinitely. In effect, it’s just running in place. Our problem solving will be absurdly inefficient. Sooner or later, a breaking point will come. Technological change will simply move beyond our ability to cope.

The final frontier

I was born in the middle of WWII, so most of my schooling was in the 50’s. That was the first full decade of the Cold War and a very scary time indeed. In the classroom, we occasionally had “drop” exercises. We’d drop to the floor and nestle under our desks, giving us, absurdly, the illusion of protection from a nuclear blast.

During that era, a sense of urgency arose about any technological gap with the Soviet Union. To no avail, we hoped to preserve the gap in nuclear weaponry. Then came the missile gap and, when Sputnik went up, the space race. We were deeply unnerved. Rather than allow the USSR to “bury us”—Khrushchev’s words—we spent billions in pursuit of supremacy. This mania was mockingly portrayed at the conclusion of Dr. Strangelove: the end of human life was at hand, and only deep tunnels could preserve a select few, the seeds of the human future. It became obvious to those in the War Room that digging must commence immediately, or there could well be a “mineshaft gap”!

By the end of the 60’s, we had landed on the moon, and two decades later, the Soviet Union began to crumble. Seemingly, there was no gap left to fear. A communications revolution ensued. All sorts of institutions—governmental, corporate, military—and homes around the world began interlinking via private computer networks and the Internet. From all walks of life, people hailed these connections as a sign of utopian change. A boom decade was underway.

Enter the hackers, warped but clever individuals motivated by malice. They happily disrupted computer systems and corrupted data files for the thrill of it. We answered their “viruses” with “antiviruses,” but they were always one step ahead. As computer programs and networks became larger and richer in function, new security holes permitted new viruses to gain entrance. Even so, I thought of hackers as pests we could keep at bay, so long as we, the users, took a few intelligent precautions. I was wrong.

Before long, it dawned on criminals and hostile governments that hacking for the sheer evil pleasure of it was a waste. There was real money to be made by finding credit card numbers in customer databases. There were personnel records in employee databases that could be mined for intelligence information. There were secrets in private email that could be read and leaked to manipulate public opinion. We’ve already seen instances of all three. Most recently, Russia, behind the facade of Wikileaks, has published the emails of Hillary Clinton’s associates in an effort to sway the outcome of our presidential election. Can it get any worse? Yes indeed. There’s the specter of a hostile government hacking into the computer networks at the heart of our communication, energy, and military infrastructures. There’s a word for such a catastrophe: cyberwar.

Our government has awakened to the reality that national security is computer security. To achieve it, an unprecedented leap in computing speed and data storage is needed. Why these two? To start with, such computer systems could contain biometric information on every consumer and government employee in the nation. They could retrieve such information in a flash and even pose questions to further confirm a user’s identity. Just as important, hostile governments would recognize the power of these systems and know their speed and algorithms could readily break through the security defenses of less powerful systems.

Several years ago, an idea was put forward on how we might make such a leap in computing power. The idea was based on quantum mechanics, a branch of physics that’s no more than a century old. It happens, much to the consternation of physicists, that the physics of objects we can see, whether they’re as big as a star or as small as a grain of sand, is different from the physics of objects we can’t see—subatomic objects, known as quanta.

When we study quanta, we find some confounding surprises. Here’s a layman’s summary of them:

  • Quanta sometimes act like particles—like electrons, protons, neutrons, quarks, and so on—and sometimes like waves. When we try to detect the location of a quantum (the singular form of the word), it always behaves like a particle. We never know precisely where the particle is. We only know that its location has a probability—say, 20%—of being at point A and another probability—80%—of being at point B.
  • When we measure a quantum to determine its state—say, its energy or position—we get, at that moment, a single result. But between measurements, the quantum is simultaneously in two states, each with a value expressed as a probability. The phenomenon of existing in two states at once is known as superposition. (A toy simulation follows this list.)
  • When two quanta interact in a certain way, their states become interdependent. When the state of one is measured, the state of the other is known immediately, no matter how far apart they are! One quantum can be in Pittsburgh, and the other can be in Beijing, or even on Jupiter. They seem to communicate instantaneously. This phenomenon is known as entanglement.
  • A quantum traveling from point A to point B will take every possible route at the same time! (Please don’t ask me what makes a route “possible.”)
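
Here’s the toy simulation promised above. It doesn’t capture real quantum physics, of course—it just tracks the probabilities on a classical machine—but it shows what superposition and entanglement mean operationally. All the names and numbers are mine, chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng()

# Superposition: a qubit whose squared amplitudes give a 20% chance of
# reading 0 and an 80% chance of reading 1. Every measurement yields a
# single definite result; only the statistics reveal the blend.
qubit = np.array([np.sqrt(0.2), np.sqrt(0.8)])
probs = np.abs(qubit) ** 2                         # the Born rule
samples = rng.choice([0, 1], size=10_000, p=probs)
print("fraction measured as 1:", samples.mean())   # ~0.80

# Entanglement: the Bell state (|00> + |11>)/sqrt(2). The two qubits'
# fates are linked -- a joint measurement only ever yields 00 or 11,
# so learning one qubit's value instantly tells you the other's.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # amplitudes: 00,01,10,11
print("joint outcome:", rng.choice(["00", "01", "10", "11"],
                                   p=np.abs(bell) ** 2))
```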

You may be thinking, OK, quantum mechanics is weird, but what possible application could it have in computing? Well, think of an irreducible unit of data in a traditional computer, a bit, short for “binary digit.” A bit can be in one of two states, 1 or 0, or if you like, on or off. In a theoretical quantum computer, a bit (renamed a qubit) can be a 1, a 0, or a weighted blend of both at the same time—an instance of superposition. The gain compounds quickly: a register of n qubits can hold a superposition of all 2^n possible bit patterns, so every added qubit doubles the information in play. Further, computer operations can act on all those patterns in parallel, enhancing speed enormously. (I’m guessing entanglement is essential to this achievement, but I’ve yet to read anything confirming it.)
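
To make that doubling concrete, here’s the bookkeeping a classical computer faces when it tries to track n qubits. A back-of-the-envelope illustration of my own, not anything from the research projects discussed below:

```python
# Describing n qubits classically takes 2**n amplitudes, one per bit
# pattern -- and the count doubles with every qubit added.
for n in (1, 2, 10, 50, 300):
    print(f"{n:>3} qubits -> {2**n:.3e} amplitudes to track")

# By 300 qubits (~2e90 amplitudes), the tally dwarfs the number of
# atoms in the observable universe -- the intuition behind the talk
# of "enormous parallelism."
```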

You’re right if you’re thinking a quantum computer would be difficult to build. One exists, but it contains only a small number of qubits; it does little more than model the idea of quantum computing. In Australia, some researchers based in Sydney have a $33 million grant to create a sizable quantum computer. In the U.S., Google is trying to do the same. It has a project underway at an undisclosed location in California and refuses to reveal much more than that. So far as I know, our government is funding no such project. If it is, the budget is hidden somewhere.

It’s imperative for our government to form the same kind of partnership with business as it did in putting a man on the moon. Our national security is at stake. And the side benefits will be far greater than those achieved by the Apollo program. A quantum computer would be able to do grammatically perfect and idiomatically correct translations simultaneously, in any number of languages. No more translators at the U.N. Weather forecasting would be nearly 100% accurate up to two or three weeks ahead, maybe more. No more wondering what path a hurricane will take. At the CIA, millions of pages of data would be analyzed in mere hours. Recommended courses of action and the probabilities of success would be available to our leaders. In medicine, incorrect diagnoses would be a thing of the past. The best plans for treatment would be at a doctor’s fingertips.

Escalation of tension with Russia (and perhaps China) is not an easy thing to advocate, especially by one who has lived through the worst years of the Cold War, but I believe we have no choice. The benefits far outweigh the risks. To those who would say that medical research or space exploration should be our foremost scientific priority, I say, neither is our final frontier—computing is.