Light (aka electromagnetic radiation) is responsible for most of what we know about the universe. By measuring photons of various frequencies in different ways -- "the careful collection of ancient light" -- we've painted a picture of our endless living space. But light isn't perfect. It can bend, scatter, and be blocked. Changes in gravity are more difficult to detect, but new instruments may allow scientists to construct a different map of the universe and its beginnings.
LIGO works by shooting laser beams down two perpendicular arms and measuring the difference in length between them, a strategy known as laser interferometry. If a sufficiently large gravitational wave comes by, it will change the relative length of the arms, pushing and pulling them back and forth. In essence, LIGO is a celestial earpiece, a giant microphone that listens for the faint symphony of the hidden cosmos.
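To get a feel for how faint that symphony is, here's a toy back-of-the-envelope calculation. The arm length and strain figures are assumptions drawn from LIGO's published specifications, not from the text above; a passing wave of strain h changes an arm of length L by roughly h × L.

```python
# Toy arithmetic: how much a gravitational wave stretches a LIGO arm.
# ARM_LENGTH_M and STRAIN are assumed values (LIGO's published figures).

ARM_LENGTH_M = 4_000.0   # each LIGO arm is about 4 km long
STRAIN = 1e-21           # typical strain h of a detectable wave

# A wave of strain h changes an arm of length L by roughly h * L.
delta_L = STRAIN * ARM_LENGTH_M
print(f"Arm length change: {delta_L:.1e} m")

# For scale, compare against the radius of a proton (~8.4e-16 m).
PROTON_RADIUS_M = 8.4e-16
print(f"Fraction of a proton radius: {delta_L / PROTON_RADIUS_M:.3f}")
```

The change comes out to a few thousandths of a proton's radius, which is why the interferometer has to be so exquisitely sensitive.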
Like many exotic physical phenomena, gravitational waves originated as theoretical concepts, the products of equations, not sensory experience. Albert Einstein was the first to realize that his general theory of relativity predicted the existence of gravitational waves. He understood that some objects are so massive and so fast-moving that they wrench the fabric of spacetime itself, sending tiny swells across it.
How tiny? So tiny that Einstein thought they would never be observed. But in 1974 two astronomers, Russell Hulse and Joseph Taylor, inferred their existence with an ingenious experiment, a close study of an astronomical object called a binary pulsar [see "Gravitational Waves from an Orbiting Pulsar," by J. M. Weisberg et al.; Scientific American, October 1981]. Pulsars are the spinning, flashing cores of long-exploded stars. They spin and flash with astonishing regularity, a quality that endears them to astronomers, who use them as cosmic clocks. In a binary pulsar system, a pulsar and another object (in this case, an ultradense neutron star) orbit each other. Hulse and Taylor realized that if Einstein had relativity right, the spiraling pair would produce gravitational waves that would drain orbital energy from the system, tightening the orbit and speeding it up. The two astronomers plotted out the pulsar's probable path and then watched it for years to see if the tightening orbit showed up in the data. The tightening not only showed up, it matched Hulse and Taylor's predictions perfectly, falling so cleanly on the graph and vindicating Einstein so utterly that in 1993 the two were awarded the Nobel Prize in Physics.
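The prediction Hulse and Taylor tested can be sketched numerically. The standard general-relativistic (Peters–Mathews) formula gives the rate at which gravitational-wave emission shrinks a binary's orbital period; the masses, period, and eccentricity below are the published values for the Hulse–Taylor pulsar, PSR B1913+16, supplied here as assumptions rather than figures from the text.

```python
# Sketch: the orbital period decay predicted by general relativity for a
# binary system, via the Peters-Mathews formula. Parameters are published
# values for PSR B1913+16 (assumed, not from the article).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

m1 = 1.4398 * M_SUN  # pulsar mass
m2 = 1.3886 * M_SUN  # companion neutron star mass
Pb = 27906.98        # orbital period, seconds (~7.75 hours)
e = 0.6171           # orbital eccentricity

# Eccentric orbits radiate gravitational waves more strongly; this
# enhancement factor captures that.
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

# Rate of change of the orbital period (dimensionless, seconds per second).
dPb_dt = (-192 * math.pi / 5
          * G ** (5 / 3) / c**5
          * (Pb / (2 * math.pi)) ** (-5 / 3)
          * f_e
          * m1 * m2 / (m1 + m2) ** (1 / 3))

print(f"Predicted dPb/dt = {dPb_dt:.2e} s/s")  # about -2.4e-12 s/s
```

A decay of roughly −2.4 × 10⁻¹² seconds per second sounds negligible, but over years of precise pulsar timing it accumulates into exactly the tightening orbit Hulse and Taylor observed.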
Ross Andersen, whose interview with Nick Bostrom I linked to last week, has a marvelous new essay in Aeon about Bostrom and some of his colleagues and their views on the potential extinction of humanity. This bit of the essay is the most harrowing thing I've read in months:
No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.
'Let's say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,' Dewey told me. 'And let's say the Oracle AI has some goal it wants to achieve. Say you've designed it as a reinforcement learner, and you've put a button on the side of it, and when it gets an engineering problem right, you press the button and that's its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn't think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.'
'One day we might ask it how to cure a rare disease that we haven't beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it's actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it's going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage -- and then it would take that advantage and start doing what it wants to in the world.'
Read the whole thing, even if you have to watch goats yelling like people afterwards, just to cheer yourself back up.
Nick Bostrom, a Swedish-born philosophy professor at Oxford, thinks that we're underestimating the risk of human extinction. The Atlantic's Ross Andersen interviewed Bostrom about his stance.
I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.
Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.
While reading this, I got to thinking that maybe the reason we haven't observed any evidence of sentient extraterrestrial life is that at some point in the technology development timeline just past the "pumping out signals into space" point (where humans are now), a discovery is made that results in the destruction of a species. Something like a nanotech virus that's too fast and lethal to stop. And the same thing happens every single time it's discovered because it's too easy to discover and too powerful to stop.