*Brief Answers to the Big Questions*, which can truly be called Stephen Hawking's last book, was planned during his lifetime and published after his death. It was compiled and edited from the vast body of lectures, interviews, and essays he left behind. Hawking enjoyed answering big questions even when they lay outside his own specialty. The book lists the following ten big questions and presents Hawking's thoughts on each.
1. Is there a God?
2. How did it all begin?
3. Is there other intelligent life in the universe?
4. Can we predict the future?
5. What is inside a black hole?
6. Is time travel possible?
7. Will we survive on Earth?
8. Should we colonise space?
9. Will artificial intelligence outsmart us?
10. How do we shape the future?
His answers are sometimes yes, sometimes no, sometimes both, and sometimes neither. Summarized as simply as I understand them:
1. Is there a God? No.
2. How did it all begin? From a singularity (the Big Bang).
3. Is there other intelligent life in the universe? We don't know.
4. Can we predict the future? Even if we could in principle, in practice it is impossible.
5. What is inside a black hole? A great deal.
6. Is time travel possible? Not at present. Probably not in the future either (or perhaps it will be).
7. Will we survive on Earth? Probably. In any case, we must.
8. Should we colonise space? For the future of humanity, absolutely.
9. Will artificial intelligence outsmart us? Certainly. The future of humanity depends on whether we can control it.
10. How do we shape the future? Through science education.
A summary like this is the proverbial blind men describing an elephant. (Could one say the same of science's description of nature? A passing thought.) If you want to know more of Hawking's thinking, you have to read the book. [If you want to know nature better, don't settle for theory; go and experience nature itself (experiment?).]
Hawking became even more famous for the indomitable will with which he overcame his physical difficulties. (To the general public he is probably as well known as Einstein.) He was an optimist. Had he not been, it would have been hard to endure. His writing overflows with this optimism. He is buried in Westminster Abbey, between his two scientific heroes, Isaac Newton and Charles Darwin.
If you accept, as I do, that the laws of nature are fixed, then it doesn’t take long to ask: what role is there for God? This is a big part of the contradiction between science and religion, and although my views have made headlines, it is actually an ancient conflict. One could define God as the embodiment of the laws of nature. However, this is not what most people would think of as God. They mean a human-like being, with whom one can have a personal relationship. When you look at the vast size of the universe, and how insignificant and accidental human life is in it, that seems most implausible.
I use the word “God” in an impersonal sense, like Einstein did, for the laws of nature, so knowing the mind of God is knowing the laws of nature. My prediction is that we will know the mind of God by the end of this century. (p.28)
... in 1915 Einstein introduced his revolutionary general theory of relativity. In this, space and time were no longer absolute, no longer a fixed background to events. Instead, they were dynamical quantities that were shaped by the matter and energy in the universe. They were defined only within the universe, so it made no sense to talk of a time before the universe began. It would be like asking for a point south of the South Pole. It is not defined. (p. 44)
What are the prospects that we will discover this complete theory in the next millennium? I would say they were very good, but then I’m an optimist. In 1980 I said I thought there was a 50-50 chance that we would discover a complete unified theory in the next twenty years. We have made some remarkable progress in the period since then, but the final theory seems about the same distance away. Will the Holy Grail of physics be always just beyond our reach? I think not. (pp. 155-156)
The Star Trek vision of the future in which we achieve an advanced but essentially static level may come true in respect of our knowledge of the basic laws that govern the universe. But I don’t think we will ever reach a steady state in the uses we make of these laws. The ultimate theory will place no limit on the complexity of systems that we can produce, and it is in this complexity that I think the most important developments of the next millennium will be. (p. 157)
At some point during our 13.8 billion years of cosmic history, something beautiful happened. This information processing got so intelligent that life forms became conscious. Our universe has now awoken, becoming aware of itself. I regard it a triumph that we, who are ourselves mere stardust, have come to such a detailed understanding of the universe in which we live. (p. 183)
If computers continue to obey Moore’s Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to overtake humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever. (p. 184)
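A quick back-of-the-envelope check of the Moore's Law figure quoted above (my note, not Hawking's): doubling every eighteen months over a hundred years amounts to roughly 67 doublings, a growth factor on the order of 10^20.

```python
import math

# Moore's Law as quoted: speed and memory double every eighteen months.
years = 100
doubling_period = 1.5  # eighteen months, in years

doublings = years / doubling_period      # number of doublings in a century
growth = 2 ** doublings                  # cumulative growth factor

print(f"{doublings:.1f} doublings, factor ~10^{math.log10(growth):.0f}")
# → 66.7 doublings, factor ~10^20
```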
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. We should plan ahead. If a superior alien civilisation sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here, we’ll leave the lights on”? Probably not, but this is more or less what has happened with AI. Little serious research has been devoted to these issues outside a few small non-profit institutes. (p. 188)
The Earth is becoming too small for us. Our physical resources are being drained at an alarming rate... Our population, too, is increasing at an alarming rate. Faced with these figures, it is clear this near-exponential population growth cannot continue into the next millennium.
Another reason to consider colonising another planet is the possibility of nuclear war. There is a theory that says the reason we have not been contacted by extraterrestrials is that when a civilisation reaches our stage of development it becomes unstable and destroys itself. We now have the technological power to destroy every living creature on Earth. As we have seen in recent events in North Korea, this is a sobering and worrying thought. (pp. 204-205)