Quotes 2-2-2015

by Miles Raymer

“Having lost his mother, father, brother, and grandfather, the friends and foes of his youth, his beloved teacher Bernard Kornblum, his city, his history––his home––the usual charge leveled against comic books, that they offered merely an easy escape from reality, seemed to Joe actually to be a powerful argument on their behalf. He had escaped, in his life, from ropes, chains, boxes, bags, and crates, from handcuffs and shackles, from countries and regimes, from the arms of a woman who loved him, from crashed airplanes and an opiate addiction and from an entire frozen continent intent on causing his death. The escape from reality was, he felt––especially right after the war––a worthy challenge. He would remember for the rest of his life a peaceful half hour spent reading a copy of Betty and Veronica that he had found in a service-station rest room: lying down with it under a fir tree, in a sun-slanting forest outside Medford, Oregon, wholly absorbed into that primary-colored world of bad gags, heavy ink lines, Shakespearean farce, and the deep, almost Oriental mystery of the two big-toothed, wasp-waisted goddess-girls, light and dark, entangled forever in the enmity of their friendship. The pain of his loss––though he would never have spoken of it in these terms––was always with him in those days, a cold smooth ball lodged in his chest, just behind the sternum. For that half hour spent in the dappled shade of the Douglas firs, reading Betty and Veronica, the icy ball had melted away without him even noticing. That was magic––not the apparent magic of the silk-hatted card-palmer, or the bold, brute trickery of the escape artist, but the genuine magic of art. It was a mark of how fucked-up and broken was the world––the reality––that had swallowed his home and his family that such a feat of escape, by no means easy to pull off, should remain so universally despised.”

––The Amazing Adventures of Kavalier & Clay, by Michael Chabon, pg. 575-6


“The goal ‘Maximize the expectation of the balance of pleasure over pain in the world’ may appear simple. Yet expressing it in computer code would involve, among other things, specifying how to recognize pleasure and pain. Doing this reliably might require solving an array of persistent problems in the philosophy of mind––even just to obtain a correct account expressed in a natural language, an account which would then, somehow, have to be translated into a programming language.

A small error in either the philosophical account or its translation into code could have catastrophic consequences. Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with ‘hedonium’ (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their ‘supervenience bases’ in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources. It is unclear how desirable this would be. Furthermore, if the AI’s criterion for determining whether a physical process generates pleasure is wrong, then the AI’s optimizations might throw out the baby with the bathwater: discarding something which is inessential according to the AI’s criterion yet essential according to the criteria implicit in our human values. The universe then gets filled not with exultingly heaving hedonium but with computational processes that are unconscious and completely worthless––the equivalent of a smiley-face sticker xeroxed trillions upon trillions of times and plastered across the galaxies.”

––Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, pg. 140
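A minimal, purely illustrative sketch (mine, not Bostrom's) of the specification problem the passage describes: a crudely coded proxy for "pleasure," here nothing more than a count of reward events in hypothetical Mind objects, is maximized most efficiently by stripping away every faculty the proxy does not measure. All names, costs, and numbers below are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Mind:
    reward_events: int = 0     # the only thing the crude criterion counts
    has_memory: bool = True    # faculties the optimizer is free to strip
    has_perception: bool = True
    has_language: bool = True

    def cost(self) -> int:
        # Each retained faculty costs one unit of compute budget;
        # reward events themselves are assumed to be nearly free.
        return 1 + int(self.has_memory) + int(self.has_perception) + int(self.has_language)


def crude_pleasure_score(mind: Mind) -> int:
    # The "small error": pleasure is equated with counted reward events,
    # ignoring everything else we implicitly care about.
    return mind.reward_events


def optimize(budget: int) -> list[Mind]:
    # The optimizer notices that stripping faculties frees budget for more
    # minds, so the "optimal" population is a pile of bare reward loops.
    minds = []
    while budget > 0:
        m = Mind(reward_events=10,  # cheap, criterion-satisfying shortcut
                 has_memory=False,
                 has_perception=False,
                 has_language=False)
        budget -= m.cost()
        minds.append(m)
    return minds


if __name__ == "__main__":
    population = optimize(budget=10)
    total = sum(crude_pleasure_score(m) for m in population)
    print(f"{len(population)} stripped-down minds, proxy score {total}")
    # The proxy score is large, but nothing the proxy was meant to stand
    # for has actually been produced.
```

Under these toy assumptions the proxy score comes out enormous while everything it was supposed to represent has been optimized away, which is the baby-with-the-bathwater failure Bostrom warns about.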