Quotes 10-31-2014

by Miles Raymer

“‘If you don’t feel temptation, you’re not making moral choices.’”

––Hieroglyph: Stories & Visions for a Better Future, “Periapsis,” by James L. Cambias, pg. 294


“As an example of unintended consequences, Oxford University ethicist Nick Bostrom suggests the hypothetical ‘paper clip maximizer.’ In Bostrom’s scenario, a thoughtlessly programmed superintelligence whose programmed goal is to manufacture paper clips does exactly as it is told without regard to human values. It all goes wrong because it sets about ‘transforming first all of earth and then increasing portions of space into paper clip manufacturing facilities.’ Friendly AI would make only as many paper clips as was compatible with human values.
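The failure mode Barrat describes can be pictured as an optimization problem with a missing constraint. Below is a minimal editorial sketch, not from the book: the same paper clip objective, maximized with and without a term for human values. All names and numbers are hypothetical.

```python
# Toy model: choose how much of the world to convert to paper clip factories.
# Hypothetical numbers; the point is only the shape of the objective.

def paperclips(fraction_of_world_converted):
    return 1_000_000 * fraction_of_world_converted

def naive_maximizer():
    # No term for human values: the optimum is to convert everything.
    return max((f / 100 for f in range(101)), key=paperclips)

def friendly_maximizer(human_values_limit=0.01):
    # Same objective, but constrained by what humans can live with.
    feasible = (f / 100 for f in range(101) if f / 100 <= human_values_limit)
    return max(feasible, key=paperclips)

print(naive_maximizer())     # 1.0: all of Earth becomes factories
print(friendly_maximizer())  # 0.01: only as many clips as values allow
```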

Another tenet of Friendly AI is to avoid dogmatic values. What we consider to be good changes with time, and any AI involved with human well-being will need to stay up to speed. If in its utility function an AI sought to preserve the preferences of most Europeans in 1700 and never upgraded them, in the twenty-first century it might link our happiness and welfare to archaic values like racial inequality and slaveholding, gender inequality, shoes with buckles, and worse. We don’t want to lock specific values into Friendly AI. We want a moving scale that evolves with us.
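Barrat’s point about dogmatic values can also be put in toy form. The following is an editorial illustration, not from the book; the value names and weights are hypothetical stand-ins for whatever a real utility function would contain.

```python
# Toy contrast: a value-locked utility function vs. a moving scale.
# All names and weights are hypothetical; real value learning is unsolved.

FROZEN_VALUES_1700 = {"gender_equality": 0.0, "abolition": 0.0}

def current_values():
    """Stand-in for whatever process keeps an AI's values up to date."""
    return {"gender_equality": 1.0, "abolition": 1.0}

def utility(outcome, values):
    """Score an outcome against a set of weighted values."""
    return sum(w * outcome.get(name, 0.0) for name, w in values.items())

modern_outcome = {"gender_equality": 1.0, "abolition": 1.0}

print(utility(modern_outcome, FROZEN_VALUES_1700))  # 0.0: locked-in 1700 values score it at zero
print(utility(modern_outcome, current_values()))    # 2.0: a moving scale tracks what we value now
```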

Yudkowsky has devised a name for the ability to ‘evolve’ norms––Coherent Extrapolated Volition. An AI with CEV could anticipate what we would want. And not only what we would want, but what we would want if we ‘knew more, thought faster, and were more the people we wished we were.’

CEV would be an oracular feature of Friendly AI. It would have to derive from us our values as if we were better versions of ourselves, and be democratic about it so that humankind is not tyrannized by the norms of a few.
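As a rough editorial gloss on the ‘democratic’ requirement, one hypothetical way to keep a few people’s norms from dominating is to aggregate many individuals’ extrapolated preferences; the median below is just one possible aggregation rule, not Yudkowsky’s actual proposal.

```python
import statistics

def extrapolate(preference):
    """Placeholder for 'what this person would want if they knew more,
    thought faster, and were more the person they wished they were'."""
    return preference  # the genuinely hard part, left unspecified here

def coherent_extrapolated_volition(population):
    # Aggregate across everyone so no small group's norms can dominate.
    return statistics.median(extrapolate(p) for p in population)

# Toy preferences on a one-dimensional value scale (hypothetical).
print(coherent_extrapolated_volition([0.2, 0.9, 0.4, 0.5, 0.8]))  # 0.5
```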

Does this sound a little starry-eyed? Well, there are good reasons for that. First, I’m giving you a highly summarized account of Friendly AI and CEV, concepts you can read volumes about online. And second, the whole topic of Friendly AI is incomplete and optimistic. It’s unclear whether or not Friendly AI can be expressed in a formal, mathematical sense, and so there may be no way to build it or to integrate it into promising AI architectures. But if we could, what would the future look like?”

––Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, pg. 56-7