Quotes 2-16-2015

by Miles Raymer

“‘Basically, kid, what this all means is that life is a lot tougher than it used to be, in the Good Old Days, back before you were born. Things used to be awesome, but now they’re kinda terrifying. To be honest, the future doesn’t look too bright. You were born at a pretty crappy time in history. And it looks like things are only gonna get worse from here on out. Human civilization is in “decline.” Some people even say it’s “collapsing.”

‘You’re probably wondering what’s going to happen to you. That’s easy. The same thing is going to happen to you that has happened to every other human being who has ever lived. You’re going to die. We all die. That’s just how it is.

‘What happens when you die? Well, we’re not completely sure. But the evidence seems to suggest that nothing happens. You’re just dead, your brain stops working, and then you’re not around to ask annoying questions anymore. Those stories you heard? About going to a wonderful place called “heaven” where there is no more pain or death and you live forever in a state of perpetual happiness? Also total bullshit. Just like all that God stuff. There’s no evidence of a heaven and there never was. We made that up too. Wishful thinking. So now you have to live the rest of your life knowing you’re going to die someday and disappear forever.

‘Sorry.’”

––Ready Player One, by Ernest Cline, pg. 17-8


“Whatever else they may be, freely willed actions are the actions of agents––creatures who are answerable for what they do. But––the worry runs––the picture of human agency that cognitive neuroscience paints for us can find no honest work for the notion of the agent to do. Cognitive science seems to replace the agent with swirls of neuronal activity that migrate from one location to another. However, if we have lost the agent, then we have also lost free will, for without agents the notion of free agency is nonsense.

I regard this line of thought as representing the most profound version of the decoding challenge to free will. However, to say that it is profound is not to say that it is successful. The objection goes wrong, I think, by assuming an overly ‘reified’ conception of the agent. If the kind of agent for which we are looking is a homunculus––an entity that can function as the ultimate point of origin for autonomous actions––then we would have every reason to doubt whether free will can be retained. But free will doesn’t require that kind of agent. Instead, reference to agents (or ‘selves’) should be understood as a convenient way of capturing the fact that certain actions are grounded in and expressive of a human being’s stable and reflectively endorsed attitudes. We evaluate the force of various considerations, we deliberate, and we decide to commit ourselves to certain courses of action. In thinking about freedom and autonomy we must find room for the agent as an active presence in the formation and execution of intentions, but this ‘active presence’ should not be regarded as a node in the causal chain that might intervene between perception and action, as it were. Rather, the agent is to be found in the functioning of entire networks of intentional states. The kind of freedom worth wanting is not the freedom of a causa sui––a creature whose intentions emerge from nowhere––but is rather the freedom of a creature with the capacity to effectively implement its reflectively endorsed intentions in a dynamically changing world.”

–– “Neural Decoding and Human Freedom,” by Tim Bayne, Moral Psychology, Vol. 4, ed. Walter Sinnott-Armstrong, pg. 180-1