
Hannah Tobias 5-14-13

Final Paper – CogSci100 Livingston

Throughout your life, you will make millions upon millions of choices –

what to wear, what type of omelet to order, whether or not to move to Los

Angeles, who to spend your life with and what to spend your life doing, etc., etc.

Each choice you make is defined by an entirely unique set of circumstances,

events, and mental states, yet each of those millions of decisions, trivial or

pivotal, can be shunted into one of four discrete categories of choice: slow and

high-impact, slow and low-impact, fast and low-impact, or fast and high-impact.

Each of these categories houses a somewhat unique arsenal of cognitive,

decision-making weaponry. Believe it or not, the choice to rescue a dog from the

middle of the road engages a slightly different set of mental functions than does

the decision to attend Vassar College, and those functions are different still from

what’s going on in your head when you order a hamburger – though at the most

basic level, all choices are in fact produced from the same basic prediction-

making, goal-modeling machinery of the brain.

As Hawkins states in his critically acclaimed On Intelligence, “[Prediction]

is the primary function of the neocortex, and the foundation of intelligence” (2004,

p. 89). The brain is an algorithmically savvy, computationally intricate, parallel

processing predictor. It is hardwired to receive and record inputs as computable

signals, encoding and grounding the real world into trillions of network patterns.

These patterns become recognizable with repetition and are linked to other

patterns that typically precede or follow them, which ultimately results in a mind-

blowingly complex matrix made of billions of cells able to communally predict

future events by comparing current input patterns to those it has already ‘learned’

– this matrix is our brain. We can make predictions – so what?
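
Before turning to that question, the machinery itself can be caricatured in a few lines of code. The sketch below is my own minimal illustration (nothing like Hawkins’ actual hierarchical model, and every name in it is made up): it simply counts which pattern tends to follow which, predicts the most frequent successor, and registers ‘surprise’ when a bottom-up input fails to match the top-down prediction.

```python
from collections import defaultdict, Counter

class ToyPredictor:
    """Learns which pattern tends to follow which and predicts the next one.
    A drastic simplification, for illustration only."""

    def __init__(self):
        # transition counts: pattern -> Counter of the patterns that followed it
        self.transitions = defaultdict(Counter)
        self.previous = None

    def predict(self, pattern):
        """Most frequently observed successor of this pattern, if any."""
        followers = self.transitions[pattern]
        return followers.most_common(1)[0][0] if followers else None

    def observe(self, pattern):
        """Feed one 'sensory input'; return (prediction for the next input, surprised?)."""
        surprised = False
        if self.previous is not None:
            expected = self.predict(self.previous)
            # a mismatch between top-down prediction and bottom-up input is 'surprise'
            surprised = expected is not None and expected != pattern
            self.transitions[self.previous][pattern] += 1   # learn the link
        self.previous = pattern
        return self.predict(pattern), surprised

# After repeatedly seeing 'ball thrown' followed by 'ball arrives', the predictor
# expects the catch; an outcome that breaks the learned pattern registers as surprise.
p = ToyPredictor()
for event in ["ball thrown", "ball arrives", "ball thrown", "ball arrives",
              "ball thrown", "ball misses"]:
    prediction, surprised = p.observe(event)
    print(f"{event:12s} -> next: {prediction}, surprised: {surprised}")
```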

It is this seemingly simple capacity that allows us to do all we do: you

reach out a hand to catch a ball, predicting and intercepting its trajectory for the

desired catch. If your prediction is not accurate, you may be surprised or

annoyed as the ball slides past your hand – the real-world inputs entering at the

bottom of your brain’s computational nets don’t match the top-down prediction

signals generated and sent down by higher order, pattern-linking predictors to

meet them. This same prediction-making is described on a higher level as the

brain’s ‘modeling’ capacity by Read Montague in his Your Brain is (Almost)

Perfect. On a neuro-computational level, Hawkins’ networks predict future sets of

incoming sensory inputs, and what Montague describes as ‘modeling’ is an

interesting bigger picture parallel on the level of much more complex pattern

generation: the brain is able to predict, often based on prior experience, the

results of certain choices and creates models of the possible scenarios that might

result. “Predictions allow an organism to evaluate future events before they

actually occur, permit the selection and preparation of behavioral reactions, and

increase the likelihood of approaching or avoiding objects labeled with

motivational values” (Schultz, 1998, p. 1). This ability to model possible outcomes

of choices also allows the brain to make an informed decision based

on what potential future is most suited to its goals.

For example, in choosing whether or not to cross the street with heavy

traffic flow in order to be on time for work, one might first run through the possible

outcomes: make it to the office on time, or be flattened by a 16-wheeled semi.

These models may come from past personal experience, but many don’t – if you

had been run over by a semi before, you likely wouldn’t be in any condition to

caution yourself against being run over again. So where do our models, or

predictions, of the futures that our choices may result in come from when they do

not come from personal experience? For one, they come from others’

experiences: after seeing someone or something else get run over, our brains

transfer the experience to our own arsenal of memories. We often learn even

more from others’ mistakes than we do from our own. For another, our brain is

exceedingly good at drawing parallels between scenarios (this talent is a function

of Hawkins’ pattern-recognizing networks discussed above). After seeing an

object in motion collide with another object, for example a bowling ball and a

bowling pin, our brain catalogues the results and can apply them to the real-life

version: you become the bowling pin and the semi becomes the bowling ball. In

addition to these analogies, others’ experiences, and the first-person past, other

predictions may come from past fictional models made in the process of making

different decisions. In the end, wherever the predictions come from, this modeling

capability ultimately allows for ‘smart’ choices to be made – choices that favor

goal-fulfillment.

When deciding whether or not to cross a dangerous intersection in order

to be on time, what are your goals and how are they prioritized? Though it may

seem clear that ‘being late’ is better than ‘being flattened’, how does the brain

itself make any distinction between the two? The answer is value. “The brain

simulates possible future scenarios, values the fictive outcomes of each

scenario, and uses the valuation to help choose a course of action” (Montague,

2007, p. 69). It is important to note that value systems are a very “specialized kind of

cognitive mechanism, a summary mechanism” (Livingston, COGS 100 lecture,

May 13, 2013) that assigns a single number to distill a complex thing, like a

grade on a paper. “The brain starts out life richly pre-equipped with lots of value

functions” (Montague, 2007, p.54) such as the values of basic survival elements

like food and water, but the brain also learns the values of many things through

experience – learning what it should want. Because ‘staying alive’ is innately at the top of

the brain’s value scale and is valued far more highly than ‘being on time’, the brain

chooses survival, favoring the choice that leads to the outcome with

the highest value. The next time you are deciding whether to cross the road,

your brain will run a quick value-probability calculation on the results of your

potential choices and will choose the option that best supports the more highly

valued goal.
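
A minimal sketch of that value-probability calculation, with numbers invented purely for illustration (they come from no cited source): each modeled outcome gets a value and a probability, and the option with the highest expected value wins.

```python
# Hypothetical values and probabilities, invented for illustration only.
options = {
    "dash across now": [
        {"outcome": "arrive on time",    "value": 10,     "prob": 0.98},
        {"outcome": "flattened by semi", "value": -10000, "prob": 0.02},
    ],
    "wait for a gap": [
        {"outcome": "arrive a bit late", "value": -5, "prob": 1.0},
    ],
}

def expected_value(outcomes):
    """Sum of value x probability over an option's modeled outcomes."""
    return sum(o["value"] * o["prob"] for o in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("chosen:", best)   # survival-weighted values make waiting the 'smart' choice
```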

This outcome (with the highest ‘value’) is also most likely the option with

the most pleasing neurochemical result. What exactly does ‘pleasing

neurochemical result’ mean? Think about ordering food at a café: you are not

sure what to get, but you know what you like and do not like. You think about

chocolate muffins and as you do so, your mouth starts to water. You order the

muffin and enjoy its sumptuousness with satisfaction. How did the mere idea of a

muffin turn into a physical response (salivation), and how did that response affect

your decision? Montague’s reward-prediction error theory places dopamine

neurons near the heart of the answer to that question.

“[Dopamine] neurons encode reward prediction error (critic signals) as

bursts and pauses in their production of electrical impulses” (Montague, 2007,

p.106). When the neurons fire excessively, the presence of an unpredicted

reward is indicated; when the dopamine neurons fire at a regular rate, the reward

is just as predicted; and when the neurons fire at a rate lower than the normal

rate or cease firing altogether, the reward experienced is less than what was

initially expected. A dopamine surge indicating a ‘reward’ can be caused by

something as simple as the taste of chocolate, or something as complex as the

idea of having enough money to buy chocolate. Yes, an idea can cause these

neurons to fire, and it is in this way that dopamine neurons assume their more

powerful predictive, goal-orienting, decision-swaying abilities.
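
A rough way to write down the quantity those firing patterns are thought to track is a reward prediction error: what was received minus what was predicted. The mapping to ‘burst’, ‘baseline’, and ‘pause’ below is schematic shorthand for the three cases described above, not a quantitative model of dopamine neurons.

```python
def prediction_error(received, predicted):
    """Reward prediction error: what you got minus what you expected."""
    return received - predicted

def dopamine_response(delta):
    """Schematic mapping from prediction error to the firing behavior described above."""
    if delta > 0:
        return "burst (better than expected)"
    if delta < 0:
        return "pause (worse than expected)"
    return "baseline firing (just as predicted)"

# Three cases: an unpredicted reward, a fully predicted reward,
# and an expected reward that never arrives.
for received, predicted in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:
    delta = prediction_error(received, predicted)
    print(f"received={received}, predicted={predicted} -> {dopamine_response(delta)}")
```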

The idea of a reward like a muffin – which might consist of a mental

image, a ghostly gustatory sensation, or the “physical appearance” of the muffin – is

“used for predicting the much slower vegetative effects” (Schultz, 1998, p. 3). In

other words, though the ultimate ‘reward’ effect of a food like a muffin lies in

the influence of its chemical content on our biology, our dopamine neurons

have learned to fire pre-emptively when the stimuli that consistently predict the

eventual carbohydrate reward – the sight, the smell, the thought of it –

are experienced. In fact, “about 75% of dopamine neurons show phasic

activations when animals touch a small morsel of hidden food during exploratory

movements in the absence of other phasic stimuli” (Schultz, 1998, p. 3). In this

way, when you see a muffin, your dopamine neurons may fire in expectation of

the gustatory and digestive implications of eating the muffin, and in so doing

these dopamine neurons entice you to fulfill their prediction by choosing to eat

the muffin. Analogously, ideas about other goals can act as ‘rewards’ in the

brain’s dopamine driven prediction-error system: the idea of getting a good job is

a predictor of a steady income which in turn is a predictor of a steady food source

and comfortable living conditions, two of the body’s evolutionarily programmed

goals. This ‘idea’ of a job hijacks your dopamine system and steers you toward

success, helping you make the choices that will most likely lead to the realization of

that idea and the results that follow. However, dopamine circuits are no

substitute for thinking. For one thing, they become acclimated to rewards: once the

system learns to ‘predict’ a certain reward through feedback loops, the dopamine

neurons no longer fire at heightened levels when that stimulus is encountered – they

settle back to the ‘normal reward’ firing rate – which can dull the signal even for

goals that remain important. For another, dopamine circuits cannot generate

alternative solutions to problems; that job is left to your memory and invariant

representation recall.
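
The acclimation described above falls out of a simple learning rule: nudge the prediction toward what was actually received on every trial, and the prediction error (and, schematically, the ‘extra’ firing) shrinks toward zero. The learning rate and reward value below are arbitrary illustration numbers, not figures from the cited sources.

```python
predicted = 0.0       # the system starts out not expecting the reward
reward = 1.0          # the muffin is consistently rewarding
learning_rate = 0.3   # arbitrary illustration value

for trial in range(1, 9):
    delta = reward - predicted           # prediction error ('burst' size)
    predicted += learning_rate * delta   # move the prediction toward reality
    print(f"trial {trial}: prediction error = {delta:.3f}")

# After a handful of trials the error is near zero: the reward is fully
# predicted, and the once-heightened response settles back to baseline.
```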

So far we have seen that all decisions are made through prediction and

model generation: the modeled outcomes are valued (in part by dopamine neurons)

with respect to certain goals, and those values are used to select the ‘best choice’. So what happens

when you make a quick, low-impact decision to bike around the left, as opposed

to the right, side of a puddle in the middle of the sidewalk? It turns out that

though we may not know a whole lot in detail about what goes on, we do know

roughly where it goes on, thanks to studies of patients with Parkinson’s

disease. Patients given “dopamine washes” (injections of dopamine) recover

much of their lost capacity to function normally. Before the wash, patients have

emotionless expressions and shaky, almost paralytic movements, but after the

injection they regain much of their mobility and personality (Montague, 2007,

p.155). This suggests that the extra dopamine has in some way temporarily

repaired their damaged valuation system: “The difference in value between two

behavioral options is ‘read out’ by dopamine fluctuations… but in PD these

fluctuations are so small that they are all about the same size” (Montague, 2007,

p.156). This merely emphasizes the subtlety of dopamine’s role. When PD

patients are given the wash, however, the increased amount of dopamine allows

them to detect smaller discrepancies between the values of different

experiences. Clearly the ability to distinguish between subtle changes in

dopamine concentration is essential to decision-making; without it, a PD patient

freezes up to save energy rather than choosing one of two equally dopamine-

valued actions. With the ability to see different values in states other than your

present one, your brain is able to move forward, taking the steps – around the

puddle in the middle of the sidewalk – to reach those more highly valued goals.
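
One way to caricature that ‘read-out’ idea in code (an illustration of the quoted claim, not a model drawn from Montague): scale the difference in value between two options by an available-dopamine gain, and treat a scaled difference too small to detect as a freeze. All numbers are invented.

```python
def choose(value_left, value_right, dopamine_gain, detection_threshold=0.5):
    """Pick an action only if the dopamine-scaled value difference is detectable."""
    scaled_difference = dopamine_gain * (value_left - value_right)
    if abs(scaled_difference) < detection_threshold:
        return "freeze (the options look identical)"
    return "go left" if scaled_difference > 0 else "go right"

# The same two options (left of the puddle slightly better), different gains.
print("healthy gain:         ", choose(1.2, 1.0, dopamine_gain=5.0))
print("PD-level gain:        ", choose(1.2, 1.0, dopamine_gain=1.0))
print("after a dopamine wash:", choose(1.2, 1.0, dopamine_gain=4.0))
```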

Would there be an appreciable difference in the choice you made concerning

the puddle if you had more time to deliberate? Not really: the fact of the matter is

that no matter how much time you had to decide which way to walk around the

puddle, you wouldn’t take more than a few seconds to decide before following

through on that decision. Why? Because time efficiency is also a large

component of decision-making. The many goals your brain keeps track of all

have individual values, and if wasting time on the ‘how to reach the other

side of the puddle’ decision keeps you from fulfilling your goal of ‘reaching the office

on time,’ your brain will recognize this and override your indecision by stimulating

action. A relatively unimportant action with virtually no time constraints will be

carried out in much the same way as an unimportant action with time

constraints, because time efficiency and the importance of other goals

supersede the need to ‘get that choice right’.
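
That trade-off can be sketched in a few lines (all numbers invented): give deliberation itself a cost per second against the other goals it delays, and keep deliberating only when the expected improvement in the choice outweighs that cost.

```python
# Invented numbers: how much an extra second of deliberation might improve each
# choice, versus what that second costs against the 'be on time' goal.
improvement_per_second = {
    "which side of the puddle": 0.01,
    "which of two job offers": 50.0,
}
lateness_cost_per_second = 0.5

for decision, gain in improvement_per_second.items():
    verdict = "just act" if gain < lateness_cost_per_second else "worth deliberating"
    print(f"{decision}: gain/s = {gain}, cost/s = {lateness_cost_per_second} -> {verdict}")
```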

‘Getting it right,’ after all, takes time. When faced with a heavily loaded

decision, we often take days to weigh our options, making pros and cons lists

and thinking through possible outcomes. Long-term, high-impact decisions are

also usually less easily facilitated by prior experiences than are low-impact

decisions. With less immediate feedback (unlike short term decisions where the

result is more closely linked to its cause by temporal proximity), it is likely that

each long-term, high-impact decision will require a new, innovative solution. The

making of such a decision, for example choosing one of two very different jobs,

requires all the mental power we can muster. It takes time to dredge up

memories, to examine invariant representations, or to put emotions into words

and put words into lists. The computational speed of our brain is limited, as are

the pattern searching equations we run to look for solutions amidst the billions of

stored network connections. For this reason, we must deploy all the cognitive

tools at our disposal to tackle far-reaching decisions that must be made, and

each of these tools reveals complexities in the functioning of

the choice-making brain. One of these tools, language, plays an extremely

important and interesting role in our decision-making processes.

A research team from the University of Chicago set out to test the

boundaries of this role by asking one hundred and twenty-one American

students a question in English concerning a hypothetical epidemic. Half were

asked, “Should we develop a medicine that saves 200,000 lives, or a medicine

with a 33.3 percent chance of saving 600,000 lives?” while the other half were

asked, “Should we develop a medicine that saves 200,000 lives, or a medicine

with a 66.6 percent chance of saving no lives at all?” (Keim, 2012). Since “people

are, in a nutshell, instinctively risk-averse when considering gain and risk-taking

when faced with loss, even when the essential decision is the same,” (Keim,

2012) the subjects’ choices were logically inconsistent: those who answered the

question framed in the negative, with its emphasis on the loss of life, chose the less

secure option, while those who answered the positively framed question were far

less willing to take a risk. This experiment, based on the work of Nobel

laureate Daniel Kahneman, shows that choices made from language-based

questions are dependent on the phraseology of the problem. This indicates that

language’s power to highlight and emphasize certain sides of a scenario has an

incredibly large impact on our decision-making abilities.
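
A line of arithmetic makes plain why the two framings describe the same gamble (the quoted 33.3 and 66.6 percent being rounded thirds): the risky medicine’s expected number of lives saved equals the sure option’s, and only which branch gets spelled out changes.

```python
# The quoted 33.3% / 66.6% are rounded one-third / two-thirds.
sure_option = 200_000                            # lives saved for certain
risky_option = (1 / 3) * 600_000 + (2 / 3) * 0   # expected lives saved by the gamble

print(f"sure option:  {sure_option:,.0f} expected lives saved")
print(f"risky option: {risky_option:,.0f} expected lives saved")
# Both questions present the same pair of options; only the wording of the
# risky branch ('saving 600,000' vs. 'saving no lives') changes, yet choices shift.
```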

The Chicago team probed a little further into the effects of language on

choice, testing the results of the same question posed in the second language of a set

of new subjects. Astonishingly, or perhaps not so astonishingly, the percentage

of people who chose the safe option stayed consistent, no matter in what light

(positive or negative with respect to loss of life) the question was phrased. This

is, again, indicative of the innate power language holds over us – a power we escape

when we are forced to treat language as an interface between our brains and

real-world information. Answering a question posed in anything other than one’s

native tongue requires a good deal of dry dissection – grammatical

rearrangement, word translation, etc. – which, as it forces more intimate contact

with the material, no doubt helps the subject to better grasp the nature of the

question being asked. “The researchers believe a second language provides a

useful cognitive distance from automatic processes, promoting analytical thought

and reducing unthinking, emotional reaction” (Keim, 2012).

But what kind of role does language play in choices made on problems not

originally framed by language, like our dual-job-offer conundrum? In the case of a

pros and cons list, language provides structure. By taking the ‘unthinking,

emotional reactions’ and reducing them to a series of symbols, one can often

better see the issues at hand. In this way, even a first language can provide a

much-needed degree of separation from the visceral, emotional connotations or

values of a meaty decision. Just as the second language helped the subjects of

the Chicago study see the logic behind the epidemic problem, so may a first language

provide a ‘useful cognitive distance from automatic processes, promoting analytical

thought.’ Language is a useful system of organization and value assignment;

unfortunately, we do not always have the time to use it to help us to solve our

problems.

Nor do we always have time to deploy any of our best decision-tackling

tactics. In an emergency, a fast and high-impact situation, the instinctive

reactions with which we are all evolutionarily programmed take over. Sometimes

this reaction is the quintessential fear or shock-induced paralysis that occurs

when the brain cannot process data quickly enough. “The more complex the

cognitive task, the more expansive the neural circuitry needed, and the more

likely that processing time will exceed the minimum. Non‐optimal circumstances,

such as danger, may further slow information processing” (Leach, 2004). If an

individual has never encountered anything like the emergency situation – for

example an individual in a sinking ship – before, there will be minimal patterns

from which to pull an appropriate action: “This will take at least 8‐10 s under

optimal circumstances and much longer under threat… This produces a

cognitively induced paralysis or ‘freezing’ behavior” (Leach, 2004).

For all its wonders, your cognitive toolbox – ready at any point to be

whipped out to make incoming decisions with its memory recall, its language

structures, its dopamine value scales, etc., etc. – is a little odd. Is it just a toolbox

floating in mid-brain? No, of course not. You are there, aren’t you? Isn’t it your

toolbox? But what is this ‘you,’ that inescapable invariant representation that orients

your every thought? It’s ‘consciousness,’ that bizarre quality that, along with

qualia, remains the hard problem of Cognitive Science (Crick and Koch, 2003).

Many of the decisions you make each day are indeed ‘conscious’: decisions in

which your fictive models are ones you can remember, in which your final choice

can be articulated using language, in which ‘you’ are ‘aware’ of the action you

carry out and why you do so.

Consciousness is not a necessary component of decision-making if

decision-making is defined as ‘taking one action rather than another to reach a

goal’. Every machine and every living thing does as much. However, making

human-like decisions – conscious choices that balance on the precipice of

emotion and instinct – is a different story. To make those types of decisions, an

artificial unit must be embodied and must categorize and process patterns with

predictive software like ours. An artificial unit like that must operate on

computational, Hawkins-esque networks that learn, grow, and develop. It

depends on your definition of ‘decision-making’, but it is clear that if your

definition asks for anything less than the above, artificial intelligence can most certainly be

built with the capacity to make those kinds of choices. Equipped with

toolboxes stuffed with prediction-action, invariant memory, embodiment, and

feedback loops, these AIs will develop ‘consciousness’, or something very close

to what many of us understand ‘consciousness’ to be (the term has yet to be

concretely defined), simply as an emergent quality of

an organized information network (Smith, 2011).

Will there ever be an artificially intelligent being that makes slow and high-

impact, slow and low-impact, fast and low-impact, and fast and high-impact

choices? Perhaps one day far in the future, after several miracle discoveries by

as-yet-unknown names in the CogSci world. Our knowledge, and therefore our

technology, on this front is woefully thin, but for good reason: the Cognitive

Science big bang is just beginning – who knows? In the next century we may see

some decisions being made by units that are not just out of our control, but under

their own!

Works Cited

Hawkins, Jeff. (2004). On Intelligence. New York: Owl Books.

Livingston, K. R. (2013, May 14). COGS 100 lecture, Vassar College.

Montague, Read. (2007). Your Brain Is (Almost) Perfect. New York: Penguin Group.

Schultz, Wolfram. (1998). Predictive reward signal of dopamine neurons. Journal of Neurophysiology, 80(1), 1–27. http://jn.physiology.org/content/80/1/1.full.pdf

Keim, Brandon. (2012, April 24). Thinking in a foreign language makes decisions more rational. Wired. http://www.wired.com/wiredscience/2012/04/language-and-bias/

Leach, John. (2004). Why people ‘freeze’ in an emergency: Temporal and cognitive constraints on survival responses. Aerospace Medical Association.

Crick, Francis, & Koch, Christof. (2003). A framework for consciousness. Nature Neuroscience, 6(2), 119–126. http://www.nature.com/neuro/journal/v6/n2/full/nn0203-119.html

Smith, Kerri. (2011, August 31). Neuroscience vs philosophy: Taking aim at free will. Nature, 477, 23–25. http://www.nature.com/news/2011/110831/full/477023a.html