
Inductive Thinking


Inductive arguments are those in which the premises are intended to provide support, but not conclusive evidence, for the conclusion.

To use the example we have been using in the book, in deduction we argue that “All fish have gills, tuna are fish, therefore tuna have gills.” In induction we argue that “Tuna, salmon, cod, sharks, perch, trout, and other fish have gills, therefore all fish have gills.”

To be even more precise, in using deductive arguments we make explicit in the conclusion what is implicit in the premises. In inductive arguments, we extend the premises and make a claim beyond the cases that are given. Induction hazards an educated guess based on strong but not on absolute proof about some general conclusion that can be drawn from the evidence.

However we characterize induction, we can see that it is not nearly as reliable as deduction because the conclusion is never certain.

In the previous example, it is probably true that all fish have gills, but we have not examined all species of fish, so we can never be certain that our claim is true. The same can be said for the statement that the sun will rise every day, which is based on all recorded instances in the past but not on all possible instances.

Because inductive arguments do not guarantee that their conclusions are true, we evaluate them according to the strength of the support they provide for their conclusion.

An inductive argument is strong when its premises provide evidence that its conclusion is more likely true than false. An inductive argument is weak when its premises do not provide evidence that its conclusion is more likely true than false.

Instead of striving for certainty, we have to settle for a high degree of probability. Used properly, induction can lead to extremely reliable generalizations, as science has repeatedly shown. For example, Charles Darwin established the theory of evolution using inductive reasoning.

One of the most basic, most common, and most important kinds of knowledge we seek is knowledge of cause and effect. Why didn’t my alarm clock go off when it was supposed to? Why did I get a “D” on my critical thinking exam? We want to know the cause of what happened. In the absence of a good account, we will often accept a bad one - as in the case of superstition and mythology. Some people have believed that they can appease the gods by sacrificing a virgin. Some people believe that if a black cat crosses their path, bad luck will follow, and so forth.

To bring rain we may not do a rain dance, but we are only half-joking when we say, “Of course it rained; I just washed my car.”

In all of these cases, a false connection has been established between two events such that we assume that one event is responsible for the other when they are actually unrelated.

It can be difficult to recognize genuine causal connections and to distinguish them from mere temporal succession.

In our reasoning we need to separate a necessary train of happenings from an accidental one.

We can say that some events are subsequent, meaning that they just happen to follow, while others are consequent; they occur because of the earlier event. The trick is to differentiate between the two, and to identify a causal connection only when one event compels the other to occur.

We can, for example, justifiably assert that the following causal sequences took place: the water boiled because the temperature was raised to 212° F; every time I let go of the chalk, the chalk falls to the floor. In these cases the sequence was necessary, not accidental; given one event, the other had to happen.

To take an example, one that the philosopher David Hume liked: every time you have seen one billiard ball strike another, it has caused the other to move. So, you assume there is a cause-and-effect relationship there. You have witnessed the same pairing of events over and over again – it is no mere coincidence. But, Hume asks us, when you think about it, what have you really seen? Just the pairing of two events, one billiard ball striking the other and then the other billiard ball moving. You have witnessed what Hume called “constant conjunction.” The two events always happen one before the other – they are “constantly conjoined.” You never see “necessary connection” or “causal power.” Because of Hume's point, we cannot say, “I see a cause-and-effect connection”; we can only report the constant conjunction we have observed.

To make the same point, the philosopher Bertrand Russell asks you to consider yourself in the position of a chicken on a farm. Every day that you can remember, the farmer's wife has approached you and then fed you. You have come to associate the two in terms of cause and effect. But then comes the day when the farmer's wife approaches you and doesn't feed you. Instead, she wrings your neck. The moral of the story is that we need to be careful in assuming a cause-and-effect relationship between two things.

The nineteenth-century English philosopher John Stuart Mill (1806-1873) considerably refined the process of identifying causal connections. Mill began learning Greek at the age of three; by eight, he was reading Plato. He was extremely influential in the development of utilitarian ethics and was also crucial in the establishment of the first women's rights organization.

Mill specified four “methods” that can be used to recognize cause-effect chains: that of agreement, difference, agreement and difference, and concomitant variations.

The method of agreement is described by Mill as follows:

If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon.

For example, consider an individual doing research on why some students are successful in an especially difficult subject, say, mathematical logic. In reviewing the data, the researcher finds many circumstances in which students are successful in mathematical logic, such as instructors using particular approaches to teaching the subject or assigning particular tests. However, the researcher discovers that in all instances in which students are successful they are highly motivated.

High student motivation is the only condition that is common to all instances of student success in mathematical logic. From this observation, using the method of agreement, the researcher concludes that the necessary condition for student success in mathematical logic is high motivation.

Instance A: C1, C2, C3
Instance B: C1, C6, C3
Instance C: C1, C2, C4
Instance D: C1, C4, C5
Instance E: C1, C7, C6

All instances exhibit P (the phenomenon); the only circumstance common to every instance is C1.
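The method of agreement can be pictured as taking the intersection of the circumstances present in every instance of the phenomenon. Below is a minimal Python sketch; the instance data are hypothetical and simply mirror the table above.

# Method of agreement: the one circumstance common to every instance
# in which phenomenon P occurs is taken as the candidate cause.
instances = {                     # hypothetical data mirroring the table above
    "A": {"C1", "C2", "C3"},
    "B": {"C1", "C6", "C3"},
    "C": {"C1", "C2", "C4"},
    "D": {"C1", "C4", "C5"},
    "E": {"C1", "C7", "C6"},
}
common = set.intersection(*instances.values())   # circumstances shared by all instances
print(common)                                    # {'C1'} -- the candidate cause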

Although this method can be useful, it suffers from a major defect: there is very often more than one common factor. In the example of the students, they may have drunk from the same water fountain, been to the same party the night before, been exposed to someone with a contagious disease, and so forth. This having been said, Mill's methods are a form of inductive reasoning. There was a recent outbreak of E. coli at a county fair, and health officials were able to determine that water was the source of the deadly E. coli by using causal reasoning like Mill's.

The method of difference is described by Mill as follows:

If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save for one, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon.

Suppose, for example, that a group of students eats dinner in a dining hall and that none of them becomes ill except for the one who ate pumpkin pie for dessert. She had eaten the appetizer and the main course just as the students who did not become ill had.

Prior factors      Effect
a, c, e, f, h      no illness occurred
a, d, e, g, i      no illness occurred
b, d, e, f, h      no illness occurred
b, c, e, g, j      illness occurred

Therefore j is the cause
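The method of difference can be sketched the same way: compare a case where the illness occurred with an otherwise similar case where it did not, and take the factor present only in the former. A minimal Python sketch with hypothetical factor sets (simplified for illustration):

# Method of difference: two cases alike in every prior factor save one;
# the factor present only in the case where illness occurred is the candidate cause.
illness_case    = {"b", "c", "e", "g", "j"}   # illness occurred
no_illness_case = {"b", "c", "e", "g"}        # same factors except one; no illness
candidate = illness_case - no_illness_case
print(candidate)                              # {'j'}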

The problem with this approach is that, just as the areas of agreement can be numerous, so can the differences. Because of the number of variables involved, we can never be sure when we have found the consequential difference. Even though the pumpkin pie may appear to be the cause, it may not be; there could have been additional variables. For instance, she could have broken up with her boyfriend that day, drunk alcohol the night before, and so forth. The possibilities are numerous.

To try to fill the gaps in both methods, Mill suggests a third approach called the joint method of agreement and difference. Here we judge as the cause that element which all preceding events have in common (agreement) after factoring out any common elements that did not result in the subsequent event (difference). We are then left with the one common element present only in positive instances, and that is taken as the cause.

Prior factors      Effect
a, c, e, f, h      illness occurred
a, d, e, g, h      illness occurred
b, d, e, f, h      illness occurred
b, c, e, g, i      no illness occurred
a, d, e, g, i      no illness occurred
a, d, e, f, i      no illness occurred

Therefore h is the cause

Both e and h are present in cases where illness occurred, but by extending the number of cases further, e drops out as a possible cause: e is present even when there is no illness, so it cannot be the cause. h, on the other hand, is present only (and always) when illness occurred, so it must be the cause.
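That elimination step is easy to see as set operations. Here is a minimal Python sketch of the joint method, using the factor sets from the table above:

# Joint method of agreement and difference: the candidate cause must be
# present in every case where illness occurred (agreement) and absent
# from every case where it did not (difference).
positives = [{"a","c","e","f","h"}, {"a","d","e","g","h"}, {"b","d","e","f","h"}]
negatives = [{"b","c","e","g","i"}, {"a","d","e","g","i"}, {"a","d","e","f","i"}]
present_in_all_positives = set.intersection(*positives)   # {'e', 'h'}
present_in_some_negative = set.union(*negatives)          # factors ruled out as causes
candidates = present_in_all_positives - present_in_some_negative
print(candidates)                                         # {'h'}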

So, as with the method of difference, when pumpkin pie appears to be the cause we can ask whether anyone who ate pumpkin pie did not get sick. If we find such persons, we can eliminate pumpkin pie as the cause of the illness.

The last approach, the method of concomitant variations, is usually employed when a continuous flow of events is involved and we cannot control for the negative occurrences. Here we try to establish causation by recognizing a correlation in the way one set of events varies in relation to another. That is, we see a correlation in degree and regularity between two events, such that we infer that the first must be causally related to the second.

[Chart omitted: concomitant variation between blood pressure and the incidence of heart attacks.]

For example, people have observed that the height of the tide depends upon the phases of the moon. When the moon is full the tide is highest; a half-moon is followed by a medium tide; and a low tide seems to be related to a quarter or a crescent moon. Because of the consistency and predictability of the relation, we can infer a cause-effect link: the fuller the moon, the higher the tide.

Other examples are the age of a tree and its thickness, and the darkness of our tan and the length of time we were in the sun. Economists use this method in declaring that as mortgage rates decline, investment in homes increases. Freudian psychologists argue that people's freedom varies inversely with their neuroses: the more neurotic they are, the less they are in charge of their lives.
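One simple way to make "varies in relation to" precise is to compute a correlation coefficient between the two series. Below is a minimal Python sketch; the paired measurements are invented purely for illustration.

# Concomitant variations: two quantities that rise and fall together.
# Pearson's correlation coefficient, computed by hand on made-up data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]      # e.g. a hypothetical "phase of the moon" scale
ys = [1.1, 2.0, 2.9, 4.2, 5.1]      # e.g. hypothetical tide heights
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)
var_y = sum((y - mean_y) ** 2 for y in ys)
r = cov / (var_x ** 0.5 * var_y ** 0.5)
print(round(r, 3))                  # close to 1.0: the two series vary together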

Aside from Mill's formal methods, one basic way of proving causal connections is to ask whether the second event could have occurred without the first. If it could not, then the first event can be named as a cause. In technical terms this means identifying the first event as a necessary condition for the second, a sine qua non or indispensable prior factor. Consider this example from the Moore and Parker Critical Thinking text:

The presence of oxygen is a necessary condition for combustion.

This tells us that we can’t have combustion without oxygen, or “If we have combustion (C), then we must have oxygen (O).” Notice that the necessary condition becomes the consequent of a conditional: If C then O.

A sufficient condition guarantees whatever it is a sufficient condition for. Being born in the United States is a sufficient condition for U.S. citizenship – that's all one needs to be a U.S. citizen. Sufficient conditions are expressed as the antecedents of conditional claims, so we could say, “If John was born in the United States (B), then John is a U.S. citizen (C)”: If B then C.

You should also notice the connection between “if” and “only if” on the one hand and necessary and sufficient conditions on the other. The word “if,” by itself, introduces a sufficient condition; the phrase “only if” introduces a necessary condition. So, the claim “X is a necessary condition for Y” could be symbolized “Y only if X,” that is, “if Y then X.”
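In symbols (a sketch using the letters from the examples above: O for the presence of oxygen, C for combustion, B for being born in the United States):

\[
\text{$O$ is necessary for $C$} \iff (C \rightarrow O) \qquad \text{``$C$ only if $O$''}
\]
\[
\text{$B$ is sufficient for $C$} \iff (B \rightarrow C) \qquad \text{``if $B$ then $C$''}
\]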

Some other examples: In sports, having a positive attitude is a necessary condition for winning; you can't win without it. However, it may not be sufficient. You also need good training, strength, skill, stamina, a mutually supportive team, and so forth.

It is sometimes said that to be happy we need good health. However, good health may be a necessary condition but it is not a sufficient condition for happiness. We would probably be unhappy if we were not healthy, but just being healthy is not enough to make us happy. As for what the sufficient conditions are for happiness, that has been a quest of philosophers and humankind for centuries.

Sometimes conditions are not the same as causes. In the case of a fire, a spark is both a (necessary) condition and a cause, but if I lend a friend my car which he then drives into a tree, injuring himself, my lending him the car did not cause the accident even though it was a necessary condition for it.

A distinction often made among causal connections is between a proximate and a remote cause. A proximate cause is that which immediately triggers an event. It functions as the factor that precipitates some happening. For example, the proximate cause of a person’s death could be heart failure.

A remote cause, on the other hand, is a background cause that ultimately produces a certain effect; these causes are usually multiple. They stretch backward in time as links in the cause-effect chain, and contribute to the inevitable and final outcome.

For example, the proximate cause of a death might have been heart failure but the remote causes could have been a gunshot wound, preceded by a jealous quarrel.

At a criminal trial the prosecuting attorney will often stress the proximate cause while the defense attorney will draw attention to the remote ones. For example, a prosecutor might emphasize that the accused was caught stealing a toy. The defense attorney might argue that it was Christmas, the person was unemployed, she didn't have any friends or family, she was too far down on the waiting list for Toys for Tots-type programs, and so forth.

Each attorney’s case seems convincing because each is referring to a different type of cause.

Some causes are certainly main ones and others are peripheral, but rarely do we find one event that can be labeled as the cause.

Imagine that you are a child and that your father enters the living room and asks what caused the large mirror over the fireplace to break. The proximate cause was that the mirror, a very fragile object, was struck with sufficient force by another object of sufficient rigidity. But your father is not interested in the proximate cause of the mirror’s breaking. He is looking for something else.

The second type of cause that we can identify is a remote cause. A remote cause of a given event is part of the chain of events that led to the occurrence of that event. Typically, for any given event, there are many remote causes. For example, the remote cause of the broken mirror might have been a shoe flying through the air. This is an event within the chain of events that led to the mirror’s breaking. But this does not satisfy your father either. So you tell him that if your sister had not let go of the shoe, the mirror would not have broken. You have identified another remote cause, yet it, too, does not satisfy your father.

The nature of the information sought determines how far back in the chain of events one needs to go in seeking a remote cause. In the case of the broken mirror, your father continues to question you and eventually discovers that you were sitting on the fireplace mantel reading aloud your sister’s diary, which she had always kept hidden. Finally, your father has the answer he has been looking for.

Even with these methods, causal reasoning presents several difficulties. 1. Distinguishing cause and effect. In the method of concomitant variations, as well as in other methods, it is sometimes hard to determine which factor is the cause and which the effect.

For example, George seems unusually jittery and remarks that he did not sleep well. His wife thinks George’s insomnia (the feature about George in question) was caused by his jitters (the only relevant difference). She may fail to consider the possibility that George’s being jittery was the effect of his poor sleeping rather than the cause.

Do the times create great leaders, or do great leaders create the times?

2. Causation and correlation. Sometimes, two things or events are clearly associated or linked. Where you find X, you will also find Y. A relationship such as this, in which two things are frequently, or even constantly, found together is a correlation. In a correlation, two things share a mutual relationship; where one is found, the other is often, or always, found. By contrast, in the relationship of causation, one thing produces or brings about the other. Sometimes, a correlation is an indicator of a cause-and-effect relationship.

From the text: chance correlations must be guarded against.

For example, Arizona has a high death rate from lung disease. However, that does not mean the climate is unhealthy, but only that many people with lung disease move to Arizona (for the clean air). In the same way, in Holland the more storks there are, the greater the number of babies. Does that mean storks bring babies, as mother told us? No; rather, as the number of buildings grows with the population, more nesting areas become available for storks. Storks do not bring babies, but babies do bring storks.

3. The logical and the psychological. A third problem has to do with our tendency to attribute causation to events that are connected only periodically, not constantly. The prime example is that of gambling. The steady gambler is the steady loser since the odds are always with the house. However, gamblers are rewarded sometimes and that reinforces their belief that they have a winning system (or good luck). A behavioral psychologist tells us that intermittent reinforcement is a very powerful tool.

From a logical perspective, the fact that the gambler usually loses is proof against the gambler’s idea that her system works, but from a psychological viewpoint the occasional win confirms the gambler’s belief. Obviously, it is more realistic to look at this situation from a logical perspective.

Steps for distinguishing genuine causal relationships from mere temporal sequences. First we must apply Mill's four methods:

1. Agreement
2. Difference
3. Agreement and difference
4. Concomitant variations

Then we should differentiate between:
1. Necessary and sufficient conditions
2. Proximate and remote causes

Finally, we should be careful to distinguish:

1. Cause from effect
2. Causation from correlation
3. The logical from the psychological

Similes and metaphors are figures of speech, poetic devices that draw together otherwise dissimilar events, objects, or ideas in a striking comparison.

Similes, from the Latin, meaning “likeness,” use the terms “as” or “like” to make the comparison explicit, whereas metaphors, from the Greek meaning “transfer,” dispense with the indicator terms and imply the connection by substituting the language of the one for the other.

Whereas similes and metaphors compare things that are essentially different except for one similarity, analogical arguments compare things that are alike in all essential respects and are then claimed to be alike in some further respect.

From the Greek, ana logon, “according to a ratio,” analogies declare a relationship between two things, a parallel connection, usually between ideas or a set of ideas.

In mathematics, for example: 5 is to 10 as 10 is to X, where X is 20.

Or, up is to down as right is to? Left, because the relationship is one of opposites. These are analogy questions.
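Worked out, the first analogy is simply a statement about equal ratios:

\[
\frac{5}{10} = \frac{10}{X} \quad\Longrightarrow\quad 5X = 100 \quad\Longrightarrow\quad X = 20.
\]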

An analogy is a comparison of things based on similarities those things share.

Although analogies are interesting and important for many reasons, including their use in poetry, we shall focus on one: their importance in constructing inductive arguments.

Arguments from analogy claim that certain similarities are evidence that there is another similarity.

Extended beyond mathematics, analogical reasoning has had an extremely wide application.

For instance, physical scientists have argued that the atom is like a miniature solar system, so whatever physical forces disrupt the one will disrupt the other.

Just prior to the Revolutionary War some royalists argued that the colonies were like the children of the mother country, and just as children should remain loyal to their parents, the colonies should not revolt against England. On the other hand, the revolutionaries argued that the colonies were like fruit in an arbor, and when the fruit is ripe it is natural that it should drop from the tree.

These examples illustrate the nature of analogical argument, but the last example also shows one of its basic weaknesses. That is, almost anything can be proven by carefully selecting the comparison.

If we want to argue for the blessings of old age, we can compare it to the maturing of a fine wine, or say that one who achieves senior status in the community acquires patience and wisdom, free from the tyranny of the passions.

On the other hand, we could show the sadness of old age by comparing it to a house that is decrepit and crumbling, a pitiful ruin dimly reflecting its former dignity.

The English theologian William Paley (1743-1805) presented one of the best known analogical arguments. Paley tried to support the view of St. Thomas Aquinas that the world exhibits evidence of a purposeful design and therefore proves the existence of an intelligent designer, that is, God.

Paley did this by comparing the world to the mechanism of a watch. If we were on a deserted island and found a watch ticking away in perfect order, we would assume that a watchmaker had produced the watch. The odds of all the random parts coming together and forming a functioning watch by pure dumb luck seem vanishingly small. In the same way, it is unlikely that just dumb luck and a big bang could create a world such as this that is well-organized and functional.

However, we could also compare the world to an organism rather than a mechanism, one with biological parts that can become diseased; with systems, vital organs, and limbs that develop and degenerate; and with energy and matter at the core, not mind or spirit: the “blind watchmaker” objection.

In an inductive generalization, we generalize from a sample of a class or population to the entire class or population.

In an analogical argument, we “generalize” from a sample of a class or population to another member of the class or population.

Three rules help us test the strength of an analogical argument. 1. The two cases must be alike in all essential respects, and the greater the relevant similarities, the more probable the argument.

For example: Jim and Tim are both burly and play football. Jim also wrestles. So, Tim must also wrestle. This is obviously a weak analogy. It would be made stronger if it were noted that they are best friends, rarely do anything apart, attend a college that gives scholarships only to athletes who play more than one sport, and so forth.

2. The greater the number of cases compared, the stronger the probability of the conclusion.

For example: Jim’s Buick leaks oil. Therefore, Tim’s Buick will leak oil, also.

One case is not enough to support the conclusion. If we examined 5,000 Buicks and all of them leaked oil, we would have a much stronger case.

3. The greater the dissimilarity of the cases used as the base of the analogy, the higher the probability of the conclusion.

Example from the book: a company is like a football team in that both are organizations of individuals devoted to the achievement of a common goal, so just as teamwork is necessary to winning in football, teamwork is essential to business success.

If the characteristics apply to high school teams as well as college teams, professional and amateur, and so forth, that is stronger evidence than citing just one football team.

That is to say, if all these subsets exhibit the same characteristics plus the factor of teamwork, then the argument that business (which is similar to them) should do likewise becomes more powerful.

If all three rules are followed, the likelihood of the analogy being correct is increased considerably, although we can never be certain of our conclusion.
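A toy Python sketch of how rules 1 and 2 work together; every name and attribute below is hypothetical, and the "strength" count is only a crude illustration, not a real measure of inductive support.

# Rule 1: the analogy is stronger the more relevant respects the cases share.
# Rule 2: the conclusion is more probable the more observed cases we have.
observed_cases = [                                   # cases known to have the target feature
    {"burly", "plays_football", "multi_sport_college", "wrestles"},
    {"burly", "plays_football", "multi_sport_college", "wrestles"},
]
new_case = {"burly", "plays_football", "multi_sport_college"}   # target feature unknown
shared = set.intersection(*observed_cases) & new_case   # relevant respects in common
strength = len(shared) * len(observed_cases)             # crude count, illustration only
print(sorted(shared), strength)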

Many of the arguments used by lawyers in the United States and Canada to support a case at trial are analogical arguments. The reason is that the legal systems of these countries were derived many years ago from the English system, and an essential feature of the English system is its dependence on precedent. According to the requirement of precedent, similar cases must be decided similarly.

Consider a “law” that we are all familiar with, the First Amendment to the U.S. Constitution, which provides for freedom of speech and religious expression. Suppose that you decide, in reliance on the First Amendment, to pass out religious pamphlets on a downtown street corner. Suppose further that most of the people you hand your pamphlets to merely glance at them and then throw them on the street, and that the gathering litter makes the area look like a garbage dump.

To prevent the litter, the police tell you that you can hand out your pamphlets only in the vicinity of trash cans. You object that such a restriction violates your First Amendment rights, and you take the issue to court.

In presenting your case, your lawyer will argue that the case is analogous to a number of other cases in which the state attempted to limit not the content of religious expression, but the time, place, and manner of its expression. Your lawyer will attempt to show that your case is analogous to cases in which the government failed to prove that its restriction was appropriately tailored.

As in law, arguments from analogy are also useful in deciding moral questions. Find examples of arguments from analogy in the Moral Reasoning handout.

Arguments from analogy are found in many areas of study and have many practical applications. Once again, let’s consider law:

American law has its roots in English common law, so legal decisions are often made on the basis of precedent. For example, in deciding whether or not the free speech guaranteed by the First Amendment applies to cyberspace communications, a judge would be expected to appeal to earlier and analogous free speech cases.

In deciding whether another case is analogous, we must apply our rules to test the strength of analogical arguments:

The two cases must be alike in all essential respects, and the greater the number of relevant similarities, the more probable the argument.

Are there a good number of relevant similarities, and few, if any, relevant dissimilarities? Is the conclusion of the judicial ruling properly specific?

Arguments from analogy are often effective in matters of ethics. One strategy used in moral reasoning is to argue that a controversial issue is analogous to one that is not controversial. In her article “A Defense of Abortion,” Judith Jarvis Thomson argues in favor of the morality of abortion. Using a creative scenario, Thomson argues that a person would have no moral obligation to stay connected to a famous violinist who was linked to her kidneys without her knowledge or consent. She then argues by analogy that a woman similarly has no moral duty to carry her pregnancy to term. There are some similarities here. There are also dissimilarities. The question is, how relevant are they? Does the analogy work?

From Moore and Parker: One common strategy for establishing the truth of a claim is showing that its contradictory implies something false, absurd, or contradictory. This strategy, called indirect proof, is based on the same idea as remarks like this: “If Phillips is conservative, then I’m the King of England.” Obviously, this is just a way of saying that Phillips is not conservative, because it is clear that I am not the King of England.

If we want to argue that a claim is true by using indirect proof, we begin with its contradictory. To argue either for P or for not-P, we begin with the other one and try to show that it implies a false claim.

For example, if we wanted to prove that your critical thinking instructor is not wealthy, we would start by assuming the opposite, that is, your critical thinking instructor is wealthy. This can be shown to imply that she can buy Dodge Vipers, mansions, designer clothes, and so forth. Because this is all obviously ridiculous, we have proven that, sadly, your critical thinking instructor is not wealthy.

This pattern of reasoning is sometimes called reductio ad absurdum (reducing to an absurdity, or RAA, for short), because it involves showing that a claim implies a false, absurd, or contradictory result. Once again, the strategy is this:

To prove P:
1. Assume not-P.
2. Show that a false, absurd, or contradictory result follows from not-P.
3. Conclude that not-P must be false.
4. Conclude that P must be true.
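In standard propositional notation the pattern can be written as a single inference (a sketch; the symbol ⊥ stands for any false, absurd, or contradictory result):

\[
\frac{\neg P \;\vdash\; \bot}{\vdash P} \quad (\text{reductio ad absurdum})
\]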

In the case of reducing an analogy to an absurdity, we need to show that the two things compared have so many relevant dissimilarities that assuming they are similar becomes ridiculous.