



September 28, 2008

Chapter Two
Repeatable Phenomena

Introduction

The primary object of inquiry in generative grammar is the Computational System (CS) and not linguistic performance; yet it is the informants' acceptability judgments on a given sentence (often under a specified interpretation) that are considered crucial data for or against hypotheses about the CS. This is the inherent problem that we must grapple with in making generative grammar an empirical science, and it gives rise to the concern in ().

() How should we proceed in order to ensure progress toward the goal of discovering the properties of the CS?

More specific concerns stemming from () include those in ().

() a. What qualifies as data for research concerned with the properties of the CS?
b. How could we evaluate our hypotheses about the CS?

This chapter is concerned with the question in (a). The answer we suggest is repeatable phenomena. We maintain that what is provisionally identified as a repeatable phenomenon is a generalization that we can reasonably attribute to some property of the CS; conversely, an alleged generalization that has not been shown to constitute a repeatable phenomenon does not yet qualify as data that can be reasonably regarded as a reflection of properties of the CS. (b) will not be discussed in any depth until chapter 3. What follows in this chapter is intended to be a preliminary discussion for what will be presented in chapter 3, where we will discuss in more depth the model of judgment making by the informant, how the theory of the CS is embedded in it, and its various consequences.

Let us restrict our attention to cases where the availability of a specific interpretation is crucially considered. Given the assumption that the CS-based interpretations are based on LF (representations), we might thus maintain that evidence for or against a theory of the CS can be reliably extracted from performance data (i.e., informants' linguistic judgments) only if we have some kind of theory about ().2

() How the informants 'get to' the 'intended LF' on the basis of the sentence presented to them.

That is to say, what goes on when the informants make their linguistic judgment on a given example under a given interpretation must involve reference to the CS.3 A reasonable assumption, suggested in recent works by A. Ueyama, is that, when presented with a phonetic string and an 'intended interpretation', the informant 'tries to' come up with a numeration that is likely to result in both a PF representation corresponding to the phonetic string and an LF representation in which the necessary condition(s) is/are satisfied for the intended interpretation, and checks whether or not the LF representation thus obtained does indeed satisfy the necessary condition(s).4, 5 I will adopt this assumption and proceed to illustrate repeatable phenomena as what would most likely qualify as data for research concerned with the

1 This document does not contain the extra materials in the Appendix of the "Licensed Generalizations" as of April 10, 2008.
2 That is to say, one might consider that the relation between a theory of the CS and a theory of judgment making is analogous to that between a theory in physics and a theory of the relevant observation device, e.g., the relation between a Newtonian theory of motion and law of gravitation, on the one hand, and a theory of optics, on the other, in the context of extracting evidence for the former on the basis of observation by the use of the telescope.
3 In other words, it is the crucial assumption made here that the informants' linguistic intuitions could reveal properties of the CS only if what transpires when the informants make their judgment makes reference to the CS.
4 It will be argued in chapter 3 that this is a consequence of accepting the model of the CS put forth in Chomsky 1993 and adopted here, and of the commitment to making our hypotheses about the properties of the CS empirically testable, along with the recognition that a primary source of data for the evaluation of hypotheses about the CS is the informants' linguistic intuitions.
5 It is assumed that an LF representation gets mapped to an SR (Semantic Representation) by mechanical application of some mapping rules.



properties of the CS. What will be discussed below are (i) the locality of anaphor binding in English, (ii) the alleged local anaphors in Japanese, and (iii) the availability of bound variable anaphora interpretations in Japanese.
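The judgment-making procedure attributed to Ueyama above can be summarized in a minimal sketch. The function names and the way the callables are factored out here are our own illustrative choices, not part of the proposal:

```python
# Illustrative sketch of the judgment-making model described above.
# All names (try_numerations, derive_pf_lf, satisfies_conditions) are
# hypothetical placeholders for the components the model assumes.

def judge(phonetic_string, intended_interpretation, try_numerations,
          derive_pf_lf, satisfies_conditions):
    """Return 'acceptable' if some numeration yields a PF corresponding
    to the presented string and an LF meeting the necessary condition(s)
    for the intended interpretation; otherwise 'unacceptable'."""
    for numeration in try_numerations(phonetic_string, intended_interpretation):
        pf, lf = derive_pf_lf(numeration)
        if pf == phonetic_string and satisfies_conditions(lf, intended_interpretation):
            return "acceptable"
    return "unacceptable"
```

On this sketch, an informant's "unacceptable" verdict amounts to the failure of every numeration tried, which is why the asymmetry between *Schemas and okSchemas discussed below matters: the CS-based claim is about the nonexistence of any successful numeration.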

Repeatable Phenomena

Introduction
Consider a universal statement in (a) and a language-specific statement in (b).6

() a. An element E marked (in the mental Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.7

b. Reflexives in English (e.g., himself, herself, myself, etc.) are specified in the mental Lexicon as [+A]; and so are reciprocals in English (e.g., each other).

Given (), and given the assumption that () necessarily corresponds to a structure in which A and B are co-arguments at LF, we predict that a sentence that conforms to the schematic form in () should be judged unacceptable, regardless of the choice of lexical items used in the sentence and regardless of any additional elements one might provide in the 'unspecified parts' of the schema, as long as DP2 is marked [+A].8

() A Verb B

() *Schema1

DP1 Verb DP2[+A]
where DP1 =/= DP2

What is not specified in the schema can be freely altered; e.g., the choice of substantive lexical items in actual sentences conforming to a schema should not be 'constrained' unless that is part of the specification of the schema.10

Hence there should be numerous phonetic strings 'conforming to' the schema in (), and they should all be unacceptable under the interpretation that the 'reference/value' of DP2 is not the same as that of DP1. Let us refer to such a schema as a *Schema and to examples that conform to it as *Examples. For ease of presentation, we will often refer to a *Example conforming to a *Schema as a *Example of a *Schema. There should be no numeration for any *Example of the *Schema in () that would result in an LF representation in which the necessary condition(s) under discussion is/are satisfied; see (a). If the hypotheses in () are on the right track, the informant judgments that the *Examples of () are unacceptable should therefore be robust. () is one such example.

() A *Example of the *Schema in ():
John recommended himself.
where John =/= himself

It is not sufficient, however, to obtain robust informant judgments on a *Schema, more specifically on *Examples of the *Schema. It is also necessary to demonstrate that there are examples that differ minimally from the *Examples with respect to the crucial structural or lexical property and are judged significantly more acceptable than the *Examples. Only in the presence of such examples can we be hopeful in asserting that the status of the *Example in question is indeed due to the failure of the condition(s) to be satisfied for the relevant interpretation to arise. In other words, if there is no such example, the status of the *Example might well be due to a factor

6 We leave aside exactly how the property in (a) is to be derived in the theory.
7 In this chapter, unlike in the subsequent chapters, I do not distinguish an element A in a sentence from the LF object that corresponds to A in the LF representation corresponding to that sentence.
8 It is assumed in this work that the co-argumenthood relevant here can be expressed in theoretical terms in one way or another, while admitting that how to do so is not a trivial matter.
9 Let us proceed informally and understand that "A=B" and "A=/=B" indicate that the 'reference/value' of A is or is not the same as that of B, respectively.
10 Similarly, the addition of optional elements such as an adverbial should not make a difference unless otherwise specified.



independent of the CS.11 Let us refer to such examples as okExamples and refer to a schema of which they are instances as an okSchema.12 Two such okSchemas corresponding to the *Schema in () are given in ().

() a. okSchema1-1

DP1 V DP2[+A]
where DP1=DP2

b. okSchema1-2

DP1 V DP2[−A]
where DP1 =/= DP2

(a) and (b) are instantiations of (a) and (b), respectively.

() a. okExample(s) of okSchema in (a):
John recommended himself.
where John=himself

b. okExample(s) of okSchema in (b):
John recommended him.
where John =/= him

John and himself are co-arguments in John recommended himself, which is the surface form in () as well as (a). In accordance with (), the indicated interpretation in (a) is not 'blocked' while that in () is 'blocked'; hence the surface form (in ()/(a)) has to be interpreted as indicated in (a). Given an independent assumption that pronouns such as him are not marked [+A], the indicated interpretation in (b) is not 'blocked' by (); (b) is thus acceptable under the indicated interpretation.14

We can, on the basis of (), construct a number of *Schemas, in addition to (), one of which is given in ().

() *Schema2

*DP1 V that DP2 V DP3[+A]
where DP2 =/= DP3 and DP1=DP3

The okSchemas corresponding to () are given in ().

() a. okSchema2-1

DP1 V that DP2 V DP3[+A]
where DP2=DP3

b. okSchema2-2

DP1 V that DP2 V DP3[-A]
where DP2 =/= DP3 and DP1=DP3

The *Examples in () correspond to the *Schema in ().

() *Examples2-n

a. John thinks Mary loves himself.
where Mary =/= himself and John=himself

b. John thought that Mary had recommended himself.

11 After all, a sentence can be found unacceptable under a particular interpretation for a variety of reasons, even if there is a numeration corresponding to it whose LF would satisfy the necessary condition for the interpretation. For example, processing difficulty of some sort could make all or most of the relevant examples unacceptable. Chapter 3 provides further discussion.
12 As in the case of "*Examples of a *Schema," we will sometimes refer to an okExample conforming to an okSchema as an okExample of an okSchema.
13 okSchemam-n stands for an okSchema that corresponds to *Schemam. The n in okSchemam-n serves the purpose of uniquely identifying a particular okSchema (among many others) that corresponds to *Schemam.
14 Why (b) tends not to allow the "John=him" reading is a matter independent of ().
15 *Examplem-n stands for a *Example that corresponds to *Schemam, and the n in *Examplem-n serves the purpose of uniquely identifying a particular *Example (among many others) that corresponds to *Schemam.



where Mary =/= himself and John=himself
c. Mary, who had firmly believed that Chomsky would recommend herself, was shocked to death when she found out that Chomsky had recommended Bill instead.
where Chomsky =/= herself and Mary=herself

The okExamples in () and () correspond to the okSchemas in (a) and (b), respectively.

() okExamples2-1-n

a. John thinks Mary loves herself.
where Mary=herself

b. John thought that Mary had recommended herself.
where Mary=herself

c. Mary, who had firmly believed that Chomsky would recommend himself, was shocked to death when she found out that Chomsky recommended Bill instead.
where Chomsky=himself

() okExamples2-2-n
a. John thinks Mary loves him.
where Mary =/= him and John=him
b. John thought that Mary had recommended him.
where Mary =/= him and John=him
c. Mary, who had firmly believed that Chomsky would recommend her, was shocked to death when she found out that Chomsky recommended Bill instead.
where Chomsky =/= her and Mary=her

The same considerations as above apply to the discussion of ()-(). The statements in (), repeated here, can be regarded as being valid only if (i) the informant judgments converge on the unacceptable status of examples such as () and (ii) the informants find examples like () and () significantly more acceptable than those such as ().

() a. An element E marked (in the mental Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.17

b. Reflexives in English (e.g., himself, herself, myself, etc.) are specified in the mental Lexicon as [+A]; and so are reciprocals in English (e.g., each other).

What makes us hopeful that the unacceptability of the *Examples is indeed due to the property of the CS as hypothesized is a significantly better acceptability judgment on the okExamples, which differ minimally from the corresponding *Examples, along with the robust judgment that the *Examples are unacceptable. In summary, () can be regarded as being valid only if (a) and (b) are demonstrated.18

() a. The informant judgments converge on the unacceptable status of *Examples of *Schemas such as () and ().
b. The informants find okExamples of okSchemas such as () and () significantly more acceptable than *Examples of *Schemas such as () and ().

When we have obtained what is described in (), we shall say that we have obtained a repeatable phenomenon.
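The twofold criterion in () can be summarized as a simple decision procedure over collected judgments. The numeric score scale and the thresholds below are illustrative assumptions of ours, not part of the proposal:

```python
# Illustrative check of conditions (a) and (b): judgments are coded as
# scores from 0 (fully unacceptable) to 4 (fully acceptable); the
# ceiling for *Examples and the required gap are assumed values.

def repeatable_phenomenon(star_scores, ok_scores,
                          star_ceiling=1, min_gap=2):
    """(a): judgments converge on unacceptability for every *Example;
    (b): every okExample is significantly more acceptable than any
    *Example (a minimum gap between the two sets of scores)."""
    converge = all(s <= star_ceiling for s in star_scores)
    gap = (min(ok_scores) - max(star_scores)) >= min_gap
    return converge and gap
```

On this coding, a single *Example judged acceptable suffices to block the claim of a repeatable phenomenon, which mirrors the asymmetry between *Schemas and okSchemas discussed in the next section.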

16 okExamplel-m-n stands for an okExample that corresponds to okSchemal-m, and the n in okExamplel-m-n uniquely identifies a particular okExample (among many others) that corresponds to okSchemal-m.
17 In this chapter, unlike in the subsequent chapters, I do not distinguish an element A in a sentence from the LF object that corresponds to A in the LF representation corresponding to that sentence.
18 In principle, the crucial contrast is between total unacceptability and the lack thereof, as will be discussed in some depth in the subsequent chapters.



*Schemas and okSchemas

The content of a claim regarding a *Schema is as in ().

() A *Example of a *Schema is unacceptable under the specified interpretation.20

In order for () to be a valid statement, it must be the case that (i) the interpretation in question arises only on the basis of some property P at LF and (ii) no *Example of the *Schema corresponds to an LF representation with property P. That is to say, no matter how patient the informants might be and no matter how many possible numerations corresponding to a *Example of the *Schema they might try, it must be impossible for the CS to yield an LF representation corresponding to the *Example that has property P. No matter what alteration might be made to the unspecified parts of the *Schema, the necessary condition(s) for the interpretation should remain unsatisfied in any LF representation corresponding to the *Example, thereby resulting in the clear unacceptability of the *Example under the interpretation. The informant judgments on *Examples of a *Schema should not be affected by what is allowed to vary in the *Schema; every *Example of the *Schema should be judged unacceptable under the interpretation, no matter how many times and no matter how many different instantiations of the *Schema are checked. That is the prediction the researchers are committing themselves to if their proposal has a consequence that includes the *Schema. Suppose that the informant judgments on *Examples of the *Schema are not robust and not much repeatability obtains on the predicted judgments. Suppose, for example, that some or even many of the informants find some or many such examples acceptable under the specified interpretation. Such a result should be taken very seriously since, according to the hypotheses in question, none of the *Examples should correspond to a numeration that could result in an LF representation that could underlie the interpretation in question.21

The empirical content of a claim regarding an okSchema, on the other hand, is as in ().

() An okExample of an okSchema can be acceptable under the specified interpretation.

If an okExample is constructed so that the only possible numeration corresponding to it necessarily results in an LF representation with property P, it will be judged acceptable under the interpretation in question. Given that the unspecified parts of a Schema (whether it is an okSchema or a *Schema) can be 'filled out' freely, it is also possible for an okExample to be constructed in such a way that there is a numeration corresponding to it that would result in an LF representation without property P. This could happen as the result of particular lexical choices, or of some additional elements not specified in the okSchema, in the actual okExample. If the informant 'goes to' such a numeration, the okExample will be judged unacceptable under the interpretation. Furthermore, even when an LF representation with P is obtained, the mere complexity of (the interpretation of) the entire example might make the informant judge it to be unacceptable. Notice that the unacceptability of the example under the interpretation in such cases would be independent of whether an LF representation compatible with the interpretation obtains for the example. That is why the empirical content of a claim regarding an okSchema is understood to be as in () rather than as in ().

() An okExample of an okSchema is acceptable under the specified interpretation.

Recall that the impossibility claimed in () is due to the claimed failure of the necessary condition(s) for the interpretation to be met in any of the possible LFs that may correspond to the sentence. This guarantees that, as long as the claim is valid, the informants should find the sentence unacceptable under the interpretation, and their judgments should be robust. The claim about an okExample of an okSchema in (), on the other hand, states that there can be an LF corresponding

19 The significance of the asymmetry between *Schemas and okSchemas, its conceptual basis, and its methodological and empirical consequences cannot be addressed satisfactorily until chapter 3, where we consider what significance we want to assign to the disconfirmation and confirmation of our predictions. The conceptual discussion at a somewhat intuitive and informal level in this chapter should be sufficient as a basis for the ensuing empirical discussion until chapter 3.
20 Some remarks are perhaps in order, to avoid unnecessary confusion. What is meant by an 'interpretation' here is not the interpretation of the entire sentence; it is instead meant to be part of the interpretation of the sentence. More particularly, an 'interpretation' as discussed in this work involves two linguistic expressions, such as an anaphoric relation holding between two expressions. A more accurate way of expressing what is intended might therefore be something like "the acceptability of a sentence under an interpretation that includes the relation in question." But I will avoid such a cumbersome way of phrasing it in the ensuing discussion; see chapter 3 for further discussion.
21 This paragraph looks 'denser' than the others in this document but I cannot seem to fix it...



to the sentence in which the necessary condition(s) for the interpretation would be satisfied; thus, there is no guarantee that the informants will uniformly find the sentence acceptable under the interpretation, since (i) the informant may or may not 'go to' the 'intended numeration' and (ii) the informant may or may not find the interpretation of the entire sentence natural enough to accept it. Such considerations lead us to conclude that the informant judgments cannot be expected to be nearly as robust with an okSchema-based claim like () as with a *Schema-based claim like (). Judgmental fluctuation and variation should therefore be assigned a radically different status depending upon whether we are considering a *Schema or an okSchema. To put it somewhat loosely, what the CS allows may or may not be judged acceptable, while what the CS disallows should always be judged unacceptable. This is the fundamental asymmetry between *Schemas and okSchemas, which will play a crucial role in our methodological proposal as well as in the empirical discussion in this book, as will be further articulated in chapter 3 and further illustrated in chapter 4.

Across-speaker repeatability and within-speaker repeatability
Repeatability, when addressed in generative grammar, seems usually to concern the degree of uniformity of the judgments among different speakers on some (set(s) of) examples. Let us refer to it as across-speaker repeatability. I would like to maintain that it is useful to also consider within-speaker repeatability, including across-occasion repeatability and across-example repeatability. Across-occasion repeatability can be 'measured' by checking the judgments by the same speaker on the same *Example 'on different occasions'. If the same speaker's judgments on the same *Example differ from one occasion to another, e.g., if the same speaker finds a particular *Example unacceptable on one day but acceptable on another, we cannot feel confident that the speaker's reported unacceptability of such an example is due to the CS-related property as hypothesized. I maintain that across-speaker repeatability is significant only if within-speaker repeatability obtains for each of the speakers under discussion. Most of the ensuing discussion in this chapter, however, addresses only across-speaker repeatability; see Ueyama to appear for relevant discussion.
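These two notions can be given a rough operational form. The representation of the data (speaker mapped to a list of verdicts on the same *Example across occasions) and the strictness of 'repeatable' here are our own illustrative choices:

```python
# judgments: {speaker: [verdict_on_occasion_1, verdict_on_occasion_2, ...]}
# where each verdict on the same *Example is 'acceptable' or 'unacceptable'.

def within_speaker_repeatable(verdicts):
    """Across-occasion repeatability: the same speaker gives the same
    judgment on the same *Example on every occasion."""
    return len(set(verdicts)) == 1

def across_speaker_repeatable(judgments):
    """Across-speaker repeatability is significant only if each speaker
    is within-speaker repeatable; given that, all speakers must agree."""
    if not all(within_speaker_repeatable(v) for v in judgments.values()):
        return False
    return len({v[0] for v in judgments.values()}) == 1
```

On this coding, agreement among speakers who each fluctuate across occasions does not count, which captures the claim that across-speaker repeatability is significant only given within-speaker repeatability.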

Repeatable phenomena, hypotheses, and progress in generative grammar
When the informant judgments on a number of *Examples conforming to a *Schema are as predicted and robust, and when furthermore the okExamples of the okSchemas corresponding to the *Schema are judged to be significantly more acceptable than the *Examples, we shall say that we have obtained a repeatable phenomenon. There is then reason to be hopeful that the generalization in question is a reflection of some CS-related property. We maintain that an alleged generalization that does not form a repeatable phenomenon has not (yet) attained the status of data in generative grammar; for there is no strong reason to suspect that it reflects the properties of the CS in a way that would likely lead us to a discovery of the nature of the CS.22

Although a repeatable phenomenon may in principle obtain without a theoretical characterization, expressing it in theoretical terms is crucial for the purpose of making a prediction beyond the repeatable phenomenon at hand; see the discussion in chapter 3: section 7. A prediction is made on the basis of two or more hypotheses.23 Some of the hypotheses are theory-internal and not empirically testable, at least in any direct way; but they contribute to obtaining predictions by providing a general framework within which the other, more directly testable, hypotheses are formulated. The theory-internal assumptions we adopt include the following.24

() a. The Computational System (CS) exists at the center of the language faculty.
b. The mental Lexicon exists.
c. The CS is an algorithm.
d. (i) Input to the CS is a set of items taken from the mental Lexicon.
(ii) Output of the CS is a pair of PF and LF representations.
e. The CS includes an operation Merge, which combines two items and forms a larger unit.

What has prompted the research reported in this book is the concern for ensuring progress in generative grammar. One way to try to ensure progress is by adopting the research heuristic that, except for initial assumptions such as (), every hypothesis about the properties of the CS or about the properties of items in the mental Lexicon that

22 It must be noted that the status of a repeatable phenomenon as such is necessarily provisional, and a repeatable phenomenon should be understood as a generalization that has been provisionally qualified as a reflection of some property of the CS.
23 The point is discussed as early as Duhem 1905: chapter 6, in the context of 'physical theory'.
24 One might regard them as corresponding to the hard core in Lakatos 1970, 1978.



the CS makes reference to must be accompanied by, i.e., backed up by, a repeatable phenomenon.25, 26 In a similar vein, we might characterize progress in generative grammar as follows.

() a. The more repeatable phenomena have been identified, the better.27

b. The more repeatable phenomena are expressed within the theory, i.e., in terms of the postulated concepts and relations, the better.

c. The fewer theoretical concepts and relations are needed for expressing the repeatable phenomena, the better.

Hypotheses that are backed up by a repeatable phenomenon are propositions that express the repeatable phenomena in terms of the postulated concepts and relations. They are crucially responsible for making predictions beyond the repeatable phenomena that have already been established, subjecting our proposal to empirical tests in a most interesting way.28

Summary
Our initial concern was (), repeated here.

() How should we proceed in order to ensure progress toward the goal of discovering the properties of the CS?

Stemming from () is the question in (a), also repeated here.

() a. What qualifies as data for research concerned with the properties of the CS?

This chapter is concerned with the question in (a), and the answer we suggest is a set of informant judgments as indicated in ().

() The informant judgments on a number of *Examples conforming to a *Schema that are as predicted and robust—the judgment that they are clearly unacceptable under the specified interpretation—accompanied by the informant judgments that the okExamples of the corresponding okSchemas are significantly more acceptable than the *Examples under that interpretation.

When we obtain such a set of informant judgments, we say that we have obtained a repeatable phenomenon. In the next section, we will try to illustrate and clarify what is meant by repeatable phenomena on the basis of some specific empirical materials in Japanese.29

25 In addition to propositions such as (), which are all purely theoretical and are not directly related to speaker intuitions, there may be some initial assumptions related to speaker intuitions that we have to accept without empirical justification; among them are statements having to do with the thematic hierarchy, as a condition on the order of composition at SR, such as that the "theme" gets composed with a predicate before the "agent" does. [This footnote may be addressing an issue that is beyond the scope of this chapter in terms of its current plan of exposition, and it may be deleted later.]
26 One might object that this is too restrictive a heuristic, pointing to the (massive) literature (e.g., Lakatos 1978, Kuhn 1962) on what is actually done in more mature sciences. I would like to maintain that something like this is indeed needed in a field where the researchers' task includes the identification of the relevant data. The issue will be further addressed in chapter 3; see also section 4 below.
27 How to 'count' the number of repeatable phenomena is not a simple matter, at least for the following reason: two or more repeatable phenomena understood as such at one point may turn out to be different instantiations of one single repeatable phenomenon. We should therefore not be overly concerned with what is literally expressed in (a) and (b) in regard to the 'number of repeatable phenomena'. I hope that the intended point is clear enough and, if not, that it will be clarified, at least to some extent, in the rest of this and the subsequent chapters.
28 The relevant notions will be articulated further in chapters 3 and 5.
29 In chapter 3, we will address the conceptual bases of repeatable phenomena and will also discuss how to evaluate hypotheses about the CS.



Some Illustration

Introduction
In the preceding section, we briefly discussed some paradigms in English in relation to the hypotheses in (), repeated here.

() a. An element E marked (in the mental Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

b. Reflexives in English (e.g., himself, herself, myself, etc.) are specified in the mental Lexicon as [+A]; and so are reciprocals in English (e.g., each other).

English paradigms such as those discussed in that section have been among the crucial empirical bases for theory construction in generative grammar since its earliest days, and the informant judgments on these examples seem to be quite robust, although there are issues that still need to be explored and clarified. The *Schema in () and its corresponding okSchemas in () thus constitute a repeatable phenomenon, and so do the *Schema in () and its corresponding okSchemas in ().30 The hypotheses in () are thus backed up by a repeatable phenomenon. In this section, we will provide further illustration of the notion of repeatable phenomena, this time drawing from Japanese.

In recent generative grammatical work, it is widely assumed that otagai in Japanese is a reciprocal anaphor corresponding to English each other, hence a local anaphor.31 Similarly, zibunzisin has been assumed to be a local anaphor, corresponding to English reflexives, and that assumption has also been used crucially in theoretical discussion, much as in the case of the above-mentioned assumption about otagai.32 In section , we will observe that these assumptions (i.e., hypotheses) are not backed up by a repeatable phenomenon, despite their wide 'acceptance' in the field and a great deal of reference to them in theoretical discussion.33 After discussing these alleged generalizations, we will turn to a few repeatable phenomena in Japanese in section .

Hypotheses not backed up by a repeatable phenomenon in Japanese

Zibunzisin

Let us first discuss a hypothesis about zibunzisin. The assumption that it is a local anaphor amounts to the language-specific hypothesis in (); see (b).

() Zibunzisin is specified as [+A] in the mental Lexicon of the speaker of Japanese.

As discussed above, the content of the claim that sentence is not acceptable under interpretation must be something like () insofar as the claim is concerned with a discovery of the properties of the CS.

() a. Interpretation is possible only on the basis of some property P at LF.
b. There cannot be a numeration corresponding to sentence such that the CS yields an LF representation based on it with property P.

Combined with the universal statement in (a), repeated here, () must therefore have the consequence in ().34

(a) An element E marked (in the Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

30 Informant judgments on some of those examples are reported in () below.
31 The distribution of otagai and "its antecedent," as analyzed under this assumption, has been used in various works as a probe into the nature of Scrambling, the applicability of Binding Theory to Japanese, the nature of reciprocity in natural language, the status of the subject(s) in Japanese, etc. Cf. Yang 1994, Kitagawa 1986, Nishigauchi 1992, Saito 1992, Miyagawa 1997, and many others. More recent works that make crucial reference to this assumption include Saito 2003.
32 Some references shall be given here, including Kurata 198x, Katada 198x, …, and Saito 2003.
33 Hence, if we are to accept the heuristic to be suggested in chapter 3: section 7, they should not be allowed to be used in deriving further theoretical or empirical consequences in research that is intended to discover the properties of the CS.
34 See the qualifications given in note Error: Reference source not found.

() Zibunzisin is a legitimate LF object, or can receive its interpretation, only if there is another element E that appears at LF where zibunzisin and E occupy co-argument positions of a single predicate such that the 'reference/value' of zibunzisin can be determined on the basis of that of E.

Experimental design

While it is not always straightforward to determine whether two DPs/NPs are co-arguments in a given structure in Japanese, it seems uncontroversial that A and C in examples of the form in (), as illustrated in (), are not co-arguments.

() A-{wa/ga} B-ga C-{ni/o} V1-{ru/ta} to V2-{ru/ta}

() a. A-wa/-ga B-ga C-ni horeteiru to omoikondeita.
A-TOP/NOM B-NOM C-DAT is:in:love:with that believed35

'A thought/believed that B was in love with C.'

b. A-wa/-ga B-ga C-o suisensita to omotteita.
A-TOP/NOM B-NOM C-ACC recommended that thought

'A thought that B had recommended C.'

In () B is intended to be the 'subject' of the embedded predicate V1 and A the 'subject' of the matrix predicate V2. We now have the *Schema in ().

() *Schema1

NP1-ga NP2-ga zibunzisin-{o/ni} V-{ru/ta} to V-{ru/ta}36

NP1=zibunzisin

Note that NP1 and zibunzisin in () are not co-arguments. An okSchema corresponding to () is given in ().

() okSchema1-1

NP1-ga zibunzisin-o/ni V-ru/ta

In (), on the other hand, NP1 and zibunzisin are co-arguments. It does not seem possible that, upon seeing/hearing such a *Example of the *Schema in (), the informant 'goes to' a numeration that could result in LF() where NP1 and zibunzisin would be co-arguments, especially when it is made clear that what is denoted by NP2 in a *Example of the *Schema in () is the individual who was engaged in the action denoted by the embedded verb, thereby making it clear in effect that NP2 is the subject of the embedded clause. Upon seeing/hearing an okExample of the okSchemas in (), on the other hand, it is not impossible—and in fact perhaps quite likely—that the informant will 'go to' a numeration that would result in LF() in which NP1 and zibunzisin are co-arguments.37

*Examples of the *Schema in () are thus predicted to be unacceptable under the intended interpretation. Given in () are two such *Examples.

() *Example1-n

a. John-wa Mary-ga zibun-zisin-ni horeteiru to omoikonde ita.
John-TOP Mary-NOM zibun-zisin-DAT is:in:love that believed

'John believed that Mary liked self.'

35 I use 'NOM', 'DAT', etc. in the gloss, following the common practice in the field, without making any theoretical commitment concerning a proper analysis of those 'case markers'.
36 The choice of "NP" instead of "DP" in () and other schemas (and examples) in Japanese to be given below is inconsequential in this paper.
37 One may wonder how it could be possible for the informant, upon seeing/hearing an okExample of the okSchemas in (), to 'go to' a numeration that would result in an LF in which NP1 and zibunzisin are not co-arguments. The conceptual aspects and the empirical aspects of the relevant issues will be addressed further in chapter 3 and chapter 4, respectively.


b. John-wa Mary-ga zibun-zisin-o suisensita to bakari omotteita.
John-TOP Mary-NOM zibun-zisin-ACC recommended that only thought

'John firmly believed that Mary recommended self.'

Recall that any *Example of a *Schema is predicted to be unacceptable under interpretation , no matter how many times and no matter how many different instantiations of the *Schema are checked.

A variant of () is the *Schema in ().

() *Schema1'

[[NP2-ga zibunzisin-o V-ru/ta to V-ta] NP1]-wa …
NP1=zibunzisin

A *Example of the *Schema in () is given in ().

() *Example1'-n

Chomsky-ga zibun-zisin-o suisensuru to omoikonde ita John-wa,
Chomsky-NOM zibun-zisin-ACC will:recommend that believed John-TOP

Chomsky-ga Bill-o suisensita to sitte gakuzen-to-sita
Chomsky-NOM Bill-ACC recommended that know be:shocked

'John, who had firmly believed that Chomsky would recommend self, was shocked to death when he found out that Chomsky had recommended Bill instead.'

Corresponding to the Japanese *Examples in () and () are the English *Examples in (), repeated below as () and ().

() (=(a, b))
a. *John thinks Mary loves himself.
b. *John thought that Mary had recommended himself.

() (=(c))
*Mary, who had firmly believed that Chomsky would recommend herself, was shocked to death when she found out that Chomsky recommended Bill instead.

Note that the Japanese and the English *Examples are almost direct translations of one another, and hence the *Examples in () and () in Japanese are predicted to be as unacceptable as those in English in () and ().

Results

In the experiments whose results are reported below, the informants are asked, mostly on-line, how acceptable or unacceptable they find each example on the survey sheet/page under a specified interpretation. Each example is judged on the 'scale' given in (), and the five choices in () get computed as in (), with "−2" corresponding to "Bad" and "+2" to "Good"—although the numeric values are not assigned to each of the five circles (i.e., the five radio buttons) on the survey sheet/page.

() Bad < ===== > Good
     o     o     o     o     o

() −2, −1, 0, +1, +2
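To make the reported figures concrete, the summary statistics given in the tables below (the number of informants who 'accepted' an example, the mean score, and the standard deviation) can be sketched as follows. This is an illustrative reconstruction, not the actual analysis procedure used for the experiments; the function name summarize and the input list are hypothetical, and the convention that a score of "+1" or "+2" counts as 'accepted' follows note 42. Since the text does not state whether population or sample standard deviations are reported, the sketch uses the population SD.

```python
from statistics import mean, pstdev

def summarize(scores):
    """Summarize informant judgments on a single example.

    `scores` is a list of integers on the -2..+2 scale
    (-2 = "Bad", +2 = "Good").  Following note 42, a score
    of +1 or +2 is counted as the example being "accepted".
    """
    accepted = sum(1 for s in scores if s >= 1)
    return {
        "accepted": f"{accepted} out of {len(scores)}",
        "mean": round(mean(scores), 2),
        # Population SD; the text does not specify which SD is reported.
        "sd": round(pstdev(scores), 2),
    }

# Hypothetical responses from five informants:
print(summarize([-2, -1, 0, 1, 2]))
```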

The experiments are still at their preliminary stage in terms of their general 'design'. They have not been designed with the kind of care that would likely have been taken for a standard psycho-linguistics experiment; neither have their results been 'analyzed statistically' in ways they would be for a standard psycho-linguistics experiment.38 In light of the considerations given in the preceding pages, however, their results must nonetheless be significant; the mean scores on *Examples should be sufficiently revealing in regard to the validity of the generalization under discussion if they are close to "−2" and accompanied by mean scores on the corresponding okExamples that are significantly higher than "−2."

38 With the special status given to *Examples in our theoretical characterization of the relevance of the speaker intuitions, the general design of our experiments would likely differ from that of the typical psycho-linguistic experiment currently being adopted. I should also note that although the "−2 to +2" scale is used in the experiments reported below, it might be more effective

The results of the experiment are quite striking. While the English examples in () and () are indeed judged unacceptable fairly uniformly, the judgments on the Japanese examples in () and () vary considerably and, furthermore, they are judged acceptable by many informants. The informant judgments on these examples are summarized in ().39, 40, 41

()
      Number of informants who accepted it42    Mean Score    Standard Deviation
(a)   18 out of 34    +0.41    1.46
(b)   17 out of 33    +0.18    1.49
()    27 out of 35    +0.86    1.44
(a)    0 out of 14    −1.87    0.34
(b)    1 out of 14    −1.43    0.98
()     1 out of 14    −1.21    1.01

The robust native informant judgments on () and () in English seem to suggest that the CS has a property that has the consequence in (a), repeated here.

(a) An element E marked (in the Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

Given the assumption we adopt that the properties of the CS, including (a), are universal, the informant judgments on () and () in Japanese lead us to consider one of the two possibilities given in ().

() a. () is to be abandoned.
b. () applies only to some speakers of Japanese.

For convenience, () is repeated here.

() Zibunzisin is specified as [+A] in the mental Lexicon of the speaker of Japanese.

One may pursue (b) if some speakers have robust judgments on the *Examples. Four speakers among the approximately 30 informants seemed to fit this category at least on one occasion. The judgments of two of these four speakers, however, did not have across-occasion repeatability; they found the *Examples acceptable on a different occasion (a few months after the first occasion). The judgments of the other two remained "−2" or "−1" on the *Examples on the next occasion. These two speakers, however, detected 'locality effects' even with John-zisin, thus casting doubt over the thesis that the 'locality effects' they are detecting on the *Examples with zibun-zisin are indeed due to the property of zibun-zisin.43 It thus seems at this point that (a) is a much more viable option to pursue than (b).44

38 (continued) and in fact more appropriate, given what has been stated in the preceding pages, to ask the informant a Yes/No question as to whether a given *Example is totally unacceptable under a specified interpretation. Furthermore, a better designed experiment should also reflect the recognized significance of 'preliminary experiments' in which across-example and across-occasion repeatability have been checked 'prior to' testing across-speaker repeatability. We plan to present and discuss in future work a more articulated version of our syntactic experiment.
39 The 'figures' reported below are as of March 10, 2007 or thereabout. Since more informants have participated in the experiments since then, the figures to be reported in a later version of this paper will be somewhat different from what is given here.
40 [Note for myself: See also CFJ [30652]. Maybe I should just mention the difference between non-linguists and linguists, suppressing a more interesting difference between GGES linguists and non-GGES linguists, to make the exposition simpler. Alternatively, I may not mention anything beyond what is already given in the main text.]
41 [Note for myself: It would be useful and interesting to compare the informant judgments reported here with what has been reported in the past literature on a similar paradigm in English.]
42 The score of "+1" or "+2" is taken in the context of this particular exposition as "the example being judged to be acceptable."

Otagai

The assumption, i.e., hypothesis, that otagai is a local anaphor can be stated as ().

() Otagai is specified as [+A] in the mental Lexicon of the speaker of Japanese.

Like (), () is a language-specific hypothesis. With (), combined with (a), repeated again, we have the consequence in ().

(a) An element E marked (in the Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

() Otagai is a legitimate LF object, or can receive its interpretation, only if there is another element E that appears at LF where otagai and E occupy co-argument positions of a single predicate such that the 'reference/value' of otagai can be determined on the basis of that of E.

Experimental design

We obtain the *Schemas in () and the okSchema in ().

() a. *Schema1
Otagai-ga V

b. *Schema2
NP1-ga otagai-o/ni V
NP1=/=otagai

() okSchema1-1 (also okSchema2-1)
NP1-ga otagai-o/ni V
NP1=otagai

In accordance with (), we predict the *Examples conforming to () to be as hopeless as *Examples in English such as ().

() a. *Now that each other is here, we can find out what really happened.
b. *The warm spring breeze made each other feel good.

Japanese examples such as () are, however, judged acceptable by almost every one of the informants who took part in the experiment.

() a. Haru-no atatakana kaze-ga otagai-o totemo siawase-na kimoti-ni sita.
Spring-GEN warm wind-NOM otagai-ACC very happy feeling-DAT made

'The warm spring wind made otagai (=them) feel very happy.'

b. Otagai-ga manzoku nara, boku-wa monku-o iwanai tumorida.
otagai-NOM satisfied if I-TOP complaint-ACC say:not plan copula

'If otagai (=both of them) are satisfied, I will not raise issues.'

43 One may pursue the possibility that -zisin is responsible for making John-zisin (and for that matter any proper noun + zisin) a local anaphor. Although we have not conducted follow-up experiments on the speakers in question in regard to such a hypothesis—and we can do so in the future—we doubt that those speakers treat John-zisin and other forms like it as local anaphors. I.e., we would be quite surprised if they found examples like John-zisin-ga kita 'John himself came' to be unacceptable. Such examples should be unacceptable if John-zisin is a local anaphor.
44 It seems clear that in order for (b) to be a viable option, a great deal would have to be demonstrated. First, it must be shown that across-example and across-occasion repeatability obtain for the group of speakers in question for whom () holds. Furthermore, we would have to ask what evidence was available to those speakers whose mental Lexicon includes () but not to the other type of speakers. We should also ask what consequences we would expect and what testable predictions we would be able to make, beyond the locality issue concerning zibunzisin, on the basis of the alleged difference between the two distinct groups of speakers. The prospect of getting a satisfactory answer seems (to me) to be rather bleak, although one should certainly try to be open-minded.

One might object that the 'reciprocal reading' is characteristically missing in examples like () and propose to modify () as in ().

() Otagai with a 'reciprocal interpretation' is specified as [+A] in the mental Lexicon of the speaker of Japanese.

Such a modification would be accompanied by the concomitant proposal that there are at least two distinct lexical items in the mental Lexicon of Japanese speakers, both of which are phonetically realized as otagai and only one of which is specified as [+A]. Under (), the acceptability of () would no longer be problematic, since the schemas in () do not necessarily qualify as *Schemas: not every instance of otagai is the 'reciprocal' (hence local anaphor) otagai.

When we consider the *Schemas in (), however, we will see that the modified hypothesis still is not backed up by a repeatable phenomenon.

() a. *Schema3
[Otagai-no N]-ga NP-o/ni V
NP=otagai, with the reciprocal interpretation.

b. *Schema4
[Otagai-no N]-ga [NP1-ga NP2-o/ni V-ta to] V-ta
NP2=otagai, with the reciprocal interpretation.

Conforming to (a) and (b) are *Examples in (a) and (b), respectively.

() a. (=Saito 2003: (8b), which is marked as "*?"; I have added -ga mondai nandesu here.)
Otagai-no sensei-ga karera-o hihansita koto-ga mondai nandesu.
otagai-GEN teacher-NOM they-ACC criticized fact-NOM problem copula

'The problem is the fact that [each other's teachers] criticized them.'

b. (=Saito 2003: (11a), which is marked as "*".)
Otagai-no sensei-ga [Tanaka-ga karera-o hihansita to] itta (koto)
otagai-GEN teacher-NOM Tanaka-NOM they-ACC criticized that said fact

'[Each other's teachers] said that Tanaka criticized them.'

These examples are judged fairly to perfectly acceptable under the 'reciprocal interpretation' by about half of the informants who participated in the experiment. Minor adjustments, as in (), seem to improve the examples and make them even more readily acceptable under the 'reciprocal interpretation'.

() a. Otagai-no koibito-ga John to Bill-ni iiyotta koto-ga konkai-no ziken-no kikkake desu.
otagai-GEN lover-NOM John and Bill-DAT tried:to:seduce fact-NOM this:time-GEN affair-GEN trigger copula

'The trigger of the affair this time was the fact that [each other's lovers] tried to seduce John and Bill.'

b. Otagai-no sensei-ga [Chomsky-ga karera-o hometeiru to] omoikondeita ndesu.
otagai-GEN teacher-NOM Chomsky-NOM they-ACC is:praising that believed copula

'[Each other's teachers] believed that Chomsky was praising them.'

One might object that otagai in () and () does not occur in a typical argument position. Given that 'exempt anaphors' in the terms of Pollard and Sag 1992 have been known to occur in such positions, one may thus further modify (a), making it applicable only to the [+A]-marked element in an argument position, as in ().45

45 [Provide a few 'exempt anaphor' examples from Pollard and Sag 1992.]


() (A modified version of (a); the italicized portion has been added.)
An element E marked (in the Lexicon) as [+A], if it occurs in an argument position, is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

With such a modified version of (a), as given in (), the speaker judgments on () and () cease to be a problem for () since the schemas in () are no longer *Schemas because otagai in () is not in a typical argument position. Such a modification, however, cannot quite save (). Consider the *Schema in ().

() *Schema5
NP1-ga [NP2-ga otagai-o/ni V-ta to] V-ta
NP2=/=otagai

Most anyone, if not everyone, who distinguishes between argument positions and non-argument positions would agree that in () otagai appears in an argument position. Conforming to the *Schema in () is the *Example in (a), and its slightly altered form is given in (b).

() (Cf. (); =Hoji 1997/2006: (7).)
a. [John to Bill]1-wa [CP Mary-ga otagai-ni horeteiru to] omoikonde-i-ta
[John and Bill]-TOP [Mary-NOM otagai-DAT is:in:love that] believed

'[each of John and Bill] believed that Mary was in love with the other.'
'[each of John and Bill]1 believed that Mary was in love with him1.'

b. [John to Bill]1-wa [Chomsky-ga naze otagai-o suisensita no ka]
[John and Bill]-TOP [Chomsky-NOM why otagai-ACC recommended Q]

wakaranakatta
did not understand

'[each of John and Bill] did not understand why Chomsky had recommended the other.'
'[each of John and Bill]1 had no idea why Chomsky had recommended him1.'
'[John and Bill]1 had no idea why Chomsky had recommended them1.'

The *Examples in () are predicted to be unacceptable at least under the 'reciprocal interpretation' for otagai; i.e., they should be as unacceptable as the *Examples in () in English, with the reciprocal interpretation for otagai.

() *Examples
a. John and Mary think Bill loves each other.
b. John and Mary thought Bill had recommended each other for that position.
c. John and Mary had no idea why Chomsky had recommended each other.

Yet, the examples in () are judged to be acceptable by the majority of the informants, as we will see directly.

Results

The informant judgments on the examples in (), (), () and () are summarized in ().

()
      Number of informants46 who accepted it47    Mean Score    Standard Deviation
(a)   15 out of 18    +1.22    1.13
(b)   15 out of 18    +1.06    1.35
(a)   12 out of 18    +0.89    1.15
(b)    7 out of 18    −0.22    1.47
(a)   16 out of 19    +1.42    0.88
(b)    9 out of 18    +0.17    1.54
(a)   16 out of 18    +1.56    1.12
(b)   18 out of 18    +1.89    0.31

46 [For my own use: No non-linguist informants are included in this CFJ as of 3/11/2007. I will try to conduct an experiment that would include non-linguists as well. I do not expect a significant difference between linguists and non-linguists, in light of the informal surveys I have done in the past, prior to the 'introduction' of CFJs.]
47 As before, the score of "+1" or "+2" is taken in the context of this particular exposition as the example being judged to be acceptable.

The results shown in () clearly disconfirm the prediction made by the combination of the universal statement in (a) (or its modified version in ()) and the language-specific statement in () (even as modified in ()). (a), (), (), and () are repeated here.

(a) An element E marked (in the Lexicon) as [+A] is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

() (A modified version of (a); the italicized portion has been added.)An element E marked (in the Lexicon) as [+A], if it occurs in an argument position, is a legitimate LF object, or can receive its interpretation, only if there is another element E' that appears at LF where E and E' occupy co-argument positions of a single predicate such that the 'reference/value' of E can be determined on the basis of that of E'.

() Otagai is specified as [+A] in the mental Lexicon of the speaker of Japanese.

() Otagai with a 'reciprocal interpretation' is specified as [+A] in the mental Lexicon of the speaker of Japanese.

Since there is strong evidence in support of the validity of some version of (a) (see the last three rows of (), for example), what is wrong here must be () (or ()).48 We are thus led to reject () and its modified version in ().

Repeatable Phenomena in Japanese

Given the results reported in sections , one might suggest that it is not appropriate to expect the same degree of robustness of speaker judgments in Japanese as in English, observing that speaker judgments in Japanese have been known to fluctuate much more widely than in languages like English.49 The results to be reported below, however, show the futility of such an attempt to save () (the hypothesis that zibunzisin is specified as [+A]) and () (the hypothesis that otagai is specified as [+A]).50

The *Schema with an a-NP as the intended dependent term in bound variable anaphora, to be discussed below in , gets robust judgments, with a mean score of −1.83 or lower (by 18 speakers). Similarly, the *Schema on the 'WCO example', to be discussed in , gets a mean score of −1.71 (by 28 speakers) if we use appropriate binder-bindee pairs. Such mean scores are fairly close to those on the examples of the *Schema in the anaphor binding paradigms in English (see ()), suggesting that it is in fact possible to attain a high degree of repeatability in Japanese as well.

48 As discussed in some depth in Hoji 1997/2006, when some locality-like property is detected in certain examples with otagai, similar effects are also detected in regard to what appears to be the relationship between the empty possessor of a kinship term and 'its antecedent'; furthermore, such effects can be made to disappear to a large extent, if not totally, by pragmatic adjustment (which can be achieved by the choice of lexical items) without altering the structural properties.
49 To the extent that the observation is valid, one should ask why there is such a difference between Japanese and other 'more-well-behaving' languages like English. In light of what is discussed and proposed in this chapter, and to be further discussed in the subsequent chapters, one possible interpretation of the observation is that what fails to qualify as a repeatable phenomenon is discussed as a valid generalization more often in Japanese than in languages like English. That would raise the further question of why. The answer, I would like to suggest, is ultimately related to the presence and the absence of what is referred to in Fukui 1986 as 'active functional categories' (or what we might call 'formal agreement features'). If we focus on the phenomena that appear to be of the same formal nature—though not involving 'active functional categories', by hypothesis—we do seem to obtain a high degree of within-speaker as well as across-speaker repeatability in Japanese as well, provided that we have taken sufficient care and have paid close enough attention in designing an experiment so as to control the complications that could arise in Japanese but not in languages like English, in regard to (i) the presence of the so-called zero pronouns, (ii) the presence of the major subject construction, etc., as will be discussed in some depth in this book.
50 One might question the desirability of keeping (a) or its variant in the CS if the mental Lexicon of the speakers of some language(s) does not contain the relevant feature that (a) refers to, as pointed out by Hiroki Narita (p.c. March 2007). The issue is related to exactly how (a) is to be derived in the theory, and I am not going to pursue it in this work.

So-NPs vs. A-NPs

Background: the demonstratives in Japanese51

Japanese has three non-interrogative demonstrative prefixes ko-, so-, a-, as exemplified in ().52

() a. ko-no hito 'this person'
b. so-no hito 'that person'
c. a-no hito 'that person'

Let us refer to NPs such as those in () as ko/so/a-NPs. Ko/so/a-NPs can be used either in the context of (a) or in that of (b), much as in the case of this NP and that NP in English.

() a. where the object being referred to is visible in the speech location53

b. where the object being referred to is not visible in the speech location

Let us call their uses in the contexts of (a) and (b) their deictic use and non-deictic use, respectively.54 Ko/so/a-NPs are most often characterized in regard to their deictic uses. () is one of the standard descriptions, which is based on Matsushita 1978: 233-235, originally published in 1930; cf. Hoji et al. 2003: note 4.

() The standard characterization of the deictic uses of ko/so/a-NPs:
a. A ko-NP is used for referring to something near the speaker.
b. A so-NP is used for referring to something closer to the hearer.
c. An a-NP is used for referring to something at a distance from both the speaker and the hearer.

One influential characterization of the non-deictic uses of so/a-NPs is ().

() Kuno's (1973: 290) characterization of the non-deictic uses of so/a-NPs (slightly adapted):

51 The materials in this and the subsequent subsections are based on Hoji et al. 2003, to which the readers are referred for more details and references.
52 A more exhaustive paradigm of Japanese demonstratives is given in (i).

(i) Ko-: ko-re 'this thing', ko-tira 'this way', ko-tti 'this way', ko-ko 'this place', ko-itu 'this guy'
    So-: so-re, so-tira, so-tti, so-ko, so-itu
    A-:  a-re, a-tira, a-tti, a-soko, a-itu

53 Replacing "visible" with "perceptible" would broaden the empirical coverage of (a) to cases involving noise, smell, and so on, as discussed in Kinsui 2000, for example, and would make the distinction between the two uses of demonstratives descriptively more adequate.
54 The discussion of the non-deictic uses of a ko/so/a-NP has often focused on examples in which the NP in question is understood to be related to another NP, and for this reason the term anaphoric use has sometimes been employed in the literature instead of non-deictic use; cf. Kuno 1973: ch. 24, for example. Throughout the paper, we will use non-deictic rather than anaphoric.


a. A so-NP is used for referring to something that is not known personally to either the speaker or the hearer or has not been a shared experience between them.

b. An a-NP is used for referring to something (at a distance either in time or space) that the speaker knows both s/he and the hearer know personally or have experience in.55

So-NPs vs. a-NPs
Independently of () and (), the generalization in () has been noted in works such as Hoji 1991, among others.

() A so-NP can be 'bound' by a quantificational NP, while an a-NP cannot.

Ueyama (1998) advances a theory of anaphoric relations and NP types, in which so-NPs and a-NPs are formally distinguished, providing a means to express the generalization in () in theoretical terms. In order to make the ensuing discussion concrete, let us adopt Ueyama's (1998) theory of anaphoric relations, while noting that what is crucial here is its empirical consequences, rather than its particular technical execution, as they relate to a repeatable phenomenon, to be discussed below. Ueyama's theory assumes the following three types of individual-denoting NPs.

() a. D-indexed NPs b. Non-indexed NPs c. I-indexed NPs

The distinction crucial in the present discussion is between D-indexed NPs on the one hand and non-indexed and I-indexed NPs on the other. A D-indexed NP is inherently referential and hence does not require a linguistic antecedent, while non-indexed NPs and I-indexed NPs require a linguistic antecedent. Let us record the distinction in ().

() a. D-indexed NPs do not require a linguistic antecedent.
b. Non-indexed and I-indexed NPs require a linguistic antecedent.

Ueyama 1998 argues extensively for the validity of (a) and (b), excluding from discussion the deictic cases (i.e., the cases in which the target object is visible at the scene of utterance) and the cases in which the a/so-NP is not used to refer to an individual.

() a. A-NPs are D-indexed.
b. So-NPs are either I-indexed or non-indexed.

According to Ueyama 1998, D-indexed NPs are the NPs which are to be understood in connection with an individual known to the speaker by direct experience, and the relevant connection is established independently of other NPs. Two NPs are said to stand in the relation of co-D-indexation if they carry the same D-index, and co-D-indexation is one of the bases for so-called 'coreference'.

As illustrated in () and (), an a-NP need not have a linguistic antecedent but its referent should be known to the speaker by direct experience.

() (Situation: The detective is looking for a man. He somehow believes that the man should be hiding in a certain room. He breaks into the room and asks the people there.)
[A-itu/#So-itu]-wa do-ko-da?
that-guy-TOP which-place-COPULA

'Where is [he]?' (based on Ueyama 1998: section 4.2 (10)&(20))

() (Situation: A wife told her husband on the phone that someone had called him. He has no idea who the person is. He asks her.)
[So-itu/#A-itu]-wa nante itteta?
that-guy-TOP what said

'What did [he] say?' (based on Ueyama 1998: section 4.2 (16)&(23))

55 The descriptive statements in () are not totally unlike Matsushita's (1930/1978: 234); see Hoji et al. 2003: note 5.


As illustrated in () above, a so-NP, on the other hand, cannot independently refer to an individual (when the object is not visible at the scene) even if the object is known to the speaker by direct experience. As illustrated in () above, if there is a linguistic antecedent, however, a so-NP can refer to an individual that the speaker does not know at all. The insight in Kuroda 1979, Takubo 1984, and Takubo & Kinsui 1996, 1997 concerning the fundamental property of a-NPs and so-NPs can thus be expressed in terms of Ueyama 1998 as in ().

() a. A-NPs must be D-indexed.
b. So-NPs cannot be D-indexed.

Turning to (), repeated here, let us consider the examples in () as its illustration.

() A so-NP can be 'bound' by a quantificational NP, while an a-NP cannot.

() a. Toyota-sae-ga [{so-ko/*a-soko}-no ko-gaisya]-o suisensita.
Toyota-even-NOM that-place-GEN child-company-ACC recommended
'Even Toyota recommended [its subsidiary].'

b. Do-no zidoosya-gaisya-ga [{so-ko/*a-soko}-no ko-gaisya]-o suisensita no?
which-GEN automobile-company-NOM that-place-GEN child-company-ACC recommended COMP
'Which automobile company recommended [its subsidiary]?'

c. Do-no zidoosya-gaisya-ga [{so-no/*a-no} zidoosya-gaisya-no ko-gaisya]-o suisensita no?
which-GEN automobile-company-NOM that-GEN automobile-company-GEN child-company-ACC recommended COMP
'Which automobile company recommended [that automobile company's subsidiary]?'

d. (based on Ueyama 1998: ch. 5 (80))
[Hon-o hiraita hito]-wa minna {so-re/*a-re}-o kaw-anakerebanaranai.
book-ACC opened person-TOP all that-thing-ACC buy-must
'[Everyone who has opened a book] must buy it.'

The relevant observations are summarized in ().

() a. A-NPs cannot give rise to a covariant interpretation.
b. So-NPs can give rise to a covariant interpretation.

Given that a D-indexed NP is to be understood 'as referring to' an individual known to the speaker by direct experience, (a) follows directly. (b) is also expected if we assume that a necessary condition for an NP to give rise to a covariant interpretation is the absence of a D-index.

To summarize, according to Ueyama 1998, a D-indexed NP is strictly 'referential' and has to be understood in connection with a specific individual known to the speaker; hence it cannot give rise to a covariant interpretation, and (), repeated above, is derived from (), repeated here.56

() a. A-NPs must be D-indexed.
b. So-NPs cannot be D-indexed.

Experimental design
We have the following *Schemas and okSchemas, where cm stands for a case marker such as -ni or -o.

() a. *Schema1

QP-ga [ … a-re … V-ta N]-cm V
BVA(QP, a-re)

56 See Hoji et al. 2003 for discussion regarding the deictic use of so-NPs.

Chapter 2: document.doc18/33

Page 19: 6 The Formal Dependency system and the … · Web viewChapter Two Repeatable Phenomena Introdction The primary object of inquiry in generative grammar is the Computational System

b. *Schema2

QP-ni [ … a-re … V-ta N]-ga V
BVA(QP, a-re)

() a. okSchema1-1

QP-ga [ … so-re … V-ta N]-cm V
BVA(QP, so-re)

b. okSchema1-2

QP-ni [ … so-re … V-ta N]-ga V
BVA(QP, so-re)

Six pairs of examples corresponding to (b) and (b) were given in an on-line experiment (see below), and the informants were asked to indicate for each example whether they could interpret it as "on each tree under discussion, the name of the student who planted it is engraved." The instructions contained the statement that each example should be understood as a continuation of a remark like ().

() Ko-no kookoo-wa so-ko-ga syoyuusiteiru koodaina syokubutuen-de yuumeida.
this-GEN highschool-TOP that-place-NOM possess enormous arboretum-for famous
So-ko-ni wa sotugyoosei-ga nokositeitta musuuno hana ya ki-ga uwatteiru.
that-place-in-TOP graduates-NOM left:behind numerous flower and tree-NOM grow
'This high school is famous for its enormous arboretum. It has numerous flowers and trees that its graduates have planted.'

Provided in () and () are two *Examples and two okExamples corresponding to the *Schema1-2 in (b) and the okSchema1-2 in (b), respectively.

() *Examples1-2-n

a. Do-no ki-ni-mo a-re-o ueta sotugyoosei-no namae-ga kizamareteiru.
which-GEN tree-on-MO that-thing-ACC planted graduate-GEN name-NOM be:engraved

'On every tree is engraved the name of the graduate who has planted it.'

b. 90% izyoo-no ki-ni a-re-o ueta sotugyoosei-no namae-ga kizamareteiru rasii.
90% or:more-GEN tree-on that-thing-ACC planted graduate-GEN name-NOM be:engraved seem

'It seems that on each of 90% or more trees is engraved the name of the graduate who has planted it.'

() okExamples1-2-n

a. Do-no ki-ni-mo so-re-o ueta sotugyoosei-no namae-ga kizamareteiru.
which-GEN tree-on-MO that-thing-ACC planted graduate-GEN name-NOM be:engraved

'On every tree is engraved the name of the graduate who has planted it.'

b. 90% izyoo-no ki-ni so-re-o ueta sotugyoosei-no namae-ga kizamareteiru rasii.
90% or:more-GEN tree-on that-thing-ACC planted graduate-GEN name-NOM be:engraved seem

'It seems that on each of 90% or more trees is engraved the name of the graduate who has planted it.'

Results
The results of the experiment are provided in ().

()
      Number of informants who accepted it57   Mean Score   Standard Deviation
(a)   0 out of 20                              −1.80        0.40
(b)   0 out of 20                              −2.00        0.00
(a)   20 out of 20                             +2.00        0.00
(b)   20 out of 20                             +1.95        0.22

57 As before, a score of "+1" or "+2" is taken in the context of this particular exposition as the example being judged to be acceptable.

The speaker judgments are extremely robust, indicating that what we have just observed qualifies as a repeatable phenomenon.58
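Given the scoring convention in footnote 57 (each informant scores an example from −2 to +2, with "+1" or "+2" counted as acceptance), the summary statistics above are easy to reproduce. The sketch below uses a hypothetical score vector that is merely consistent with the last row reported (20 out of 20, mean +1.95, SD 0.22); the actual raw scores are not given in the text, so this only illustrates the arithmetic behind the table.

```python
from statistics import mean, pstdev

# Hypothetical score vector consistent with the reported row
# "20 out of 20 accepted, mean +1.95, SD 0.22": nineteen "+2" scores
# and one "+1". (The raw data are not reported in the text.)
scores = [2] * 19 + [1]

# Footnote 57: a score of "+1" or "+2" counts as acceptance.
accepted = sum(1 for s in scores if s >= 1)

print(accepted, "out of", len(scores))  # 20 out of 20
print(round(mean(scores), 2))           # 1.95
print(round(pstdev(scores), 2))         # 0.22 (the sample SD also rounds to 0.22)
```

Whether the reported standard deviation is population-based (`pstdev`) or sample-based (`stdev`) is not stated; for this vector both round to 0.22.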

Weak crossover, reconstruction, and the OS construction in Japanese
The Initial Observation

The availability of a co-variant interpretation is constrained not only by the property of the item that is intended to be the 'dependent term' but also by some structural condition(s) between the 'dependent term' and its 'antecedent'. This is illustrated in (), to be contrasted with ().

() a. *[So-ko-no oya-gaisya]-ga A-sya-ni-sae monku-o itta
that-place-GEN parent-company-NOM A-company-DAT-even complaint-ACC said
'[its parent company] complained to even Company A'

b. *?[So-ko-no oya-gaisya]-ga do-no zidoosya-gaisya-ni monku-o itta no?
that-place-GEN parent-company-NOM which-GEN automobile-company-DAT complaint-ACC said COMP
'To which automobile company did [its parent company] complain?'

() a. A-sya-sae-ga [so-ko-no oya-gaisya]-ni monku-o itta
A-company-even-NOM that-place-GEN parent-company-DAT complaint-ACC said
'even Company A complained to [its parent company].'

b. Do-no zidoosya-gaisya-ga [so-ko-no oya-gaisya]-ni monku-o itta no?
which-GEN automobile-company-NOM that-place-GEN parent-company-DAT complaint-ACC said COMP
'Which automobile company complained to [its parent company]?'

In both () and (), the 'dependent term' is a so-NP; yet the intended covariant interpretation is hardly available in (), in contrast to ().

One might suggest that the contrast between () and () is analogous to that in () in English.

() a. every teacher looked at his student
b. *?his student looked at every teacher

Let BVA(A, B) express an intuition that (i) B does not have an inherent referent of its own, and (ii) its value covaries with that of A; e.g., BVA(every teacher, his) as it is intended in (a) expresses the intuition that has often been stated as in (a).

() a. For all x, x a teacher, x looked at x's student
b. For all x, x a teacher, x's student looked at x

It is commonly understood that (b) cannot be interpreted as (b), while (a) can be interpreted as (a). The failure to obtain the bound variable construal for his in examples like (b) is often referred to as a weak crossover effect.

The status of examples like (b), in contrast to that of (a), has been attributed in the past literature to conditions such as (a), (b), or (c), as restated in terms of the general assumptions adopted here about the properties of the CS.

() A necessary condition for BVA(A, B), as proposed in the past:
a. The precedence condition59: A must precede B.
b. The c-command condition at surface structure (Reinhart 1976): A must c-command B at the point of Spell-out.
c. The c-command condition at LF (Hoji 2003; earlier references on this will be added): The trace of A must c-command B at LF.

58 The so-called unaccusative predicate is used in the experiment reported here (see (b) and (b)); but examples of the forms in (b) and (b), such as (a, b, c), should give the same results. [We plan to conduct an experiment to confirm that.]
59 The proposal made in Chomsky 1976, later named the leftness condition in Higginbotham 1980: 687, is as in (i).
(i) (Chomsky 1976: (105))
A variable cannot be the antecedent of a pronoun to its left.

The contrast between () and () (and that between (a) and (b)) can be accounted for either by a precedence-based condition such as (a) or by a c-command-based condition such as (b) and (c).60

Two types of dependency in Ueyama 1998
Ueyama (1998) argues that there are two distinct structural bases for BVA(A, B); one makes reference to PF precedence and the other to LF c-command. According to Ueyama 1998 and subsequent related works, the availability of BVA with 'binder-bindee' pairs such as those given in () is subject to the LF c-command condition.

() a. NP-sae ('even NP'), so-ko ('that place/it')
b. NP-dake ('only NP'), so-ko ('that place/it')
c. 10 izyoo-no NP ('ten or more NPs'), so-ko ('that place/it')

The necessary condition for BVA(A, B) in such cases can be stated as in (), somewhat informally.61

() BVA(A, B) is possible only if the trace of A c-commands B at LF.
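The structural condition just stated can be made concrete with a small computational sketch. Below, a toy constituency tree is encoded as (label, children) tuples, and c-command is computed in the standard way: A c-commands B iff neither dominates the other and the first branching node properly dominating A also dominates B. The tree shapes and node labels are our own schematic renderings of the configurations at issue (the QP's trace in subject position vs. so-ko buried inside the subject), not structures given in the text.

```python
# Toy trees: (label, children); leaves have empty children and unique labels.

def path_to(node, label):
    """Nodes from this subtree's root down to the node named `label`, or None."""
    name, children = node
    if name == label:
        return [node]
    for child in children:
        sub = path_to(child, label)
        if sub is not None:
            return [node] + sub
    return None

def c_commands(root, a, b):
    """a c-commands b iff neither dominates the other and the first
    branching node properly dominating a also dominates b."""
    path = path_to(root, a)
    if path_to(path[-1], b) is not None:
        return False                      # a dominates b
    if path_to(path_to(root, b)[-1], a) is not None:
        return False                      # b dominates a
    for ancestor in reversed(path[:-1]):
        if len(ancestor[1]) > 1:          # first branching node above a
            return path_to(ancestor, b) is not None
    return False

# Schematic LF for "QP-ga [ ... so-ko ... ]-o V":
# the QP's (QR-)trace c-commands so-ko, satisfying the condition.
ok_lf = ("TP", (("t_QP", ()),
                ("VP", (("NP_o", (("soko", ()), ("N", ()))),
                        ("V", ())))))

# Schematic LF for "[ ... so-ko ... ]-ga QP-o V":
# the trace, inside VP, fails to c-command so-ko inside the subject.
star_lf = ("TP", (("NP_ga", (("soko", ()), ("N", ()))),
                  ("VP", (("t_QP", ()), ("V", ())))))

print(c_commands(ok_lf, "t_QP", "soko"))    # True
print(c_commands(star_lf, "t_QP", "soko"))  # False
```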

Among the relevant assumptions are those given in ().

() a. NPs such as NP-sae 'even NP', NP-dake 'only NP', and 10 izyoo-no NP 'ten or more NPs' cannot be used to refer to a specific entity/individual whose cardinality is one.

b. So-ko is singular-denoting, as observed in Hoji 1995:NELS, 1997:JK, and as illustrated in ().

() a. Tom1-ga Nick2-ni [CP CIA-ga karera1+2-o sirabeteiru to] tugeta.
Tom-NOM Nick-DAT CIA-NOM them-ACC is:investigating COMP told
'Tom1 told Nick2 [that the CIA was investigating them1+2].'

b. *Toyota1-ga Nissan2-ni [CP CIA-ga soko1+2-o sirabeteiru to] tugeta.
Toyota-NOM Nissan-DAT CIA-NOM it-ACC is:investigating COMP told
'Toyota1 told Nissan2 [that the CIA was investigating it1+2].'

In accordance with the results reported in Hoji 2003, let us assume (); cf. also Hoji 1985.62

60 I suppress the issues concerning wh-questions with overt wh-movement and how we could generalize each of () to cover them. It should also be noted that there must be an independent condition, given in (i) below, which seems to me to be independent of the properties of the CS proper and might well be part of the faculty of logic, although some works in the past maintain that the crucial necessary condition for BVA(A, B) is (i) and that what is intended by () is an additional, but presumably less crucial, requirement for the availability of BVA(A, B).
(i) BVA(A, B) is possible only if at LF B is in the scope of A.
61 The conditions are given in Ueyama 1998: 155, section 3.3, as follows.
(i) (Ueyama 1998: 155, (64))
A dependent term β can enter into BVA only if either FD(α, β) or ID(α, β) is established. (fn. 30)
(fn. 30: (64) is an informal statement whose content will be explicated in the subsequent chapters. Meanwhile, 'α' in FD(α, β) refers to the QR-trace of the QP, while 'α' in ID(α, β) refers to the QP itself.)
(ii) (Ueyama 1998: 155, (65))
a. Structural condition on FD: *FD(α, β) if α does not c-command β at LF.
b. Lexical condition on FD: *FD(α, β) if β is a largeNP.
(iii) (Ueyama 1998: 155, (66))
a. Structural condition on ID: *ID(α, β) if α does not precede β at PF.
b. Lexical condition on ID: *ID(α, β) if α is an A-type QP.


() There is a strong tendency for a phonetic string of the schematic form in (a) to get 'parsed' in such a way as to 'give rise to' a numeration that would necessarily result in the LF representation schematized in (b), leaving aside the possible application of LF adjunction.

() a. A-ga B-o V-{ta/ru}
b. [TP [VP A-ga [V' B-o V]]-{ru/ta}]

Under this assumption, the ga-marked phrase asymmetrically c-commands the o-marked phrase in the LF representation corresponding to (a), suppressing the possible application of movement at LF. The assumption is shared by most of the practitioners in the field, as far as the asymmetrical structural relation is concerned between the ga-marked NP and the o-marked NP in (a).

*Schemas and an okSchema
Given the assumption adopted here that the availability of BVA with 'binder-bindee' pairs such as those in () is subject to the condition in () (we repeat () and () for convenience), we are now in a position to state the *Schemas in ().

() a. NP-sae ('even NP'), so-ko ('that place/it')
b. NP-dake ('only NP'), so-ko ('that place/it')
c. 10 izyoo-no NP ('ten or more NPs'), so-ko ('that place/it')

() BVA(A, B) is possible only if the trace of A c-commands B at LF.

() a. *Schema1

[ … so-ko … ]-ga QP-o V
BVA(QP, so-ko)

b. *Schema2

[ … QP-o … V-ta N]-ga [ … so-ko … ]-o V
BVA(QP, so-ko)

Since the QP fails to c-command so-ko in (), the trace of the QP would not c-command so-ko. Hence BVA(QP, so-ko) is predicted to be impossible in examples that conform to (a) or (b).

Given in () is a corresponding okSchema, where QP c-commands so-ko.

() okSchema1-1 / okSchema2-1

QP-ga [ … so-ko … ]-o V
BVA(QP, so-ko)

Since the QP c-commands so-ko in (), the trace of the QP would c-command so-ko. Hence the LF representation of examples conforming to the okSchemas in () would satisfy the condition in ().

The OS constructions in Japanese
We would now like to turn to the so-called OSV order in Japanese, often referred to as a 'scrambling construction'. It has been well known, and discussed extensively since the mid-1980s, that the O(bject) in the OSV order (let us call it the OS construction, following Ueyama 1998) can exhibit so-called A-properties or A'-properties. Ueyama (1998) proposes to account for this by hypothesizing that a sentence of the OSV order can correspond to two distinct numerations, derivations, and representations. One type is such that the O is 'base-generated' in a position c-commanding the rest of the structure and is related to its theta-position much as in Chomsky's (1977) analysis of the tough construction in English. The other type is such that the O is 'base-generated' in its theta-position, remains there throughout the derivation to LF, and is adjoined to the sentence-initial position only at PF, resulting in the surface OS order. In Ueyama 1998, the first type is called the Deep OS type and the second the Surface OS type. The crucial properties of the two types of the OS construction, as proposed in Ueyama 1998, are illustrated in () and (), making reference to the surface string in ().

62 One might find the exposition here somewhat convoluted. I will discuss later what other 'parsing' (and numeration and LF representation) may be possible corresponding to a phonetic string of the form in (a).


() susi-o John-ga tabe-ta
sushi-ACC John-NOM eat-past
'(Lit.) sushi, John ate'

() Deep OS type:
a. Numeration: {John, susi, ec, tabe, ta, -ga, -o}
b. PF: [TP susi-o [TP [VP John-ga [ ec tabe]]-ta]]
c. LF: [TP susi-o [TP ec [TP [VP John-ga [ t tabe]]-ta]]]

() Surface OS type:
a. Numeration: {John, susi, tabe, ta, -ga, -o}
b. PF: [TP susi-o [TP [VP John-ga [ __ tabe]]-ta]]
c. LF: [TP [VP John-ga [ susi-o tabe]]-ta]

In (), susi-o is in an A-position, serving at the level of semantic representation as the subject of a λ-predicate, which the larger of the two TPs in (c) will be 'mapped to'. In (), on the other hand, the LF representation for susi-o John-ga tabe-ta is identical to that for its S(ubject) O(bject) counterpart, with the placement of the object NP at the sentence-initial position being due to PF movement.

More okSchemas
Corresponding to the *Schema in (a), we thus have the okSchemas in (), in addition to (). (a) and () are repeated here.

() a. okSchema1-2

QP-o [ … so-ko … ]-ga V
b. okSchema1-3

[ … so-ko … ]-o QP-ga V

() okSchema1-1 / okSchema2-1

QP-ga [ … so-ko … ]-o V
BVA(QP, so-ko)

() a. *Schema1

[ … so-ko … ]-ga QP-o V
BVA(QP, so-ko)

The claim in regard to the okSchemas in () is that, for okExamples conforming to (a) or (b), there can be a corresponding numeration on the basis of which an LF representation would result in which the condition for BVA(A, B) specified in (), repeated here, is satisfied.

() BVA(A, B) is possible only if the trace of A c-commands B at LF.

As suggested in the preceding discussion, whether a phonetic string conforming to (a) or (b) can be regarded (by an informant) as giving rise to the BVA(QP, so-ko) depends in part upon what numeration she 'goes to'. Take a phonetic string that conforms to (a), for example. If the informant 'goes to' a 'Surface OS numeration', so to speak, the resulting LF representation is as schematized in () (not representing the raising of QP at LF), being identical to that for (a). Hereafter, -ta/-ru (past/present) is not represented in the schematic forms.

() a. PF: [QP-o [[ … so-ko … ]-ga V]]
b. LF: [[ … so-ko … ]-ga [QP-o V]]

The QR-trace of the QP would not c-command so-ko in (b), hence the condition in () would not be satisfied, much as in the case of (a). BVA(QP, so-ko) should therefore be unavailable if the informant 'went to' a 'Surface OS numeration', so to speak. BVA(QP, so-ko) would be possible only if the informant 'goes to' a 'Deep OS numeration', so to speak, so as to 'arrive at' the pair of representations as schematized in () corresponding to (a) (again not representing the raising of QP at LF).


() a. PF: [QP-o [[ … so-ko … ]-ga ec V]]
b. LF: [QP-o [ ec [[ … so-ko … ]-ga t V]]]

The QR-trace of the QP in (b) would c-command so-ko.
Likewise, in the case of a phonetic string that conforms to (b), BVA(QP, so-ko) should be possible only if the informant 'goes to' a 'Surface OS numeration', so to speak, as schematized in (), instead of 'going to' a 'Deep OS numeration', so to speak, as schematized in (), with the same proviso as above in regard to (b) and (b).

() a. PF: [[ … so-ko … ]-o [QP-ga __ V]]
b. LF: [QP-ga [ … so-ko … ]-o V]

() a. PF: [[ … so-ko … ]-o [QP-ga [ ec V]]]
b. LF: [[ … so-ko … ]-o [ ec [QP-ga [ t V]]]]

According to the view adopted here, much of judgmental fluctuation and instability in regard to the availability of BVA in the OS construction can therefore be attributed to "parsing" as it contributes to the 'selection' of a numeration.63
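The role that the 'selection' of a numeration plays here can be summarized programmatically. The sketch below hard-codes, for each surface schema, the parses made available in the preceding discussion and whether the QP's (QR-)trace c-commands so-ko at LF under each parse (the entries restate the text; the labels are ours). BVA(QP, so-ko) is predicted to be possible iff some available parse satisfies the LF c-command condition, and judgmental fluctuation is expected exactly where the competing parses disagree.

```python
# For each surface schema: the available parses and whether the QP's
# (QR-)trace c-commands so-ko at LF under that parse (restating the text).
parses = {
    "okSchema1-1": {"SOV": True},                            # QP-ga [ ... so-ko ... ]-o V
    "*Schema1":    {"SOV": False},                           # [ ... so-ko ... ]-ga QP-o V
    "okSchema1-2": {"Surface OS": False, "Deep OS": True},   # QP-o [ ... so-ko ... ]-ga V
    "okSchema1-3": {"Surface OS": True, "Deep OS": False},   # [ ... so-ko ... ]-o QP-ga V
}

# BVA is predicted possible iff some parse satisfies the condition;
# fluctuation is expected where only one of the competing parses does.
predicted   = {s: any(p.values()) for s, p in parses.items()}
fluctuating = {s: len(set(p.values())) > 1 for s, p in parses.items()}

for schema in parses:
    print(schema, "BVA possible:", predicted[schema],
          "fluctuation expected:", fluctuating[schema])
```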

LF-c-command-based BVA Paradigms
The *Schemas and okSchemas in (), (), and () are repeated here.

() a. *Schema1

[ … so-ko … ]-ga QP-o V
BVA(QP, so-ko)

b. *Schema2

[ … QP-o … V-ta N]-ga [ … so-ko … ]-o V
BVA(QP, so-ko)

() okSchema1-1 / okSchema2-1

QP-ga [ … so-ko … ]-o V
BVA(QP, so-ko)

() a. okSchema1-2

QP-o [ … so-ko … ]-ga V
b. okSchema1-3

[ … so-ko … ]-o QP-ga V

Consider the examples in ().64

() a. okExample1-1-1

Toyota-sae-ga so-ko-no kogaisya-o uttaeta.
Toyota-even-NOM that-place-GEN subsidiary-ACC sued

'Even Toyota sued its subsidiaries.'

b. *Example1-1

So-ko-no kogaisya-ga Toyota-{o-sae/sae-o} uttaeta.
that-place-GEN subsidiary-NOM Toyota-{ACC-even/even-ACC} sued

'Its subsidiaries sued even Toyota.'

c. okExample1-3-1

63 Chapter 3: section 2 provides further discussion regarding the view of the Parser adopted here.
64 [Clearly, a different order and organization of presentation of the examples in () would be desirable for easier reading. I am keeping the present presentation simply because the revision would be somewhat time-consuming. At a later point of the revision process, I will perhaps have to work on the presentation here.]


So-ko-no kogaisya-o Toyota-sae-ga uttaeta.
that-place-GEN subsidiary-ACC Toyota-even-NOM sued

'Its subsidiaries, even Toyota sued.'

d. okExample1-2-1

Toyota-{o-sae/sae-o} so-ko-no kogaisya-ga uttaeta.
Toyota-{ACC-even/even-ACC} that-place-GEN subsidiary-NOM sued

'Even Toyota, its subsidiaries sued.'

e. *Example2-1

[Kyonen Nissan-ga Toyota-{o-sae/sae-o} uttaeta saiban]-ga
last:year Nissan-NOM Toyota-{ACC-even/even-ACC} sued law-suit-NOM

so-ko-o toosan-ni oiyatta.
that-place-ACC bankruptcy-to forced

'The lawsuit(s) in which Nissan sued even Toyota last year forced it to bankruptcy.'

The *Examples in (b) and (e) instantiate the *Schemas in (a) and (b), respectively, and the okExamples in (a), (c), and (d) instantiate the okSchemas in (), (b), and (a), respectively. We thus make the following predictions about the examples in ():

() Predictions:
      Prediction
(a)   N/A
(b)   Unacceptable
(c)   N/A
(d)   N/A
(e)   Unacceptable

We do not predict (a), (c), and (d) to be acceptable, strictly speaking; the claim is that there can be a numeration corresponding to those examples that would result in the LF representations in which the necessary condition for BVA in () is satisfied, and, for the reasons noted above, that does not necessarily mean that every informant will judge these examples to be acceptable with the BVA in question. It is crucial, however, for (a), (c), and (d) to be judged significantly more acceptable than (b) and (e) for the reasons discussed above.

Results
The results of the experiment are summarized in ().

()
      Number of informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)   30 out of 30                           +2.00        0.00                 okSchema1/2 in ()
(b)   3 out of 29                            −1.66        1.03                 *Schema1 in (a)
(c)   25 out of 29                           +1.31        1.23                 okSchema1/2 in (b)
(d)   25 out of 29                           +1.55        1.10                 okSchema1/2 in (a)
(e)   2 out of 28                            −1.54        1.09                 *Schema2 in (b)

The informant judgments among the (nearly 30) informants on (b) and (e) are fairly consistent. By contrast, all of the 30 informants reported that the BVA in question was available in (a). The BVA is much more readily available in the OS constructions (see (c) and (d)) than in the case of (b) and (e) although it is not as clearly available as in the case of (a). This is as expected for the reasons discussed in .

Similar results obtain with the Toyota dake 'only Toyota' and so-ko pair, as indicated below.

() a. Toyota-dake-ga so-ko-no kogaisya-o uttaeta.
Toyota-only-NOM that-place-GEN subsidiary-ACC sued

'Only Toyota sued its subsidiaries.'


b. So-ko-no kogaisya-ga Toyota-o-dake uttaeta.
that-place-GEN subsidiary-NOM Toyota-ACC-only sued

'Its subsidiaries sued only Toyota.'

c. So-ko-no kogaisya-o Toyota-dake-ga uttaeta.
that-place-GEN subsidiary-ACC Toyota-only-NOM sued

'Its subsidiaries, only Toyota sued.'

d. Toyota-o-dake so-ko-no kogaisya-ga uttaeta.
Toyota-ACC-only that-place-GEN subsidiary-NOM sued

'Only Toyota, its subsidiaries sued.'

e. [Kyonen Nissan-ga Toyota-o-dake uttaeta saiban]-ga
last:year Nissan-NOM Toyota-ACC-only sued law-suit-NOM

so-ko-o toosan-ni oiyatta.
that-place-ACC bankruptcy-to forced

'The lawsuit(s) in which Nissan sued only Toyota last year forced it to bankruptcy.'

()
        Number of informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)     26 out of 26                           +1.96        0.19                 okSchema1/2 in ()
(b)65   1 out of 25                            −1.68        0.88                 *Schema1 in (a)
(c)     24 out of 26                           +1.50        1.08                 okSchema1/2 in (b)
(d)     18 out of 24                           +1.12        1.33                 okSchema1/2 in (a)
(e)     5 out of 23                            −1.04        1.65                 *Schema2 in (b)

Essentially the same results also obtain with the pair of 10 izyoo-no zidoosya gaisya '10 or more automobile companies' and so-ko 'that place, it', as illustrated below.

() a. 10-izyoo-no zidoosyagaisya-ga so-ko-no kogaisya-o uttaeta.
10:or:more-GEN auto:company-NOM that-place-GEN subsidiary-ACC sued

'Each of 10 or more automobile companies sued its subsidiaries.'

b. So-ko-no kogaisya-ga 10-izyoo-no zidoosyagaisya-o uttaeta.
that-place-GEN subsidiary-NOM 10:or:more-GEN auto:company-ACC sued

'Its subsidiaries sued each of 10 or more automobile companies.'

c. So-ko-no kogaisya-o 10-izyoo-no zidoosyagaisya-ga uttaeta.
that-place-GEN subsidiary-ACC 10:or:more-GEN auto:company-NOM sued

'Its subsidiaries, each of 10 or more automobile companies sued.'

d. 10-izyoo-no zidoosyagaisya-o so-ko-no kogaisya-ga uttaeta.
10:or:more-GEN auto:company-ACC that-place-GEN subsidiary-NOM sued

'Each of 10 or more automobile companies, its subsidiaries sued.'

e. [Kyonen Nissan-ga 10-izyoo-no zidoosyagaisya-o uttaeta saiban]-ga
last:year Nissan-NOM 10:or:more-GEN auto:company-ACC sued law-suit-NOM

65 The low acceptability of (b) cannot be attributed simply to the NP-o-dake sequence, which some speakers find marginal. Six out of the 24 speakers in fact marked (d), which also contains the NP-o-dake sequence, as "−1" or "−2," and none of them accepted (b), which is as expected if these speakers do not like the NP-o-dake sequence. Eighteen out of the 24 speakers however accepted (d); hence they can accept the NP-o-dake sequence. None of those 18 speakers, however, accepted (b). This suggests that the status of (b) for those 18 speakers cannot be attributed simply to the NP-o-dake sequence.


so-ko-o toosan-ni oiyatta.
that-place-ACC bankruptcy-to forced

'The lawsuit(s) in which Nissan sued each of 10 or more automobile companies last year forced it to bankruptcy.'

()
      Number of informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)   27 out of 27                           +1.93        0.26                 okSchema1/2 in ()
(b)   2 out of 27                            −1.59        0.95                 *Schema1 in (a)
(c)   20 out of 27                           +1.30        1.21                 okSchema1/2 in (b)
(d)   22 out of 27                           +1.26        1.40                 okSchema1/2 in (a)
(e)   6 out of 27                            −1.00        1.49                 *Schema2 in (b)
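The same ok/* split shows up across all three quantifier types. A quick tabulation of the reported acceptance counts (row letters as in the paradigms above; the dictionary layout and pair labels are ours) confirms that even the lowest okSchema acceptance rate exceeds the highest *Schema rate for each pair:

```python
# Acceptance counts (accepted, total) as reported for the NP-sae, NP-dake,
# and 10-izyoo-no NP paradigms; rows (a), (c), (d) are okSchema examples,
# rows (b), (e) are *Schema examples.
results = {
    "NP-sae":         {"a": (30, 30), "b": (3, 29), "c": (25, 29), "d": (25, 29), "e": (2, 28)},
    "NP-dake":        {"a": (26, 26), "b": (1, 25), "c": (24, 26), "d": (18, 24), "e": (5, 23)},
    "10-izyoo-no NP": {"a": (27, 27), "b": (2, 27), "c": (20, 27), "d": (22, 27), "e": (6, 27)},
}
OK_ROWS, STAR_ROWS = ("a", "c", "d"), ("b", "e")

for pair, rows in results.items():
    ok_min = min(n / d for r, (n, d) in rows.items() if r in OK_ROWS)
    star_max = max(n / d for r, (n, d) in rows.items() if r in STAR_ROWS)
    print(f"{pair}: lowest ok-rate {ok_min:.2f}, highest *-rate {star_max:.2f}")
```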

Precedence-based BVA
We have assumed, following Ueyama 1998, that the condition in () can be clearly detected only if we consider BVA involving a pair such as those given in (). We repeat () and () again for convenience.

() a. NP-sae ('even NP'), so-ko ('that place/it')
b. NP-dake ('only NP'), so-ko ('that place/it')
c. 10 izyoo-no NP ('ten or more NPs'), so-ko ('that place/it')

() BVA(A, B) is possible only if the trace of A c-commands B at LF.

It is also observed in Ueyama 1998 that BVA involving a pair such as () need not be based on LF c-command; its availability can instead be sensitive to PF precedence, although it can also be based on LF c-command; see (iii) in footnote 61 above.66

() a. do-no NP ('which NP'), so-ko ('that place/it')
b. do-no NP-mo ('which NP also'), so-ko ('that place/it')

Now, if we did not recognize the two types of dependency in Ueyama 1998, i.e., if we assumed that () were to hold regardless of the choice of a binder-bindee pair, as much work in the field seems to, we would have the consequence in (). Let us call such an approach a uniform approach for ease of exposition.

() The uniform approach:
BVA(dono N, soko) is available only if the trace of dono N c-commands so-ko at LF.

Under the uniform approach, we would therefore obtain the following *Schemas and okSchemas.

() *Schemas under () (cf. ()):
a. *Schema1

[ … so-ko … ]-ga QP-o V
BVA(QP, so-ko)

b. *Schema2

[ … QP-o … V-ta N]-ga [ … so-ko … ]-o V

() okSchemas under () (cf. ()):
a. okSchema1-1 / okSchema2-1

QP-ga [ … so-ko … ]-o Vb. okSchema1-2

QP-o [ … so-ko … ]-ga V

66 It is proposed in Ueyama 1998 that the relevant properties follow from the theory of anaphoric dependency proposed there; but we will not present the details of the proposal here; see sections xxx above and Ueyama 1998: xx for details.


c. okSchema1-3

[ … so-ko … ]-o QP-ga V

The paradigm in () contains examples conforming to () and (). More specifically, (a) conforms to the okSchema in (a), (b) to *Schema in (a), (c) to okSchema in (c), (d) to okSchema in (b), and (e) to *Schema in (b).

() a. Do-no zidoosyagaisya-ga so-ko-no kogaisya-o uttaeta no?
which-GEN auto:company-NOM that-place-GEN subsidiary-ACC sued

'Which automobile company sued its subsidiaries?'

b. So-ko-no kogaisya-ga do-no zidoosyagaisya-o uttaeta no?
that-place-GEN subsidiary-NOM which-GEN auto:company-ACC sued

'Its subsidiaries sued which automobile company?'

c. So-ko-no kogaisya-o do-no zidoosyagaisya-ga uttaeta no?
that-place-GEN subsidiary-ACC which-GEN auto:company-NOM sued

'Its subsidiaries, which automobile company sued?'

d. Do-no zidoosyagaisya-o so-ko-no kogaisya-ga uttaeta no?
which-GEN auto:company-ACC that-place-GEN subsidiary-NOM sued

'Which automobile company, its subsidiaries sued?'

e. [Kyonen Nissan-ga do-no zidoosyagaisya-o uttaeta saiban]-ga
last:year Nissan-NOM which-GEN auto:company-ACC sued lawsuit-NOM

so-ko-o toosan-ni oiyatta no?
that-place-ACC bankruptcy-to forced

'The lawsuit(s) in which Nissan sued which automobile company last year forced it to bankruptcy?'

Under the uniform approach, the speaker judgments on (b) and (e) would be predicted to be quite bad, in contrast to those on (a), (c), and (d), as indicated in ().

() Predictions under the uniform approach:

(a) N/A
(b) Unacceptable
(c) N/A
(d) N/A
(e) Unacceptable

Under Ueyama 1998, on the other hand, only (b) is a *Example among the examples in (). The predictions under Ueyama 1998 are summarized in ().

() Predictions under Ueyama 1998:

(a) N/A
(b) Unacceptable
(c) N/A
(d) N/A
(e) N/A

The crucial difference between the two approaches has to do with (e). Ideally, we would expect the mean scores on the examples in () as in () under these two approaches.

() a. Under the uniform approach:

(a) N/A
(b) −2
(c) N/A
(d) N/A
(e) −2

b. Under Ueyama 1998:

(a) N/A
(b) −2
(c) N/A
(d) N/A
(e) N/A

As noted in relation to (), the examples marked with "N/A" are not, strictly speaking, predicted to be acceptable while those that are marked "−2" are predicted to be unacceptable. We, however, do expect a significantly more acceptable status on the former than on the latter.

Results

The results of the experiment are summarized in ().

()
      Informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)   25 out of 25                 +2.00        0.00                 okSchema1/2-1 in (a)
(b)    3 out of 25                 −1.24        1.07                 *Schema2 in (a)
(c)   22 out of 25                 +1.48        1.20                 okSchema1-3 in (c)
(d)   23 out of 25                 +1.60        1.10                 okSchema1-2 in (b)
(e)   20 out of 25                 +1.04        1.56                 *Schema2 in (b)

The informant judgments on (a-d) and those on (a-d) seem more or less comparable to each other. We repeat () here.

()
      Informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)   30 out of 30                 +2.00        0.00                 okSchema1/2-1 in ()
(b)    3 out of 29                 −1.66        1.03                 *Schema1 in (a)
(c)   25 out of 29                 +1.31        1.23                 okSchema1/2 in (b)
(d)   25 out of 29                 +1.55        1.10                 okSchema1/2 in (a)
(e)    2 out of 28                 −1.54        1.09                 *Schema2 in (b)

Given the mean score on (b) in contrast to those on (a), (c), (d), one might be tempted to conclude that we have obtained repeatable phenomena under the uniform approach; see (). The informant judgments on (e), however, are quite unexpected under the uniform approach in (). If () were to hold, the informant judgments on (e) should be much worse than '+1.04'. Under Ueyama's (1998) approach, on the other hand, the difference between (e) and (e) is as expected. The results summarized in (), along with those summarized in (), (), and (), thus provide strong confirming evidence in support of Ueyama's (1998) two types of dependency and argue against the uniform approach.
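For concreteness, summary figures of the kind reported in the tables above can be computed from raw informant ratings on the −2 to +2 scale. The sketch below is only an illustration: the rating vector is invented, and the acceptance criterion (rating above 0) is our own assumption, not necessarily the criterion used in the actual study.

```python
import statistics

def summarize(ratings, threshold=0):
    """Summarize informant ratings given on a -2..+2 scale.

    An informant counts as 'accepting' the example if the rating
    exceeds `threshold` (an illustrative assumption; the study's
    actual acceptance criterion may differ).
    """
    accepted = sum(1 for r in ratings if r > threshold)
    mean = statistics.mean(ratings)
    # Population SD; a sample SD (statistics.stdev) would also be defensible.
    sd = statistics.pstdev(ratings)
    return accepted, round(mean, 2), round(sd, 2)

# Invented ratings for illustration only (10 informants):
ratings = [2, 2, 1, 2, -1, 2, 1, 2, 2, 1]
accepted, mean, sd = summarize(ratings)  # 9 accepted, mean +1.4, SD 0.92
```

A high mean with a large standard deviation (as with (e) above) signals exactly the kind of split judgment pattern that distinguishes the two approaches under discussion.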

The informant judgments on the examples in (), as summarized in (), provide similar confirmation.

() a. Do-no zidoosyagaisya-mo so-ko-no kogaisya-o uttaeta.
which-GEN auto:company-also that-place-GEN subsidiary-ACC sued

'Every automobile company sued its subsidiaries.'


b. So-ko-no kogaisya-ga do-no zidoosyagaisya-mo uttaeta.
that-place-GEN subsidiary-NOM which-GEN auto:company-also sued

'Its subsidiaries sued every automobile company.'

c. So-ko-no kogaisya-o do-no zidoosyagaisya-mo uttaeta.
that-place-GEN subsidiary-ACC which-GEN auto:company-also sued

'Its subsidiaries, every automobile company sued.'

d. Do-no zidoosyagaisya-mo so-ko-no kogaisya-ga uttaeta.
which-GEN auto:company-also that-place-GEN subsidiary-NOM sued

'Every automobile company, its subsidiaries sued.'

e. [Kyonen Nissan-ga do-no zidoosyagaisya-o uttaeta saiban]-mo
last:year Nissan-NOM which-GEN auto:company-ACC sued lawsuit-also

so-ko-o toosan-ni oiyatta.
that-place-ACC bankruptcy-to forced

'Every lawsuit in which Nissan sued an automobile company last year forced it to bankruptcy.'

()
      Informants who accepted it   Mean Score   Standard Deviation   Corresponds to:
(a)   25 out of 25                 +2.00        0.00                 okSchema1/2-1 in (a)
(b)    6 out of 25                 −0.76        1.42                 *Schema2 in (a)
(c)   23 out of 25                 +1.52        1.10                 okSchema1-3 in (c)
(d)   21 out of 24                 +1.58        0.91                 okSchema1-2 in (b)
(e)   20 out of 25                 +1.16        1.38                 *Schema2 in (b)

Concluding remarks

The primary object of our inquiry is the Computational System (CS), hypothesized to be at the center of the language faculty. It is competence, as opposed to performance, that constitutes our object of inquiry in the terms of Chomsky's (1965) distinction. Yet the data available to us are performance data, including informants' linguistic judgments. It seems safe to assume that the linguistic intuitions of the speaker arise in a variety of ways, i.e., are affected by various factors. It then follows that not every observation can be regarded as a direct, or even potentially revealing, reflection of the properties of the CS; that is to say, not every observation qualifies as an object of explanation for a theory concerned with the properties of the CS.

We maintain that in order to identify what may qualify as an object of explanation, it must first be established that speakers have clear and robust judgments on examples of a certain schematic form, consistently finding them unacceptable under a specified interpretation. If the unacceptability of such examples (which we have referred to as *Examples) is indeed due to some CS-related property, as hypothesized, the relevant informant judgments should be robust and should not be affected by pragmatic factors or by different choices of the lexical items allowed to vary in the schema of which a *Example is an instance. Such a schema has been called a *Schema. Given that the status of a *Example under the specified interpretation is due to the hypothesized property P at LF, there should be examples, minimally different from it, in which the relevant condition(s) for P is/are satisfied. We have referred to such examples as okExamples, and to a schema of which the okExamples are instances as an okSchema.

A repeatable phenomenon obtains when the informant judgments on a number of *Examples of a *Schema are as predicted (i.e., clearly unacceptable) and robust, and when, furthermore, the okExamples of the corresponding okSchemas are judged to be significantly more acceptable than the *Examples. When we obtain a repeatable phenomenon, we can be hopeful that the generalization in question is a reflection of some CS-related property. We have maintained that informant intuitions that do not form a repeatable phenomenon have not (yet) attained the status of data in generative grammar; for there is no strong reason to suspect that they reflect properties of the CS in a way that would likely lead us to a discovery of the nature of the CS. We have further stressed the importance of building our research on repeatable phenomena and on hypotheses that are backed up by repeatable phenomena. A failure to do so is likely to lead to research of the sort alluded to in chapter 1: section 1.1, making it unclear what empirical progress is being made or what progress we can expect to make.
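The criterion just summarized can be rendered as a simple decision procedure. The sketch below is our own operationalization, offered only for concreteness: the cutoff values and the mean-gap comparison are illustrative assumptions, not the evaluation method actually used in the experiments reported above.

```python
import statistics

def provisionally_repeatable(star_scores, ok_scores,
                             star_cutoff=-1.0, min_gap=1.5):
    """Provisional check for a repeatable phenomenon.

    star_scores: mean ratings (-2..+2) for *Examples of a *Schema
    ok_scores:   mean ratings for the corresponding okExamples
    Both cutoffs are illustrative assumptions. Returns True only if
    every *Example is clearly unacceptable AND the okExamples are,
    on average, much more acceptable.
    """
    if not star_scores or not ok_scores:
        return False
    all_star_bad = all(s <= star_cutoff for s in star_scores)
    gap = statistics.mean(ok_scores) - statistics.mean(star_scores)
    return all_star_bad and gap >= min_gap

# Patterns analogous to the tables above (values illustrative):
provisionally_repeatable([-1.59, -1.66], [1.93, 1.30, 1.26])  # True
provisionally_repeatable([-0.76, 1.04], [2.00, 1.48])         # False: a *Example not clearly bad
```

Note that, as stressed in the text, a `True` outcome is only provisional, whereas a `False` outcome on well-constructed *Examples disqualifies the alleged generalization.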

While it is not entirely clear how repeatability could be measured in the context of cross-linguistic empirical research, it seems useful to consider the issue in light of the thesis in () suggested above.67

() Across-speaker repeatability can be meaningfully addressed only if within-speaker repeatability (across-occasion and across-example repeatability) obtains.

That is to say, in accordance with the methodological considerations given above, it seems reasonable to conclude that a cross-linguistic empirical claim can be meaningfully addressed only if within-speaker repeatability obtains with regard to the issue/phenomenon in the language(s) under discussion,68 and it would in fact be desirable to have achieved some degree of across-speaker repeatability as well. In other words, it seems rather senseless to address a cross-linguistic empirical claim without having obtained within-speaker repeatability, and across-speaker repeatability, in each of the languages under discussion. This seems rather common-sensical. But the point is perhaps worth making in light of the fact that cross-linguistic studies often make crucial reference to an alleged generalization that falls far short of being a repeatable phenomenon, as in the case of the alleged generalizations discussed above regarding zibunzisin and otagai in Japanese.

Convincing others (presumably other practitioners in the field, if not those outside the field), I believe, is part of science. Obtaining repeatability is a necessary condition for convincing others, and that makes it imperative to develop a reliable experimental methodology for testing the validity of one's hypotheses, and especially a reliable method of evaluating the results of an experiment. But such methods have a function beyond convincing others: they also make us willing to be convinced by others. One might find the point rather obscure if one thinks only about interaction among the native speakers of one's own language(s). Suppose one is evaluating someone else's work that deals with a language that one does not speak natively. One can never be sure about the reliability of the generalizations presented in such work. What does one do then? Some people may simply assume that the presented generalizations are valid, i.e., that they are repeatable phenomena in the terms of the preceding discussion. Some people may do so only if the alleged generalizations support what they are pursuing; this seems to be a rather typical practice in the field, as far as I can tell. Still others may reason as follows: "Well, maybe valid; but maybe not. So, I will take them as valid only if I detect something analogous in my own language, and until then I leave them in the category of 'Maybe'."

Now, we would have a rather different attitude if the alleged generalization were presented along with the relevant experiment(s) and result(s), presumably accompanied by the *Schema(s) and okSchemas, and the *Examples and okExamples. We would in that case be much more willing to accept the proposed generalizations as valid, i.e., as a repeatable phenomenon. Accepting such a repeatable phenomenon might in fact help us with research on our own language, since we would then have good reason to believe that, unless there is reason otherwise, the same generalization should hold in our own language, to the extent that the generalization is based on a universal statement. As briefly addressed in the text, however, much cross-linguistic research seems to proceed without paying serious attention to whether an alleged generalization constitutes a repeatable phenomenon, and we sometimes, if not often, observe an alleged generalization being adopted despite a demonstration in published works that it fails to qualify as a repeatable phenomenon.

As noted above, a repeatable phenomenon is accepted as such only provisionally, and it must always be subjected to further scrutiny and rigorous attempts to invalidate it. The point is worth repeating here. As noted above, it is not sufficient, and the researcher should not be content, even if the results of a particular experiment have turned out to be in harmony with the proposed or predicted generalization. The point is to ensure (a), not (b).

67 This will be discussed more in depth in chapter 3.

68 As suggested in the preceding discussion, we do not necessarily expect a high degree of repeatability on okExamples although we do on *Examples.


() a. No matter how many times we conduct an experiment on the validity of a repeatable phenomenon, *Examples conforming to the *Schema should be judged unacceptable.

b. There is at least one experiment whose results are in harmony with the generalization in question.

The significance of an outcome of an experiment is thus qualitatively different in (a) than in (b).

() a. The result of an experiment invalidates the alleged generalization in question; i.e., some *Examples conforming to the *Schema are not found to be clearly unacceptable by (some) informants.

b. The result of an experiment supports the validity of the generalization in question; i.e., every *Example conforming to the *Schema is found to be clearly unacceptable, and the corresponding okExamples are judged significantly more acceptable.

The outcome in (a) is much more significant than that in (b). The result in (b) qualifies the generalization in question as a repeatable phenomenon, but only provisionally. The result in (a), on the other hand, disqualifies the alleged generalization as a repeatable phenomenon, and this time not just provisionally (although one can always check if the experiment has been conducted in a reliable manner).

The failure to understand this point might well result in one's adhering to an alleged but invalid generalization, despite the failure to demonstrate (a); one might continue to regard a result of the sort alluded to in (b) as being significant. The point can be illustrated as follows, with some simplification. Researcher A puts forth a hypothesis that has the consequence that a certain interpretation is impossible in examples that conform to a *Schema. Researcher B points out that there are some *Examples (conforming to that *Schema) that she and others find acceptable under the interpretation in question. Indeed, A's *Examples turn out to be okExamples for B, corresponding to B's own *Example(s); robust informant judgments have been independently observed for B's *Schema (i.e., for *Examples of B's *Schema). Researcher A responds by saying that he and the other informants he consulted do not find such examples (i.e., the okExamples for B and the *Examples for A) acceptable. Researcher A thus takes that as evidence for the hypothesis in question, and continues to use it in the subsequent theoretical discussion. What Researcher A is missing should be clear in light of the preceding discussion. It is not difficult to imagine what could happen if such a move were repeated with a number of hypotheses, i.e., with hypotheses that are not backed up by a repeatable phenomenon; we would end up in a situation of the sort alluded to in section 1.2 of chapter 1.

In summary, we maintain that the key for extracting from performance data what is likely to lead us to a discovery of properties of the Computational System is the recognition that the generative grammarian's task in fact includes the identification of the relevant data. Only by taking this point to heart and by focusing on and building on repeatable phenomena, do we have a hope of making generative grammar an empirical science and ultimately making it a progressive research program in the sense of Lakatos 1970, 1978. In chapter 3, we will articulate the conceptual basis of repeatable phenomena and address a number of related issues.

References (to be completed)

Chomsky, Noam. 1986. Knowledge of Language. New York: Praeger.
Dalrymple, Mary, Sam A. Mchombo, and Stanley Peters. 1994. Semantic Similarities and Syntactic Contrasts between Chichewa and English Reciprocals. Linguistic Inquiry 25: 145-163.
Fiengo, Robert, and Robert May. 1994. Indices and Identity. Cambridge: MIT Press.
Fukui, Naoki. 1986. A Theory of Categories Projection and Its Applications. Doctoral dissertation, MIT.
Heim, Irene, Howard Lasnik, and Robert May. 1991. Reciprocity and Plurality. Linguistic Inquiry 22: 63-102.
Hoji, Hajime. 1990. Theories of Anaphora and Aspects of Japanese Syntax. Ms., USC.
Hoji, Hajime. 1995a. Demonstrative Binding and Principle B. NELS 25: 255-271.
Hoji, Hajime. 1995b. Null Object and Sloppy Identity in Japanese. To appear in Linguistic Inquiry.
Hoji, Hajime. 1996a. Sloppy Identity and Formal Dependency. WCCFL 15.
Hoji, Hajime. 1996b. Sloppy Identity and Principle B. To appear in 'Atomism' and Binding, eds. H. Bennis et al. Foris Publications.
Hoji, Hajime. 1996c. A Review of Japanese Syntax and Semantics by S.-Y. Kuroda. To appear in Language.
Hoji, Hajime. 1998a. Formal Dependency, Organization of Grammar and Japanese Demonstratives. In Japanese/Korean Linguistics 7, eds. N. Akatsuka, H. Hoji, S. Iwasaki, S.-O. Sohn, and S. Strauss, 649-677. Stanford: Center for the Study of Language and Information.
Hoji, Hajime. 2003. Falsifiability and Repeatability in Generative Grammar: A Case Study of Anaphora and Scope Dependency in Japanese. Lingua 113(4-6): 377-446.
Hoji, Hajime, S. Kinsui, Y. Takubo, and A. Ueyama. 2003. Demonstratives in Modern Japanese. In Functional Structure(s), Form and Interpretation: Perspectives from East Asian Languages, eds. A. Li and A. Simpson, 97-128. London: Routledge.
Huang, James. 1988. Comments on Hasegawa's Paper. In Proceedings of Japanese Syntax Workshop: Issues on Empty Categories, eds. Wako Tawa and Mineharu Nakayama, 77-93.
Ishii, Yasuo. 1989. Reciprocal Predicates in Japanese. In Proceedings of the Sixth Eastern States Conference on Linguistics, eds. Ken deJong and Yongkyoon No, 150-161. The Ohio State University.
Kitagawa, Yoshihisa. 1986. Subjects in Japanese and English. Doctoral dissertation, University of Massachusetts, Amherst.
Kuno, Susumu, and Soo-Yeon Kim. 1994. The Weak Crossover Phenomena in Japanese and Korean. In Japanese/Korean Linguistics 4, ed. N. Akatsuka, 1-38. Stanford: CSLI.
Kuroda, S.-Y. 1988. Whether We Agree or Not: A Comparative Syntax of English and Japanese. Linguisticae Investigationes 12: 1-47.
Lasnik, Howard. 1989. On the Necessity of Binding Conditions. In Essays on Anaphora, ed. H. Lasnik, 149-167. Dordrecht: Kluwer Academic Publishers.
Lebeaux, David. 1983. A Distributional Difference between Reciprocals and Reflexives. Linguistic Inquiry 14: 723-730.
Miyagawa, Shigeru. 1997. Against Optional Scrambling. Linguistic Inquiry 28: 1-25.
Nishigauchi, Taisuke. 1992. Syntax of Reciprocals in Japanese. Journal of East Asian Linguistics 1: 157-196.
Pesetsky, David. 1982. Paths and Categories. Doctoral dissertation, MIT.
Pollard, Carl, and Ivan A. Sag. 1992. Anaphors in English and the Scope of Binding Theory. Linguistic Inquiry 23: 261-303.
Saito, Mamoru. 1992. Long Distance Scrambling in Japanese. Journal of East Asian Linguistics 1: 69-118.
Sorace, A., and F. Keller. 2005. Gradience in Linguistic Data. Lingua 115: 1497-1527.
Ueda, Masanobu. 1984. On the Japanese Reflexive Zibun: A Non-parametrization Approach. Ms., University of Massachusetts, Amherst.
Ueyama, Ayumi. 1997. Scrambling in Japanese and Bound Variable Construal. To appear in 'Atomism' and Binding, eds. H. Bennis et al. Foris Publications.
Ueyama, Ayumi. 1998. Two Types of Dependency. Doctoral dissertation, University of Southern California. Distributed by GSIL Publications, University of Southern California, Los Angeles.
Yang, Dong-Whee. 1984. The Extended Binding Theory of Anaphors. Theoretical Linguistics Research 1: 195-218.
