
Measuring syntactic complexity in L2 pragmatic production: Investigating relationships among pragmatics, grammar, and proficiency

Soo Jung Youn

English Department, Northern Arizona University, 700 S. Humphreys Cdr, PO Box 6032, Flagstaff, AZ 86011, USA
E-mail address: [email protected]

System 42 (2014) 270–287
Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/system

Article history: Received 12 November 2012; Received in revised form 10 December 2013; Accepted 20 December 2013

Keywords: Syntactic complexity; L2 pragmatics; Proficiency; Task-based language assessment

Abstract

The study examines relationships among pragmatics, grammar, and proficiency by comparing the syntactic complexity of ESL learners' written pragmatic production across two independent criterion measures: proficiency and pragmatic performance. Participants were 40 ESL learners who completed pragmatic assessment tasks. Pragmatic competence was assessed by three trained raters using task-dependent analytical rating criteria. Syntactic complexity was assessed using three measures: (a) global complexity from mean length of T-unit, (b) phrasal-level complexity from mean length of clause, and (c) subordination complexity from mean number of clauses per T-unit. The results showed that learners did not possess concomitant written pragmatic competence according to their proficiency levels. The global complexity measure was general enough to differentiate levels in both proficiency and pragmatic performance, compared to phrasal-level and subordination complexity. Yet, the magnitudes of the three complexity measures' differences between pragmatic performance levels were more noticeable, compared to those between proficiency levels. Except for phrasal-level complexity, learners' pragmatic performances were more highly correlated with the syntactic complexity of their pragmatic production than with their proficiency levels. Pragmatically advanced learners produced longer utterances, more complex subclausal structures at the phrasal level, and more subordination, suggesting the crucial roles played by syntactically complex structures in expressing pragmatic functions.
© 2013 Elsevier Ltd. All rights reserved.

0346-251X/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.system.2013.12.008

1. Introduction

Second language (L2) pragmatic development has increasingly received attention from diverse theoretical and methodological perspectives, with the investigation of the relationships among pragmatics, grammar, and L2 proficiency being a major area of interest (e.g., Bardovi-Harlig, 2000, 2001; Takahashi, 1996, 2001, 2005). Two seemingly opposing hypotheses of the developmental trajectories of grammar and pragmatics have been discussed (Kasper & Rose, 2002): (1) pragmatics precedes grammar and (2) grammar precedes pragmatics. The proposition of the primacy of pragmatics preceding grammar is supported by the universal pragmatics principle, which posits that adult L2 learners with established first language (L1) pragmatic knowledge bring their discourse, pragmatic, and sociolinguistic competence to bear when learning L2 pragmatics. One example of the primacy of pragmatics is Wes, a Japanese L1 adult described in Schmidt's (1983) study, who showed considerable pragmatic and discourse competence despite his limited knowledge of English grammar. On the other hand, a large body of pragmatics literature supports the primacy of grammar over pragmatics, holding that learners with established grammar still fall short in utilizing their grammatical ability to express appropriate illocutionary force. One example of this is syntactic complexity in Japanese L2 adult learners' request production. Takahashi (1996, 2001) showed that Japanese learners of English as a foreign language (EFL) preferred mono-clausal request expressions (e.g., Please or Would/Will you) to bi-clausal expressions (e.g., I was wondering if you could) even in highly imposing request situations, regardless of proficiency. This finding indicates that L2 learners with established grammar and advanced proficiency still fall short in using various syntactic structures in pragmatically appropriate ways, such as the use of past tenses and the progressive aspect with conditional clauses as mitigation devices.

The two contradicting hypotheses illustrate that grammar and pragmatics do not necessarily develop in a linear manner. L2 learners with advanced proficiency may not necessarily demonstrate advanced pragmatic competence or a high level of syntactic complexity in their pragmatic production. The relationships among pragmatics, language proficiency, and grammar (especially the ability to employ syntactically complex structures) remain an open question. The present study examines these relationships by measuring levels of syntactic complexity in the written pragmatic production of learners of English as a Second Language (ESL) with regard to two independent criterion measures: proficiency and pragmatic performance. In order to elicit learners' written pragmatic production, four pragmatic assessment tasks and the corresponding rating criteria were developed drawing on politeness theory (Brown & Levinson, 1987). The notion of face, consisting of positive and negative face, is central to politeness theory. Positive face refers to the desire of the individual to be approved of, while negative face refers to the desire of the individual not to be imposed on. According to Brown and Levinson (1987), politeness is the manifestation of respect for an individual's face. Out of the various strategies employed to maintain politeness, grammar is particularly pertinent to the focus of this study. Showing positive face includes the use of indirect forms in utterances, such as the modal verbs could or would. Saving negative face includes softening or hedging devices such as "I was thinking", "we could perhaps", and "if that's okay". These devices are used so that the speaker does not impose on the listener whenever the speaker is making suggestions. In addition to the notion of face, Brown and Levinson identified three sociological variables which influence politeness strategies: (a) the power difference between interlocutors, (b) the social distance between interlocutors, and (c) the absolute ranking of the face-threatening act. In the next sections, the literature on the relationships among pragmatics, grammar, and proficiency is reviewed.

1.1. Interlanguage pragmatics and grammar

Building on Kasper and Schmidt's (1996) call for a stronger acquisitional focus within interlanguage pragmatics research, Bardovi-Harlig (1999) proposed to broaden the field of inquiry through investigating an interlanguage system of pragmatics. One of her proposals, which was grounded in consistent previous research findings that "high levels of grammatical competence do not guarantee concomitant high levels of pragmatic competence" (Bardovi-Harlig, 1999, p. 686), included examining the relationship between emergent pragmatic competence and interlanguage grammar. Among the wide range of grammatical aspects that are associated with interlanguage pragmatics, the tense-mood-aspect system, the degree of grammatical complexity, and the use of formulaic expressions are closely related to pragmalinguistics (i.e., linguistic resources and strategies for conveying communicative action) (Leech, 1983; Thomas, 1983). Previous research findings well illustrate the importance of these grammatical choices in pragmatics. For instance, L2 learners' failure to express pragmatic meaning appropriately using the tense-mood-aspect system is discussed in Bardovi-Harlig and Hartford (1993). The study reported the use by non-native English speakers of the modal expression will (as in "I will take syntax"), which indicates a too-strong commitment and differs from what a native English speaker would have said in the same situation, namely "I was thinking of taking syntax" (using past tense and progressive aspect as a mitigating device). Bardovi-Harlig and Hartford argued that these examples show that the non-native English speakers lacked sociopragmatic knowledge (i.e., social perceptions underlying participants' communicative action) (Leech, 1983), and as a result failed to realize that their production was inappropriate. At the same time, the findings from both Takahashi (1996, 2001) and Bardovi-Harlig and Hartford (1993) may also illustrate learners' lack of control over the pragmatic meaning of L2 grammar, despite their well-established grammatical knowledge of forms.

Several studies also explored how learners' pragmatic awareness is related to grammatical awareness. Bardovi-Harlig and Dörnyei (1998) compared how ESL learners and Hungarian EFL learners rated grammatical and pragmatic errors using carefully designed contextualized pragmatic and grammatical judgment tasks presented in a video format. The results showed that error recognition for both grammar and pragmatics significantly differed across ESL and EFL contexts. The ESL learners scored pragmatic errors more severely, while the EFL learners were more severe on grammatical errors. In replications of Bardovi-Harlig and Dörnyei's study, Niezgoda and Röver (2001) and Schauer (2006) explored the same issue in the contexts of Czech EFL and German EFL respectively, but mixed results were reported. Schauer's results supported Bardovi-Harlig and Dörnyei's finding, namely that ESL learners were more sensitive to pragmatic violations than grammatical ones, while EFL learners were more sensitive to grammatical errors. However, Niezgoda and Röver found that EFL and ESL environments had little effect on learners' pragmatic and grammatical awareness, since the Czech EFL group recognized more pragmatic errors and perceived them to be more serious than the ESL group did. These mixed results might be due to differences in the proficiency levels of the participants. The Czech EFL group in Niezgoda and Röver's study was highly motivated and had a high aptitude for learning English compared with Bardovi-Harlig and Dörnyei's Hungarian EFL group. This potentially implicates an L2 proficiency effect on learners' pragmatic competence.

It should be noted that Bardovi-Harlig and Dörnyei's study and the subsequent replication studies examined grammatical and pragmatic awareness, rather than pragmatic performance; learners' production data were not included in these studies. Analyses of learners' production make it possible to examine how learners use various grammatical resources to express pragmatic meaning, and can provide insight into the complex relationship between pragmatics and grammar (Kasper, 2009). Such research has been rare in the field, however, except for a few notable studies (e.g., Fulcher & Márquez Reiter, 2003; Taguchi, 2007a). Thus, the current study focuses on the analysis of learners' pragmatic production to shed new light on understanding the relationship between pragmatics and grammar.

1.2. Interlanguage pragmatics and L2 proficiency

Theoretical models of L2 competence conceptualize pragmatic competence as a part of language proficiency (e.g., Bachman & Palmer, 1996). However, the relationship between L2 proficiency and pragmatics has been unclear in previous studies, with mixed results being reported. Positive effects of proficiency on pragmatics have been reported (e.g., Kasper & Roever, 2005; Roever, 2006; Taguchi, 2007a, 2011). For example, Roever (2006) reported that proficiency was a strong factor for the production of speech acts, which might be due to the study's focus on pragmalinguistic knowledge (i.e., the language interface of pragmatics). Taguchi (2007a) also found a significant proficiency effect on the processing dimension of pragmatic competence. Japanese EFL learners' pragmatic production on requests and refusals was analyzed for overall appropriateness, planning time, and speech rate. A positive proficiency effect was found on overall appropriateness and speech rate, but not on planning time.

However, other studies showed a relatively weak L2 proficiency effect. For instance, Matsumura (2003) found a stronger cause-effect relationship between Japanese EFL learners' pragmatic competence and L2 exposure than with proficiency. Compared to Roever's (2006) finding of a strong proficiency effect on the pragmalinguistic dimension of pragmatics, Matsumura's focus on the sociopragmatic dimension of pragmatics (i.e., social perceptions underlying participants' communicative action) potentially explains the decreased proficiency effect. Takahashi (2005) also reported a low correlation between pragmalinguistic awareness and proficiency. Instead, motivation subscales, such as intrinsic motivation and personal relevance toward learning goals, were found to be more correlated with Japanese EFL learners' L2 pragmalinguistic awareness. These findings support the view that L2 proficiency is not necessarily a primary factor in predicting learners' L2 pragmatic competence.

The studies discussed above examined various dimensions of pragmatics using a wide range of testing instruments in order to investigate the relationship between pragmatics and proficiency. Examples include the measurement of pragmatic competence using a multiple-choice questionnaire (Matsumura, 2003), the investigation of pragmalinguistic knowledge using discourse completion tasks (Roever, 2006), and the evaluation of pragmalinguistic awareness of request forms using an immediate retrospective questionnaire (Takahashi, 2005). Despite the attempt to examine various dimensions of pragmatics, an explicit focus on the relationship between pragmatic performance and proficiency has been rare. For this reason, the findings of previous research have limited application to another aspect of pragmatics, namely learners' pragmatic performance in real-life situations. Although researchers have increasingly attempted to assess pragmatic performance (e.g., Hudson, Detmer, & Brown, 1992, 1995; Taguchi, 2007a; Grabowski, 2009), more research on the explicit relationship between pragmatic performance and proficiency is needed.

In order to examine the theoretically and empirically unresolved relationships among pragmatics, grammar, and proficiency, the research gaps can be addressed in several ways. First, developing authentic pragmatic tasks based on learners' needs, along with reliable and valid rating criteria, will enable us to measure pragmatic performance and to investigate its explicit relationship with L2 proficiency. Secondly, a systematic investigation of the syntactic complexity of L2 learners' pragmatic production elicited from the authentic pragmatic tasks will help us to examine how pragmatically competent L2 learners utilize grammar, particularly syntactically complex linguistic resources. The important roles of syntactically complex grammar in pragmatics have been examined previously (e.g., Bardovi-Harlig, 1999; Bardovi-Harlig & Hartford, 1993; Takahashi, 1996, 2001). However, little has been reported about the types and degrees of syntactic complexity at differing levels of pragmatic performance and how they vary from those found at different proficiency levels. Thus, the present study proposes employing three distinct syntactic complexity measures that tap global, phrasal-level, and subordination complexity, which are discussed in the next section.

    1.3. Syntactic complexity measures

Complexity, accuracy, and fluency (e.g., Skehan, 1998) have been fundamental concepts in examining learner language in the field of Second Language Acquisition (SLA). Among these, complexity has been considered the most complicated and ambiguous construct (Housen & Kuiken, 2009; Norris & Ortega, 2009). Two types of complexity have been identified: cognitive complexity caused by task types, and syntactic complexity. The current study focuses on the latter. Among varying definitions of syntactic complexity, Ortega (2003, p. 492) has defined it as "the range of forms that surface in language production and the degree of sophistication of such forms". Three types of sub-measures of syntactic complexity have been commonly employed: overall complexity, coordination, and

subordination. These sub-constructs of syntactic complexity develop differently across proficiency levels (Norris & Ortega, 2009). For example, coordination occurs earlier than subordination, and clause-internal complexification happens at a later stage of proficiency development. The syntactic complexity of L2 learners' language production has been widely measured not only in the field of SLA but also in L2 writing studies to examine learners' syntactic repertoire, grammatical development, writing ability, and variations across different writing tasks (e.g., Biber, Gray, & Poonpon, 2011; Larsen-Freeman, 2006; Ortega, 2003; Wolfe-Quintero, Inagaki, & Kim, 1998). Despite its popularity, the measuring of syntactic complexity still presents challenges. For instance, as discussed by Norris and Ortega (2009), various complexity measures entail either distinct or redundant sources of complexification, depending on the way they are calculated. Thus, in an attempt to address this challenge, three distinct measures that tap syntactic complexity multidimensionally without being redundant were selected in this study.

    2. Research questions

The present study seeks to examine the relationships among pragmatics, particular aspects of grammar (syntactic complexity), and proficiency by investigating three syntactic complexity measures of ESL learners' written pragmatic production with regard to two independent criterion measures: proficiency and pragmatic performance. In previous studies on the relationship between proficiency and syntactic complexity, it has been implicitly assumed that syntactic complexity is positively related to proficiency; however, the levels of syntactic complexity differ across proficiency levels depending on the sub-constructs of syntactic complexity (Norris & Ortega, 2009). Thus, the sub-constructs of syntactic complexity might also differ across different pragmatic performance levels. If this is true, measuring different types of syntactic complexity in learners' pragmatic production will enable us to understand how learners with varying degrees of pragmatic competence utilize syntactic structures in expressing pragmatic meaning. Additionally, the relationship between pragmatic performance and proficiency will be explicitly and empirically addressed. For this reason, the assessment of ESL learners' pragmatic performance in real-life situations, independent of learners' proficiency, is also an integral part of the present study. Two research questions guided this study:

1. How do three measures of syntactic complexity of ESL learners' written pragmatic production vary across different levels of L2 proficiency and pragmatic performance?

2. What are the relationships among measures of learners' proficiency levels, pragmatic performances, and syntactic complexity?

    3. Method

    3.1. Participants

The study participants were forty ESL students enrolled in a four-year American university. The language backgrounds of the participants included Japanese (n = 14, 35%), Korean (n = 11, 27.5%), Chinese (n = 7, 17.5%), and others (Indonesian, Marathi, Marshallese, Spanish, Tamil, Thai, Turkish). Considering that the target learner population in this study was college-level ESL learners, participants with different L1s were inevitable. The examinees were categorized into three levels of language proficiency according to their TOEFL internet-based test (iBT) scores or, for those who did not have TOEFL iBT scores, to the classes that they were taking: (a) low-intermediate proficiency learners studying in a university preparatory ESL program (n = 9), with TOEFL iBT scores ranging from 33 to 72; (b) intermediate and high-intermediate proficiency international undergraduate or graduate students enrolled in an English for Academic Purposes (EAP) program to fulfill university English language requirements (n = 19), with TOEFL iBT scores ranging from 73 to 100; and (c) advanced proficiency L2 learners of English studying at the American university who either scored above 100 on the TOEFL iBT, took a 100-level expository writing class, or had exited from the required university EAP program (n = 12). Additionally, the examinees were regrouped into three different pragmatic performance groups according to their performances on the pragmatic assessment tasks, independent of their proficiency levels. No particular L1 was dominant in any group.
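The score-based part of this grouping rule can be summarized as a simple threshold function. The sketch below is illustrative only (the function name is invented, and placement by class enrollment for examinees without TOEFL iBT scores, described above, is not modeled):

```python
# Sketch: the score-based branch of the proficiency grouping
# (33-72 low-intermediate, 73-100 intermediate, above 100 advanced).
def proficiency_group(toefl_ibt: int) -> str:
    if toefl_ibt <= 72:
        return "low"   # university preparatory ESL program
    if toefl_ibt <= 100:
        return "mid"   # EAP program
    return "high"      # advanced

print([proficiency_group(s) for s in (50, 85, 105)])  # ['low', 'mid', 'high']
```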

    3.2. Pragmatic assessment tasks and task-dependent rating criteria

Originally seven pragmatic assessment tasks, composed of four written and three spoken tasks, were developed based on the pragmatic learning needs of 102 ESL students in an EAP context (Youn, 2010), following a task-based assessment framework (Long & Norris, 2000; Norris, 2009). However, the present study only included the four written tasks, as presented in Table 1. Each task represented an authentic situation that requires pragmatic competence in an EAP setting, such as writing a recommendation letter request e-mail to a professor and giving suggestions on a classmate's class work. In addition to instructions and task-specific realia for each task (see Appendix A), task-dependent analytical rating criteria reflecting both sociopragmatic and pragmalinguistic features of each task were also developed based on qualitative analyses of examinees' performance data, as well as input from domain experts such as professors and employers (see Appendix B). Each criterion was associated with one of three levels, from 1 (inadequate) through 2 (able) to 3 (good). For example, Task 1, writing a recommendation letter request e-mail to a professor, was scored based on a detailed description of the corresponding quality for three levels with four rating criteria: (a) tone of the e-mail, (b) ability to deliver a clear message and contextual knowledge, (c) appropriate use of formulaic linguistic expressions, and (d) appropriate e-mail format. The rating criteria measured the diverse dimensions of written pragmatic performance of certain writing genres. For example, the third criterion, appropriate use of formulaic linguistic expressions, measured learners' pragmalinguistic knowledge. The last criterion, appropriate e-mail format, measured aspects such as a proper e-mail subject line title and appropriate terms of address. The data were collected in an individual session with each participant, which took approximately one hour. Participants were allowed to either type their answers on a computer or handwrite their answers on paper. The researcher retyped the handwritten answers to avoid any effect of handwriting quality on rating.

Three trained raters scored the learners' pragmatic performance using the task-dependent analytical rating criteria. One rater was an English native speaker and the two others were advanced L2 users of English; all had at least two years of ESL/EFL teaching experience and an MA degree in ESL. They were blind to the identities and backgrounds of the participants as well as to the goals of the study. The raters received three consecutive training sessions on using the rating criteria consistently, each taking about one hour. All ratings were completed within a period of two weeks.

3.3. Data analysis

Syntactic complexity was examined using the CLAN (Computerized Language Analysis) computer program (MacWhinney, 2000). This program made it possible to perform various automatic analyses of coded data, including frequency counts, word searches, co-occurrence analyses, mean length of T-unit, and morphosyntactic analysis. Participants' written pragmatic production data were converted into CHAT (Codes for the Human Analysis of Transcripts) format to facilitate various searches and analyses using the CLAN program. Within the CHAT transcript file for each participant's response, T-units and independent and dependent clauses were identified based on guidelines developed by Ortega, Iwashita, Rabie, and Norris (1999). An additional coding guideline was also developed for the present study. For example, when coding the task of writing a request e-mail to a professor, the salutation line (e.g., "Dear Professor") was not coded as an independent T-unit, since it ran the risk of significantly affecting the overall mean length of T-unit, mainly due to the formulaic part of the e-mail genre.

In order to estimate the intra-rater reliability of the coding, the researcher coded the data three times, one or two months apart. Intra-rater reliability measures for all units were calculated, and the average reliability for all codings was 0.97. Any discrepancies in coding, mostly due to simple coding mistakes, were identified and corrected. Following Norris and Ortega's (2009) call for a critical understanding of the multidimensionality of complexity measures, three complexity measures that tap distinct sources of syntactic complexification were computed for each participant's performance on each individual task: global complexity from mean length of T-unit (MLTU), phrasal-level complexity from mean length of clause (MLC), and subordination complexity from mean number of clauses per T-unit (CTU).

The Multi-faceted Rasch Measurement (MFRM) approach (Linacre, 1989) using the computer program FACETS, version 3.61.0 (Linacre, 2006) was also employed (a) to examine rater behaviors and task characteristics and (b) to provide a basis for creating groups of differing pragmatic performance levels. The MFRM approach allowed the analysis of examinees' abilities in relation to the raters' severities and the tasks' difficulties, which provides more precise estimates of learners' pragmatic performance compared to practices in classical testing theories (McNamara, 1996).

Table 1
Written pragmatic assessment tasks.

Task  Description
1     Write a recommendation letter request e-mail to a professor
2     Write an e-mail to a potential employer to send your application packet
3     Write an e-mail to refuse a professor's request of helping with your classmate's class project
4     Write constructive comments on a cover letter written by a classmate

4. Results

The results section starts with descriptive statistics for the four written pragmatic tasks, which are needed for inferential statistics (Chapelle & Duff, 2003). Results from the MFRM analysis using the FACETS program are provided first, as these serve as the basis for subsequent analyses. In particular, detailed measurement reports from the FACETS program allow us to examine the examinees' pragmatic abilities, the raters' performances, and the tasks' difficulties, which become evidence for the validity and reliability of measuring pragmatic performance in the present study. Finally, results from one-way ANOVA analyses and correlation analyses using SPSS version 20 are presented to answer the first and second research questions, respectively.

  • 4.1. Descriptive statistics for the four pragmatic assessment tasks

Table 2 shows the descriptive statistics for pragmatic performance scores across tasks and proficiency levels. With a maximum score of 3 for all tasks, high proficiency learners received the highest mean scores on all tasks, while the opposite was true for low proficiency learners. Low proficiency learners showed greater variability in scores across all tasks. However, large variation was found across all proficiency groups, particularly for Task 4, giving constructive comments on a cover letter written by a classmate, which was the most difficult task.

    4.2. FACETS summary and measurement reports

A FACETS summary shows the relative status of the four facets (examinee, rater, task, and rating category) used in the study in a single set of relationships, as shown in Fig. 1. Each column represents a different facet, including the relative abilities of the examinees, the relative harshness of the raters, the relative difficulties of the assessment tasks, and the rating categories. The person reliability statistic was 0.93, which means that the pragmatic tasks quite reliably divided the examinees into different levels of pragmatic ability. Among the four criteria of the rating category, the third criterion used for all tasks, appropriate use of formulaic linguistic expressions, was the most difficult, indicating the learners' lack of ability in utilizing linguistic expressions for pragmatics.

Detailed measurement reports for each task's difficulty from the FACETS analysis are shown in Table 3. The FACETS analysis employs a true interval scale, the logit scale, and results are expressed as logit values. By convention, tasks of above-average difficulty are indicated with a positive sign, while tasks of below-average difficulty are indicated with a negative sign. All fit values of the four written pragmatic tasks were within the range of 0.75 to 1.3 (Bond & Fox, 2007), indicating that no tasks were misfitting. This also means that the four pragmatic tasks contributed to measuring one construct (i.e., pragmatic competence), which ensures the construct validity of the test instruments. The logit values in Table 3 indicate each written task's difficulty. Task 4, giving constructive comments on a cover letter written by a classmate, was identified as the most difficult task, with the highest logit value of 0.67. The examinees' unfamiliarity with a cover letter as a written genre and the high level of pragmatic competence required in giving constructive comments made this task potentially challenging. Task 3, writing an e-mail refusing a professor's request to help with a classmate's class project, was the easiest task, with the lowest logit value of −0.94. The examinees seemed to manage refusal of a professor's request easily, possibly due to familiarity with the situation in an EAP setting or the relatively simple syntactic structures needed for apology (e.g., "I'm sorry") compared to request.
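To give the logit values an intuitive reading, the sketch below uses a simplified dichotomous Rasch-style form, not the full many-facet rating-scale model that FACETS estimates: the probability of a successful response follows from the difference between examinee ability and task difficulty (and, in MFRM, rater severity), all on the logit scale. The function name and the example values are illustrative assumptions:

```python
# Sketch: how logit differences map to probabilities in a simplified
# Rasch-style model (the full MFRM adds rating-scale thresholds).
from math import exp

def p_success(ability: float, difficulty: float, severity: float = 0.0) -> float:
    """Probability of success given ability, task difficulty,
    and rater severity, all expressed in logits."""
    logit = ability - difficulty - severity
    return 1 / (1 + exp(-logit))

# An examinee whose ability equals a task's difficulty succeeds
# half the time; a harder task (e.g., 0.67 logits, as for the most
# difficult task above) lowers that probability.
print(p_success(0.0, 0.0))               # 0.5
print(round(p_success(0.0, 0.67), 2))
```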

Detailed measurement reports of the three raters' performance are presented in Table 4. Here, the logit values represent relative rater severity. The raters varied in their degree of severity: the higher the logit value, the more severe the rater. Based on the logit values, Rater 1 was the most severe and Rater 2 was the least severe. The reliability of the measurement for the three raters was 0.87, indicating that the different degrees of severity among the three raters were relatively reliable. Finally,

Table 2
Descriptive statistics for four tasks across proficiency levels.

Proficiency  N   Mean  SD    Min   Max   Range

All tasks
High         12  2.48  0.20  2.17  2.77  0.60
Mid          19  2.18  0.15  1.88  2.42  0.54
Low           9  1.84  0.39  1.21  2.31  1.10

Task 1
High         12  2.43  0.27  1.83  2.83  1.00
Mid          19  2.05  0.43  1.42  2.92  1.50
Low           9  1.72  0.43  1.17  2.42  1.25

Task 2
High         12  2.56  0.20  2.08  2.83  0.75
Mid          19  2.24  0.23  1.92  2.67  0.75
Low           9  1.69  0.46  1.00  2.17  1.17

Task 3
High         12  2.65  0.24  2.08  2.92  0.83
Mid          19  2.49  0.23  2.08  2.92  0.84
Low           9  2.33  0.47  1.42  2.75  1.33

Task 4
High         12  2.29  0.49  1.25  2.91  1.67
Mid          19  1.92  0.34  1.42  2.58  1.16
Low           9  1.60  0.50  1.00  2.50  1.50

Note. Scores: 1 = inadequate, 2 = able, 3 = good.

no raters were identified as misfitting, since all infit mean square values were within the range of 0.75 to 1.3 (Bond & Fox, 2007), which means no raters behaved erratically.

4.3. Research question 1: syntactic complexity measures across proficiency and pragmatic performance levels

This section reports the three syntactic complexity measures on each of the four tasks across different levels of proficiency and pragmatic performance. Table 5 shows descriptive statistics and results from one-way ANOVA for mean length of T-unit (MLTU), mean length of clause (MLC), and mean number of clauses (independent and dependent) per T-unit (CTU) across the three proficiency groups, based on their academic program levels and standardized proficiency test scores. Statistically significant differences were observed for MLTU, MLC, and CTU. Bonferroni post-hoc analyses showed significant differences between the Mid-Low and High-Low groups for both MLTU and MLC, and between the High-Low groups for CTU.
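Since the three measures are simple ratios over coded counts, a minimal sketch may help make them concrete. The function name and the example counts below are invented for illustration; in the study itself the counts came from CHAT-coded transcripts analyzed with CLAN:

```python
# Sketch: computing the three syntactic complexity measures from
# hypothetical coded counts (words, clauses, T-units per response).
# MLTU = words / T-units; MLC = words / clauses; CTU = clauses / T-units.

def complexity_measures(words: int, clauses: int, t_units: int) -> dict:
    return {
        "MLTU": words / t_units,  # global complexity: mean length of T-unit
        "MLC": words / clauses,   # phrasal-level complexity: mean length of clause
        "CTU": clauses / t_units, # subordination: mean clauses per T-unit
    }

# Example: a response coded as 48 words, 6 clauses, 4 T-units.
print(complexity_measures(words=48, clauses=6, t_units=4))
# {'MLTU': 12.0, 'MLC': 8.0, 'CTU': 1.5}
```

Note that the same MLTU can arise from long clauses (high MLC) or from heavy subordination (high CTU), which is why the three measures are reported separately.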

    Fig. 1. FACETS summary.

Table 3
Difficulty logit values of assessment tasks.

Task   Difficulty (logits)   Error   Infit (mean square)
1       0.30                 0.08    1.0
2      -0.03                 0.08    0.8
3      -0.94                 0.09    1.0
4       0.67                 0.08    1.1

Notes. Person separation reliability = 0.93; item separation reliability = 0.98; separation index = 7.51; fixed (all same) chi-square = 212.0; significance = 0.00.

Table 4
Severity logit values of three raters.

Rater   Severity (logits)   Error   Infit (mean square)
1        0.23               0.07    0.8
2       -0.23               0.07    1.2
3        0.00               0.07    1.0

Notes. Reliability = 0.87; separation index = 2.62; fixed (all same) chi-square = 23.6; significance = 0.00.

Regarding differences across the three proficiency levels, the variation in MLTU was quite noticeable (1.14 words between High and Mid, 2.12 words between Mid and Low), indicating that MLTU distinguished well between the three different proficiency levels. On the other hand, a scarce difference in MLC between High and Mid was found (a 0.01-word decrease), although a larger difference in MLC between Mid and Low (1.45 words) was reported. For CTU, a larger difference between High and Mid (0.14 clauses) was reported than between Mid and Low (0.09 clauses).
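The F ratios and η² values reported in Tables 5 and 6 follow the standard one-way ANOVA decomposition of variance. A minimal stdlib sketch, using invented scores rather than the study's data:

```python
# Minimal sketch of the one-way ANOVA statistics reported in Tables 5 and 6:
# F is the ratio of between-group to within-group mean squares, and eta-squared
# is the between-group share of total variance. Scores below are invented.

def one_way_anova(groups):
    """Return (F, eta_squared) for a list of score lists, one per group."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))
    eta_squared = ss_between / (ss_between + ss_within)
    return f_ratio, eta_squared

# Three hypothetical groups of MLTU-like scores:
f_ratio, eta_sq = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
print(round(f_ratio, 2), round(eta_sq, 2))  # 7.0 0.7
```

η² is reported alongside F because, with the unequal group sizes here (12, 19, 9), the effect size conveys the magnitude of group differences independently of sample size.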

Next, the degree of syntactic complexity was examined across different levels of pragmatic performance. Using the estimated ability logit values from the FACETS analysis, the learners were regrouped into three pragmatic performance levels, independent of their proficiency levels. After sorting the learners by the ability logit values, the top 12 examinees were categorized as the Prag_High group, the next 19 examinees as the Prag_Mid group, and the last nine examinees as the Prag_Low group. Examinees were sorted in this manner to match the number of participants in each proficiency level group with the number in each pragmatic performance group. A moderate Spearman's rho correlation (r = 0.61) between the original proficiency categories and the pragmatic performance level categories was found, indicating that learners' proficiency did not necessarily guarantee corresponding pragmatic performance. Analyzing this result more specifically, seven learners (18%) ranked lower on their pragmatic performance than their proficiency levels, which means they did not show corresponding pragmatic performance despite their established overall language proficiency. Interestingly, at the same time, six learners

(15%) showed pragmatic performance that was superior to their proficiency levels.

Table 6 shows descriptive statistics and results from one-way ANOVA for MLTU, MLC, and CTU across the three different

pragmatic performance groups. Statistically significant differences were observed for MLTU, MLC, and CTU. Bonferroni post-hoc analyses showed significant differences across all groups for MLTU, the Mid-Low and High-Low groups for MLC, and the High-Low group for CTU. Noticeable differences in MLTU across all three pragmatic performance levels were also reported (1.17 words between High and Mid, 2.61 words between Mid and Low), and these differences were larger than those reported in MLTU across the proficiency levels. This finding indicates that MLTU was general enough to distinguish between the three different levels of proficiency and pragmatics. In terms of phrasal elaboration, a greater difference in MLC was found especially between High and Mid pragmatic performance (0.29 words) compared to the much smaller difference between High and Mid proficiency levels (0.01 words). This result indicates MLC better distinguished between High and Mid pragmatic performance, compared to High and Mid proficiency levels. For CTU, in contrast to the small difference between Mid and Low proficiency levels (0.09 clauses), a larger difference between Mid and Low pragmatic performance (0.22 clauses) was found, indicating CTU distinguished between Mid and Low pragmatic performance well.
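The regrouping procedure described above amounts to ranking learners by their ability estimates and slicing the ranking into groups of 12, 19, and 9. A small sketch, using hypothetical logit values rather than the study's FACETS estimates:

```python
# Sketch of the regrouping step: sort learners by their FACETS ability logits
# (values here are randomly generated, purely hypothetical) and split the
# ranking 12/19/9 to mirror the sizes of the proficiency groups.
import random

random.seed(1)
learners = [(f"L{i:02d}", round(random.uniform(-2.0, 2.0), 2)) for i in range(1, 41)]

ranked = sorted(learners, key=lambda pair: pair[1], reverse=True)
prag_high, prag_mid, prag_low = ranked[:12], ranked[12:31], ranked[31:]
print(len(prag_high), len(prag_mid), len(prag_low))  # 12 19 9
```

Matching the group sizes this way keeps the proficiency and pragmatic-performance groupings directly comparable in the analyses that follow.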

4.4. Research question 2: correlations between complexity measures across proficiency and pragmatics

The syntactic complexity measures were correlated with both proficiency and pragmatic performance. Table 7 shows Spearman's rho correlations among the variables across all tasks combined. MLTU, MLC, and CTU showed statistically significant relationships with both proficiency and pragmatics, but the relationships were not strong. This result indicates that global complexity from mean length of T-unit (MLTU), phrasal-level complexity (MLC), and subordination at the clause level (CTU) were not strongly related to increases in either proficiency or pragmatic performance, potentially suggesting non-linear relationships. More specifically, the highest correlations were seen for MLTU with both proficiency and pragmatic performance (0.418 with proficiency, 0.436 with pragmatics), compared to MLC (0.333 with proficiency, 0.301 with pragmatics) and CTU (0.248 with proficiency, 0.303 with pragmatics). This finding suggests that increases in MLTU were more closely related to the different levels of proficiency and pragmatic performance, compared to MLC and CTU. Additionally, the different magnitudes of the relationships between each of the complexity measures and the two criterion measures confirm that the three complexity measures are distinct sub-constructs of syntactic complexity. Lastly, except for MLC, the relationships between the two complexity measures (MLTU, CTU) and pragmatic performance were stronger than those with proficiency, suggesting that increases in MLTU and CTU were more sensitive to the different pragmatic performance levels than to the proficiency levels.
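Spearman's rho, used throughout this section, is Pearson's correlation computed on ranks, with tied observations (such as repeated ordinal level codes) assigned their average rank. A self-contained sketch with invented MLTU values and proficiency codes:

```python
# Stdlib-only sketch of Spearman's rho (rank correlation with average ranks for
# ties), the statistic used to relate the complexity indices to the ordinal
# proficiency and pragmatic-performance levels. All data below are invented.

def average_ranks(values):
    """Rank values 1..n, assigning tied values the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the average ranks of x and y."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical MLTU values paired with ordinal proficiency codes (1=Low..3=High):
mltu = [6.2, 8.1, 9.5, 10.4, 11.0, 12.3]
prof = [1, 1, 2, 2, 3, 3]
print(round(spearman_rho(mltu, prof), 3))  # 0.956
```

Because the criterion measures are ordinal group memberships rather than interval scores, a rank-based coefficient is the appropriate choice here.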

Table 5
Syntactic complexity measures across proficiency levels.

Proficiency   N    Mean    SD     Min    Max     F        p       η²

Mean length of T-unit (MLTU)
  High        12   11.75   5.23   6.75   21.00   15.762   0.001   0.17
  Mid         19   10.61   5.29   5.94   20.00
  Low          9    8.49   5.07   2.00   17.43
  All         40   10.57   4.91   2.00   21.00

Mean length of clause (MLC)
  High        12    7.50   1.27   4.88   11.00   14.632   0.001   0.16
  Mid         19    7.51   1.49   5.00   12.33
  Low          9    6.06   1.43   2.00    9.30
  All         40    7.19   1.53   2.00   12.33

Mean number of clauses per T-unit (CTU)
  High        12    1.61   0.33   1.00    2.50    3.258   0.041   0.04
  Mid         19    1.47   0.36   1.00    2.89
  Low          9    1.38   0.37   1.00    3.00
  All         40    1.50   0.36   1.00    3.00

Table 6
Syntactic complexity measures across pragmatic performance levels.

Pragmatic performance   N    Mean    SD     Min    Max     F       p       η²

Mean length of T-unit (MLTU)
  Prag_High             12   11.86   5.23   7.50   21.00   21.98   0.001   0.22
  Prag_Mid              19   10.69   5.50   5.94   16.00
  Prag_Low               9    8.08   5.07   2.00   16.50
  All                   40   10.57   4.26   2.00   21.00

Mean length of clause (MLC)
  Prag_High             12    7.62   1.26   5.27   11.00    9.312  0.001   0.11
  Prag_Mid              19    7.33   1.57   4.88   12.33
  Prag_Low               9    6.29   1.43   2.00    8.50
  All                   40    7.19   1.53   2.00   12.33

Mean number of clauses per T-unit (CTU)
  Prag_High             12    1.60   0.33   1.00    2.89    6.953  0.001   0.08
  Prag_Mid              19    1.51   0.36   1.00    2.55
  Prag_Low               9    1.29   0.37   1.00    3.00
  All                   40    1.50   0.36   1.00    3.00

5. Discussion

The study yielded the following major findings. Firstly, learners' written pragmatic performance was not closely associated with their proficiency levels, with a Spearman's rho correlation of 0.61, indicating that some learners did not possess corresponding pragmatic competence according to their proficiency levels. This finding contributes to understanding the relationship between pragmatics and proficiency both theoretically and empirically. Pragmatics has been theoretically conceptualized as part of L2 proficiency in previous language competence models (e.g., Bachman & Palmer, 1996), and an inconclusive relationship between pragmatics and proficiency has been reported in previous studies (Matsumura, 2003; Roever, 2006; Taguchi, 2007a, 2011; Takahashi, 2005). The result of the present study suggests that although pragmatics and proficiency are related to some extent, they are clearly distinct constructs. It is evident that L2 proficiency alone does not guarantee equivalent written pragmatic performance in an EAP setting.

Secondly, the three syntactic complexity measures functioned distinctly across proficiency and pragmatics. Overall, MLTU distinguished well between the three different levels of proficiency and pragmatic performance. Yet, MLC (i.e., the phrasal-level complexity measure) tapped a more specific source of subclausal complexification, particularly for different pragmatic performance levels. The magnitudes of the three complexity measures' differences between pragmatic performance levels were more noticeable than those between proficiency levels. For example, a greater difference in mean length of clause (MLC, a 0.29-word increase) was found between High and Mid pragmatic performance than the scarce difference (a 0.01-word decrease) between High and Mid proficiency levels, especially compared to the relatively similar differences in MLTU between High and Mid pragmatic performance (1.17 words) and High and Mid proficiency levels (1.14 words). The noticeable difference in MLC across different pragmatic performance levels supports the conclusion that pragmatically more advanced learners produced more words at the phrasal level, similar to the findings reported in Bardovi-Harlig (1999). It is possible that they utilized a greater variety of linguistic resources to express pragmatic functions at the phrasal level, including more modal verbs, past tense forms, or progressive aspect forms. Bardovi-Harlig argued that pragmatically more advanced learners use a more diverse tense–aspect system in relation to diverse pragmatic meaning. This often results in more words, as seen in two example sentences: "I will take the course" (5 words and 1 clause) vs. "I am thinking of taking the course" (7 words and 1 clause). The subclausal complexity in these particular examples is evident at the phrasal level and can be measured only via MLC, which taps a more narrowly defined source of complexification, rather than via the general length-based complexity measure (MLTU). This finding further supports Norris and Ortega's (2009) argument for the importance of measuring syntactic complexity multidimensionally with distinct sources of complexification.

The mean number of clauses per T-unit (CTU) shows complexity via subordination. In this study, pragmatically proficient learners also produced more clauses per T-unit, possibly due to more bi-clausal or conditional mitigations used to convey various pragmatic meanings. Interestingly, a greater gap in CTU was found between Mid and Low pragmatic performance (0.22 clauses) than between Mid and High pragmatic performance (0.09 clauses). This indicates that learners with intermediate pragmatic performance used more clauses in each utterance than those with low pragmatic performance, but this was not the case for the increase from intermediate to high pragmatic performance. This finding potentially implicates distinct developmental rates of syntactic complexity for different pragmatic performance levels.

Table 7
Correlations between syntactic complexity and two criterion measures.

         Proficiency   Pragmatic performance
MLTU     0.418*        0.436*
MLC      0.333*        0.301*
CTU      0.248*        0.303*

Note. *p < 0.01.

Thirdly, positive relationships between the three complexity measures and the two criterion measures (proficiency and pragmatic performance) were found. However, the relationships were not very strong, indicating potentially non-linear relationships among the variables. This explanation is supported by the unequal average increases in the three syntactic complexity measures across the three levels of the two criterion measures, as discussed above. Taken together, one can speculate, based on these findings, that each syntactic feature might have a different developmental pattern which is not necessarily linear (e.g., Wolfe-Quintero et al., 1998). Further research on syntactic complexity, with rich research designs that include longitudinal data as well as comparisons with L1 baseline groups, will be needed to confirm this speculation.

Slightly different magnitudes of the relationships were found between each complexity measure and the two criterion measures. Specifically, except for phrasal-level complexity (MLC), MLTU and CTU showed stronger relationships to pragmatics than to proficiency, which further confirms that pragmatically advanced learners produced more words and clauses per T-unit. The consistent gaps in the correlation coefficients between the complexity measures and the two criterion measures not only suggest that the three complexity measures indeed tapped distinct sources of syntactic complexification, but also that the two criterion measures were somewhat independent.

Lastly, the four written pragmatic assessment tasks varied in terms of difficulty. The task of writing an e-mail refusing a professor's request was the easiest task, which supports previous studies' findings that indirect refusals take less cognitive demand due to more familiarity with the situation and routinized refusal expressions (e.g., Beebe, Takahashi, & Uliss-Weltz, 1990; Taguchi, 2007b). The most difficult task was writing constructive comments on a cover letter written by a classmate, potentially due to learners' unfamiliarity with the situation in both L1 and L2, which supports the previous research finding that familiarity of situations becomes a source of pragmatic task difficulty (e.g., Taguchi, 2007a). Among the four criteria of the rating category, the third criterion used for all tasks, appropriate use of formulaic linguistic expression, was the most difficult. Taken together, because L2 proficiency does not automatically guarantee concurrent written pragmatic performance, especially when learners are not familiar with pragmatic situations, explicit pedagogical attention to various pragmatic situations and linguistic expressions for appropriate pragmatic meaning is needed.

6. Conclusion

The previous research that examined the relationships among pragmatics, grammar, and proficiency lacked an explicit focus on L2 learners' pragmatic production elicited from authentic pragmatic tasks. Addressing this research gap, extensive efforts were made in this study to design authentic pragmatic tasks with valid task-dependent rating criteria to measure learners' pragmatic performances. As a result, as evidenced by the results from the MFRM analysis, the validity and reliability of measuring learners' pragmatic performances were ensured, which enabled us to examine the relationship between pragmatic performance and proficiency systematically. Furthermore, in order to examine the relationship between pragmatics and grammar, three distinct complexity measures were employed to tap various aspects of syntactic complexity in learners' pragmatic production. Although the focus on syntactic complexity and the written pragmatic production elicited from the particular EAP pragmatic tasks will limit the generalization of results, the current study's findings contribute to understanding how pragmatics, grammar, and proficiency are related. With an emphasis on pragmatic performance considering both pragmalinguistics and sociopragmatics, the present study showed that pragmatics and proficiency are distinct constructs, although related to some extent. Pragmatically advanced learners utilized various syntactic features, as shown by the degrees of the complexity measures across the pragmatic performance levels, which differed from those across proficiency levels.

Several areas remain for future research. Firstly, the present study employed three types of complexity measures: global, phrasal-level, and subordination complexity measures. More empirical research will be needed to further examine how other forms of syntactic complexity are related to pragmatic performance. In addition to syntactic complexity, accuracy is another widely used global measure to examine learners' language production, but it was not examined in this study. To what extent accuracy plays an important role in achieving various pragmatic functions is an empirical question that merits further attention. Secondly, different results might be found for learners' spoken pragmatic production or tasks covering different genres or situational variables that influence politeness, such as interlocutors' power difference, social distance, and the degree of imposition (Brown & Levinson, 1987). Furthermore, future research should explore a broader range of resources utilized in L2 learners' pragmatic production to clarify the relationships among pragmatics, grammar, and proficiency. Research into L2 pragmatics in interaction (Kasper, 2006) is one such example, as shown in an increasing body of research on a wide range of interactional resources utilized in spoken interaction using Conversation Analysis (e.g., Huth, 2006; Ishida, 2009; Ochs, Schegloff, & Thompson, 1996; Ross & Kasper, 2013; Sacks, Schegloff, & Jefferson, 1974).

The current study's findings have the following pedagogical implications for language teachers in an EAP setting. More attention needs to be paid to explicit L2 pragmatic instruction regardless of learners' proficiency. The current study suggests that learners' L2 proficiency does not necessarily guarantee concomitant pragmatic performance. Additionally, despite their established understanding of linguistic forms, learners are not necessarily able to use various syntactic features for pragmatic meaning. Thus, teaching various aspects of grammar focusing on its form, meaning, and use is essential (Larsen-Freeman, 2014). In addition to grammar, EAP pragmatics involves other issues. For example, students might not have institutional knowledge of a university setting, such as the understanding of an appropriate timeline when requesting a recommendation letter, or might not be familiar with certain genres of writing, such as writing a cover letter to apply for a job. Therefore, teaching various aspects of EAP pragmatics will be beneficial for students. The authentic EAP pragmatic tasks and task-dependent rating criteria developed in this study can serve as useful teaching materials as well.

    Acknowledgments

My sincere appreciation goes to Dr. John M. Norris and Dr. Lourdes Ortega for their critical feedback and guidance throughout the various stages of this study. This study was funded by the Graduate Student Organization, University of Hawai'i at Mānoa. Partial preliminary results were presented at the conference of the American Association for Applied Linguistics in Atlanta in 2010.

    Appendix A. Pragmatic assessment tasks

    Task 1: write a recommendation letter request to a professor

    Situation: You found out there is a research award opportunity, and you are planning to apply for this award. To apply forthis award, you need a recommendation letter from an academic advisor.

Task: Please read the information about the award below. Then, you will write an e-mail to your academic advisor (Professor Jack Brown, [email protected]) to request a recommendation letter.

Time: You have 10 min to complete the task.
Product: You will write a recommendation letter request e-mail to Professor Jack Brown to apply for the Arts & Sciences Student Research Awards.
Information about the award:

    Task 2: write an e-mail to a potential employer to send your application packet

Situation: You have been preparing to apply for an interpreter job at City Council. Your résumé and cover letter are ready to send, and all application documents should be sent by e-mail.

Task: You will send an e-mail to the Human Resource Manager, Sarah Brown ([email protected]), to send your job application documents, including a cover letter and your résumé.

Time: You have 5 min to complete the task.
Product: In order to apply for an interpreter job, you will write an e-mail to the Human Resource Manager, Sarah Brown, to send your application packet.

Task 3: write an e-mail to refuse a professor's request to help with your classmate's class project

Situation: You received an e-mail from Professor Jack Brown (see the e-mail below). But you have a very busy schedule these days, so you cannot help your classmate. How would you reply to the professor's e-mail?

Task: Write a reply e-mail to your professor to refuse the request.
Time: You have 5 min to complete the task.

Product: You will write an e-mail to Professor Jack Brown to refuse his request.

    Task 4: write constructive comments on a cover letter

Situation: You want to apply for a job sometime soon, so you need to know how to write a cover letter.
Task: Now, you have an example cover letter below that was written by your classmate for an internship job at City Council to Human Resource Director, Harry Johnson. Your task is to write constructive comments on the cover letter to find out how this cover letter can be improved. Think about important criteria and elements of writing a cover letter, and how you would have done differently.
Time: You have 15 min to complete the task.
Product: You will write a comment on Jessie's cover letter that you will give to Jessie.
Example cover letter written by your classmate Jessie:

Appendix B. Task-dependent rating criteria

1. Write an e-mail to request a recommendation letter.

3 (Good)
Tone of e-mail: Maintain a professional and polite tone consistently throughout the e-mail (e.g., not to deprecate yourself for lack of knowledge, not to hurry a professor, not to sound pushy, aggressive, and begging for help desperately, not to assume that a professor will write a letter for you)
Clear message delivery/content knowledge: Include a clear/concise purpose and highlight a main point (e.g., provide an appropriate amount of background information, put a main point in its own line/paragraph rather than in the bottom or last of the message); show evidence of knowledge of a recommendation letter (e.g., a degree of imposition of asking for a letter to a professor) and award applications
Formulaic linguistic expression: Use polite linguistic expressions for request (e.g., "I was wondering if you can-", "would it be possible for you to-"); use linguistic expressions that can reduce imposition of request (e.g., "if you have time", "if possible", "I know that you're extremely busy"); use good/acceptable grammar in general, good spelling
E-mail format: Use a clear and informative subject that indicates a purpose of e-mail; use appropriate salutation and term of address (e.g., Dear, Hello, Dr., Professor); briefly introduce yourself if a professor does not know you well; use appropriate and courteous closing

2 (Able)
Tone of e-mail: Inconsistently maintain a professional tone throughout the e-mail; sound more or less polite in general, but an informal tone is present
Clear message delivery/content knowledge: Lack a clear/concise purpose of writing an e-mail (e.g., wordy/unnecessary explanation of why he/she is qualified for an award); show evidence of inconsistent content knowledge although the e-mail includes a clear purpose
Formulaic linguistic expression: Use some and/or simple linguistic expressions for request and to reduce imposition, but they do not sound polite enough or appropriate; use some unconventional linguistic expressions; occasional grammar errors and misspelling
E-mail format: Some elements of the e-mail format are present, but they are not used appropriately (e.g., use of "sir" to a professor, an unclear subject, "I'm waiting for your reply" as a closing)

1 (Inadequate)
Tone of e-mail: The e-mail sounds too casual and informal; lack a professional tone throughout the e-mail (e.g., impose the importance of receiving a good letter from a professor)
Clear message delivery/content knowledge: The e-mail does not have a clear purpose; show lack of knowledge of award applications (e.g., do not know an award application procedure, do not recognize the imposition of recommendation letter request)
Formulaic linguistic expression: Use inappropriate linguistic expressions for request, and sound very direct or imposing (e.g., "I need a recommendation letter"); frequent grammatical errors and misspelling
E-mail format: Either few elements of the e-mail format are present, or none/few of the elements are appropriately used

2. Write an e-mail to send an application packet.

3 (Good)
Tone of e-mail: Maintain a professional tone consistently throughout the e-mail (e.g., not to deprecate yourself for lack of knowledge, not to hurry an employer for a reply, not to sound pushy, aggressive, and begging for a job desperately, not to assume that an employer will have an interview immediately)
Clear message delivery/content knowledge: Include a clear/concise purpose and/or highlight a main point (e.g., put a main point in its own line/paragraph rather than in the bottom or last of the message); show evidence of knowledge of writing an e-mail to apply for a job (e.g., highlight important background information, show interest, specify job category, briefly introduce yourself)
Polite formulaic expression: Use polite/appropriate conventional linguistic expressions for requesting to look at attachments (e.g., "Please find the attached files") or statement of sending an application packet (e.g., "I'm sending you-"); use good/acceptable grammar in general, good spelling
E-mail format: Use a clear and informative subject that indicates a purpose of e-mail; use appropriate salutation and term of address (e.g., Dear, Mr., Ms.); briefly introduce yourself; use appropriate and courteous closing

2 (Able)
Tone of e-mail: Inconsistently maintain a professional tone throughout the e-mail; sound more or less polite in general, but an informal tone is present
Clear message delivery/content knowledge: Lack a clear/concise purpose of writing an e-mail (e.g., wordy/unnecessary background information, why he/she is qualified for a job); show evidence of inconsistent content knowledge although the e-mail includes a clear purpose
Polite formulaic expression: Use some and/or simple linguistic expressions for request and statement, but they do not sound polite enough or appropriate; use some unconventional linguistic expressions (e.g., "I will appreciate hearing from you"); occasional grammar errors and misspelling
E-mail format: Some elements in the e-mail format are present, but they are not used appropriately (e.g., absence of term of address, an unclear subject, use of first name "Sarah", "I'm waiting for your reply" as a closing line)

1 (Inadequate)
Tone of e-mail: Sound too casual and informal; lack a professional tone throughout the e-mail (e.g., impose to give a work opportunity)
Clear message delivery/content knowledge: The e-mail does not have a clear purpose; show lack of knowledge of the job application process (e.g., ask directly for an immediate/prompt reply)
Polite formulaic expression: Use inappropriate linguistic expressions, and sound very direct or imposing (e.g., "if you choose me, you will not regret"); frequent grammatical errors and misspelling
E-mail format: Either very few elements of the e-mail format are present, or none of the elements are appropriately used

3. Write an e-mail to refuse a professor's request.

3 (Good)
Tone of e-mail: Maintain a professional and polite tone consistently throughout the e-mail (e.g., not to sound too apologetic)
Clear message delivery/content knowledge: Include a clear/concise message (i.e., refusal to the professor's request) with an appropriate amount of information; show evidence of knowledge of refusing a professor's request (e.g., recognize it is a face-threatening situation, provide explanation, suggest alternative solutions)
Polite formulaic expression: Use polite/appropriate conventional linguistic expressions for apology (e.g., "I'm afraid that I cannot help", "I don't think I can help") or suggestions (e.g., "What about-", "Is that okay if I-"); use good/acceptable grammar in general, good spelling
E-mail format: Use appropriate salutation and term of address (e.g., Dear, Professor, Dr.; "Jack" can be acceptable since a relationship between a professor and a student is well established); use appropriate and courteous closing

2 (Able)
Tone of e-mail: Inconsistently maintain a professional and polite tone throughout the e-mail
Clear message delivery/content knowledge: State an unclear purpose of writing an e-mail (e.g., intention of refusal is not clear); provide unclear accounts; include wordy and unnecessary explanation
Polite formulaic expression: Use some and/or simple linguistic expressions for apology/suggestions, but they do not sound polite enough or appropriate; use some unconventional linguistic expressions; occasional grammar errors and misspelling
E-mail format: Some elements of the e-mail format are present, but they are not used appropriately

1 (Inadequate)
Tone of e-mail: Sound too casual and informal; lack a professional and polite tone throughout the e-mail
Clear message delivery/content knowledge: The e-mail does not have a clear intention of refusal; show lack of knowledge of how to appropriately refuse
Polite formulaic expression: Use very simple (or no) linguistic expressions for apology that sound quite rude; frequent grammatical errors and misspelling
E-mail format: Either very few elements of the e-mail format are present, or none of the elements are appropriately used

4. Give comments/suggestions on a classmate's cover letter.

3 (Good)
Tone of giving comments/suggestions: Maintain a respectful and polite tone throughout giving comments to a classmate (e.g., not to sound too opinionated, strong, reproachful, and pushy)
Clear message delivery: Deliver comments and suggestions clearly with an appropriate amount of explanation
Formulaic linguistic expression: Use polite linguistic expressions for giving suggestions and comments (e.g., "I think you can-", "It would be a good idea-", "If you do-", "You can consider to-", "You could"); use good/acceptable grammar in general, good spelling
Knowledge of writing a cover letter: Show knowledge of the following elements: basic format (e.g., term of address); brief introduction; emphasize important selling points that are relevant for the position concisely; do not assume to have an interview automatically, but ask for an interview politely; keep a professional and formal tone throughout the cover letter (e.g., not using "thanks", not to sound desperate)

2 (Able)
Tone of giving comments/suggestions: Inconsistently maintain a respectful and polite tone throughout the e-mail; sound more or less polite in general, but an inappropriate tone is present
Clear message delivery: Comments are more or less clear, but unclear sentences are present
Formulaic linguistic expression: Use some, although not frequent, and/or simple linguistic expressions that sound rather strong for giving comments/suggestions (e.g., "you should-", "you must-"); use some unconventional linguistic expressions; occasional grammar errors and misspelling
Knowledge of writing a cover letter: Comments show knowledge of some elements of a cover letter mentioned above, but they are not explained enough

1 (Inadequate)
Tone of giving comments/suggestions: Sound too strong and directive; lack a respectful and polite tone throughout the e-mail
Clear message delivery: Comments are not clear and not easily understandable; comments include no/few explanations
Formulaic linguistic expression: Use inappropriate linguistic expressions for giving comments/suggestions frequently that sound quite rude and strong (e.g., "You should"); frequent grammatical errors and misspelling
Knowledge of writing a cover letter: Comments include very few elements of a cover letter, or none of the elements are appropriately explained

  • S.J. Youn / System 42 (2014) 270287286References

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Bardovi-Harlig, K. (1999). Exploring the interlanguage of interlanguage pragmatics: A research agenda for acquisitional pragmatics. Language Learning, 49, 677–713.
Bardovi-Harlig, K. (2000). Tense and aspect in second language acquisition: Form, meaning, and use. Oxford: Blackwell.
Bardovi-Harlig, K. (2001). Empirical evidence of the need for instruction in pragmatics. In K. R. Rose, & G. Kasper (Eds.), Pragmatics in language teaching (pp. 13–32). New York: Cambridge University Press.
Bardovi-Harlig, K., & Dörnyei, Z. (1998). Do language learners recognize pragmatic violations? Pragmatic vs. grammatical awareness in instructed L2 learning. TESOL Quarterly, 32, 233–259.
Bardovi-Harlig, K., & Hartford, B. S. (1993). Learning the rules of academic talk: A longitudinal study of pragmatic development. Studies in Second Language Acquisition, 15, 279–304.
Beebe, L. M., Takahashi, T., & Uliss-Weltz, R. (1990). Pragmatic transfer in ESL refusals. In R. Scarcella, D. Andersen, & S. Krashen (Eds.), Developing communicative competence in a second language (pp. 55–74). New York: Newbury House.
Biber, D., Gray, B., & Poonpon, K. (2011). Should we use characteristics of conversation to measure grammatical complexity in L2 writing development? TESOL Quarterly, 45, 5–35.
Bond, T. G., & Fox, C. M. (2007). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge: Cambridge University Press.
Chapelle, C. A., & Duff, P. A. (2003). Some guidelines for conducting quantitative and qualitative research in TESOL. TESOL Quarterly, 37, 157–178.
Fulcher, G., & Márquez Reiter, R. (2003). Task difficulty in speaking tests. Language Testing, 20, 321–344.
Grabowski, K. C. (2009). Investigating the construct validity of a test designed to measure grammatical and pragmatic knowledge in the context of speaking. Unpublished Ph.D. dissertation, Columbia University.
Housen, A., & Kuiken, F. (2009). Complexity, accuracy, and fluency in second language acquisition. Applied Linguistics, 30, 461–473.
Hudson, T., Detmer, E., & Brown, J. D. (1992). A framework for testing cross-cultural pragmatics (Technical Report #2). Honolulu, HI: University of Hawaii, Second Language Teaching and Curriculum Center.
Hudson, T., Detmer, E., & Brown, J. D. (1995). Developing prototypic measures of cross-cultural pragmatics (Technical Report #7). Honolulu, HI: University of Hawaii, Second Language Teaching and Curriculum Center.
Huth, T. (2006). Negotiating structure and culture: L2 learners' realization of L2 compliment-response sequences in talk-in-interaction. Journal of Pragmatics, 38, 2025–2050.
Ishida, M. (2009). Development of interactional competence: Changes in the use of ne in L2 Japanese during study abroad. In H. T. Nguyen, & G. Kasper (Eds.), Talk-in-interaction: Multilingual perspectives (pp. 351–385). Honolulu, HI: National Foreign Language Resource Center, University of Hawaii.
Kasper, G. (2006). Speech acts in interaction: Towards discursive pragmatics. In K. Bardovi-Harlig, C. Félix-Brasdefer, & A. S. Omar (Eds.), Pragmatics and language learning (Vol. 11, pp. 281–314). Honolulu, HI: Second Language Teaching and Curriculum Center, University of Hawaii.
Kasper, G. (2009). L2 pragmatic development. In W. C. Ritchie, & T. K. Bhatia (Eds.), New handbook of second language acquisition (pp. 259–295). Leeds, UK: Emerald.
Kasper, G., & Roever, C. (2005). Pragmatics in second language learning. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning (pp. 317–334). New York: Routledge.
Kasper, G., & Rose, K. R. (2002). Pragmatic development in a second language. Malden: Blackwell Publishing.
Kasper, G., & Schmidt, R. (1996). Developmental issues in interlanguage pragmatics. Studies in Second Language Acquisition, 18, 149–169.
Larsen-Freeman, D. (2006). The emergence of complexity, fluency, and accuracy in the oral and written production of five Chinese learners of English. Applied Linguistics, 27, 590–619.
Larsen-Freeman, D. (2014). Teaching grammar. In M. Celce-Murcia, D. M. Brinton, & M. A. Snow (Eds.), Teaching English as a second or foreign language (4th ed., pp. 256–270). Boston, MA: National Geographic Learning.
Leech, G. (1983). Principles of pragmatics. Harlow: Longman.
Linacre, J. M. (1989). Many-faceted Rasch measurement. Chicago: MESA.
Linacre, J. M. (2006). Facets Rasch measurement computer program (Version 3.61.0) [Computer software]. Chicago: Winsteps.com.
Long, M. H., & Norris, J. M. (2000). Task-based teaching and assessment. In M. Byram (Ed.), Routledge encyclopedia of language teaching and learning (pp. 597–603). London: Routledge.
MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Matsumura, S. (2003). Modelling the relationships among interlanguage pragmatic development, L2 proficiency, and exposure to L2. Applied Linguistics, 24, 465–491.
McNamara, T. F. (1996). Measuring second language performance. New York: Addison Wesley Longman.
Niezgoda, K., & Röver, C. (2001). Pragmatic and grammatical awareness: A function of learning environment? In K. R. Rose, & G. Kasper (Eds.), Pragmatics in language teaching (pp. 63–79). New York: Cambridge University Press.
Norris, J. M. (2009). Task-based teaching and testing. In M. H. Long, & C. J. Doughty (Eds.), Handbook of language teaching (pp. 578–594). Cambridge: Blackwell.
Norris, J. M., & Ortega, L. (2009). Towards an organic approach to investigating CAF in instructed SLA: The case of complexity. Applied Linguistics, 30, 555–578.
Ochs, E., Schegloff, E. A., & Thompson, S. A. (1996). Interaction and grammar. Cambridge: Cambridge University Press.
Ortega, L. (2003). Syntactic complexity measures and their relationship to L2 proficiency: A research synthesis of college-level L2 writing. Applied Linguistics, 24, 492–518.
Ortega, L., Iwashita, N., Rabie, S., & Norris, J. M. (1999). Transcription and coding guidelines for a multilanguage comparison of measures of syntactic complexity. Unpublished paper. Honolulu: University of Hawaii, National Foreign Language Resource Center.
Roever, C. (2006). Validation of a web-based test of ESL pragmalinguistics. Language Testing, 23, 229–256.
Ross, S., & Kasper, G. (Eds.). (2013). Assessing second language pragmatics. Basingstoke, UK: Palgrave Macmillan.
Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50, 696–735.
Schauer, G. (2006). Pragmatic awareness in ESL and EFL contexts: Contrast and development. Language Learning, 56, 269–318.
Schmidt, R. (1983). Interaction, acculturation and the acquisition of communicative competence. In N. Wolfson, & E. Judd (Eds.), Sociolinguistics and second language acquisition (pp. 137–174). Rowley, MA: Newbury House.
Skehan, P. (1998). A cognitive approach to language learning. New York: Oxford University Press.
Taguchi, N. (2007a). Task difficulty in oral speech act production. Applied Linguistics, 28, 113–135.
Taguchi, N. (2007b). Development of speed and accuracy in pragmatic comprehension in English as a foreign language. TESOL Quarterly, 41, 313–338.
Taguchi, N. (2011). Do proficiency and study-abroad experience affect speech act production? Analysis of appropriateness, accuracy, and fluency. International Review of Applied Linguistics, 49, 265–293.
Takahashi, S. (1996). Pragmatic transferability. Studies in Second Language Acquisition, 18, 189–223.
Takahashi, S. (2001). The role of input enhancement in developing pragmatic competence. In K. R. Rose, & G. Kasper (Eds.), Pragmatics in language teaching (pp. 171–199). New York: Cambridge University Press.
Takahashi, S. (2005). Pragmalinguistic awareness: Is it related to motivation and proficiency? Applied Linguistics, 26, 90–120.
Thomas, J. (1983). Cross-cultural pragmatic failure. Applied Linguistics, 4, 91–112.
Wolfe-Quintero, K., Inagaki, S., & Kim, H.-Y. (1998). Second language development in writing: Measures of fluency, accuracy, and complexity. Honolulu, HI: University of Hawaii, Second Language Teaching and Curriculum Center.
Youn, S. J. (2010). From needs analysis to assessment: Task-based L2 pragmatics in an English for academic purposes setting. Unpublished manuscript. Honolulu: University of Hawaii.

