
Quantitative Report




Assessment 2: Analysis Portfolio

Section 1: Research Summary

Quantitative Summary

The effect of levels of processing on recognition of Chinese characters by non-native readers

Introduction

Recently, the number of non-native speakers learning Mandarin has soared (“Mandarin learning soars,” 2007). However, little research exists on how Chinese characters are remembered by non-native speakers.

Substantial research does exist on how levels of processing influence memory. One important theory is that deep processing produces better recall and recognition than shallow processing (Craik & Lockhart, 1972). Processing the orthographic features of words, e.g. their font, is described as shallow, whereas semantic processing is regarded as deep. Craik and Tulving (1975) state that even simple semantic processing benefits memory more than extensive structural analysis.

Morris et al. (1977) showed that a relatively shallow processing task (deciding if words rhyme) was more effective than a semantic task, when rhyming retrieval was required. However, both Lockhart (2002) and Craik (2002) pointed out that the most effective combination for recognition in the above study was semantic coding with semantic retrieval.

Craik and Tulving's orthographic tasks are simple, e.g. deciding whether words are printed in capital letters, whereas the semantic analysis in their experiments is complex, for example deciding whether words fit into sentences. Chinese characters provide opportunities for complex orthographic tasks because of their visual complexity (Schmitt, Pan & Tavassoli, 1994). By using Chinese characters we can compare complex orthographic tasks with semantically simple tasks and investigate whether simple semantic analysis really benefits memory more than complex visual analysis. Craik and Tulving's (1975) experiments presented known English words to native speakers of English, so another difference between the present study and theirs is that we are presenting new vocabulary to non-native speakers of a language. In addition, Chinese may well be processed in different areas of the brain from English because of its structural differences (Schmitt, Pan & Tavassoli, 1994), so this experiment presents an opportunity to find out whether Craik and Tulving's hypothesis is supported under different conditions.

The hypothesis for this experiment is that simple semantic processing will produce better recognition of Chinese characters than complex orthographic processing.

The null hypothesis is that there will be no difference in recognition between the two conditions.

Method

Design


A repeated measures design was adopted. The independent variable was processing depth, with the orthographic task being shallow and the semantic task being deep. The dependent variable was the number of characters correctly recognised.

Participants

Sixteen participants took part in the study. They were recruited from work colleagues and via the UDo website, and completed the tasks in the form of an internet survey. Overall figures for the age and sex of participants were unknown because some completed the survey anonymously.

Materials

Ten Chinese characters were chosen on the basis of complexity and meaning. The complexity of a Chinese character can be measured by the number of pen-strokes required to write it (Schmitt, Pan & Tavassoli, 1994); four-stroke characters were judged to be of an appropriate difficulty level. Characters with concrete meanings (e.g. fire, moon) were chosen so that participants could find associated words easily. An online questionnaire was created; example pages are in the appendix.

Procedure

In part one, participants were shown characters and asked either to describe them orthographically in as much detail as possible, or to write up to five associations with the meaning of the character. Participants were not informed that they would be tested on their recognition of the characters until part two, when they were given meanings and asked to select the correct character from a choice of four. This ensured that incidental learning was tested: Craik and Lockhart (1972) point out that under incidental learning conditions the researcher has control over the processing carried out, which is not the case when learning is intentional.

Results

The number of items correctly recognised was higher for the visual task than for the semantic task, so the experimental hypothesis was not supported and the null hypothesis was retained.

              Mean number correct   Standard deviation
Visual task   4.375                 0.71880
Semantic task 4.0                   0.36515

In fact, there was a large effect size in the direction opposite to that expected (d = -0.99). The data did not meet parametric assumptions (kurtosis for the semantic scores = 7.5; outliers present), so a Wilcoxon signed-ranks test was used. The difference in the opposite direction was not significant (Wilcoxon's T (N = 10) = 14, p = 0.145, two-tailed).
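As a cross-check outside SPSS, the descriptives and the Wilcoxon test can be reproduced in Python (a minimal sketch, assuming SciPy is installed; the scores are those listed in Appendix (iv)):

from scipy import stats

semantic = [4, 4, 4, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3, 4, 4, 4]  # score out of 5 per participant
visual   = [5, 5, 3, 5, 4, 4, 3, 5, 5, 4, 4, 5, 5, 4, 5, 4]

# Means: 4.0 (semantic) and 4.375 (visual)
print(sum(semantic) / len(semantic), sum(visual) / len(visual))

# Wilcoxon signed-ranks test: tied pairs (difference = 0) are dropped, leaving N = 10.
# The smaller rank sum should be 14; the p-value should be close to the SPSS
# asymptotic figure of 0.145, although SciPy's method selection may differ slightly.
statistic, p = stats.wilcoxon(semantic, visual)
print(statistic, p)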

Conclusion

The study did not support the hypothesis that a simple semantic task produces better recognition than a complex orthographic task.

This may have been due to the recognition task, or to the experimental design. Recognition involved being given the (English) meaning and choosing the correct character from four options, so it was itself a visual task. Participants could also have been tested by being given characters and asked for the meaning. However, the experiment was limited to a single dependent variable.

There were several ways the study could have been more tightly controlled. Timing relied on asking participants to take no more than 15 minutes overall; using timed questions would have increased reliability, but the free version of the survey software did not offer this option.

Using multiple choice questions in the recognition phase was not ideal, as there was no way to control for the amount of guesswork participants had done. It would be possible to increase the validity of the test by adding a ‘don’t know’ option to the choices, and subtracting a percentage of the number of incorrect answers to account for guesswork.
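For illustration only (no such correction was applied in this study), one common correction for guessing subtracts W/(k - 1) from the number of right answers, where W is the number of wrong answers and k the number of response options, so that pure guessing averages out to zero. A minimal sketch in Python:

def corrected_score(n_correct, n_wrong, n_options=4):
    # Correction for guessing: a 'don't know' response counts as neither
    # correct nor wrong, so it is not penalised.
    return n_correct - n_wrong / (n_options - 1)

# Example: 4 correct and 1 wrong out of 5 four-option items
print(corrected_score(4, 1))  # approximately 3.67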

The task was too easy overall: some participants scored full marks in both categories. It would have been useful to introduce a delay between the processing questions and the memory test, and the number of questions and the difficulty of the characters could have been increased.


APPENDIX

(i)

Invitation to Participate

My name is Otto Condliffe and I am currently studying the University Certificate in Psychology at the University of Derby. I am investigating processing of visual and semantic information.

You will be supplied with my email address should you have any questions. A debrief sheet will be given to you upon completion of the questionnaires with further information about the study.

Participation in the following study is voluntary. You can stop participating at any time and your data will be withdrawn. The collected data is confidential and your personal information will not be shared with anyone. All personal information will be deleted before the data is analysed. 

The study should take 15 minutes.

If you have any questions or would like to be informed of the results of this study please contact me by email. 

My name: Otto Condliffe
Email address: [email protected]

(ii)

1)

Informed consent

Please enter your participant code here. This should be the first letters of your first and last names, followed by the last two digits of your year of birth.

Example: John Smith, born 1972 = JS72

Your participant code will be used to identify your data anonymously should you wish to withdraw from the study.

By entering the participant code, you confirm that you agree to participate in the study, are 18 years old or over, and have read and understood the introduction to the study.


(iii) Survey examples

(Screenshots of the survey pages are not reproduced in this text version.)

(iv) Raw data

Participant   Semantically processed (/5)   Visually processed (/5)
1             4                             5
2             4                             5
3             4                             3
4             4                             5
5             4                             4
6             4                             4
7             5                             3
8             4                             5
9             4                             5
10            4                             4
11            4                             4
12            4                             5
13            3                             5
14            4                             4
15            4                             5
16            4                             4

(v) SPSS outputs

EXAMINE VARIABLES=semantic visual

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Explore

Notes

Output Created 09-Dec-2012 12:33:13

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

16


Missing Value Handling Definition of Missing User-defined missing values for

dependent variables are treated as

missing.

Cases Used Statistics are based on cases with no

missing values for any dependent

variable or factor used.

Syntax EXAMINE VARIABLES=semantic

visual

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Resources Processor Time 00 00:00:00.609

Elapsed Time 00 00:00:00.597

[DataSet0]


Case Processing Summary

Cases

Valid Missing Total

N Percent N Percent N Percent

semantic 16 100.0% 0 .0% 16 100.0%

visual 16 100.0% 0 .0% 16 100.0%

Descriptives

Statistic Std. Error

semantic Mean 3.9375 .11063

95% Confidence Interval for

Mean

Lower Bound 3.7017

Upper Bound 4.1733

5% Trimmed Mean 3.9306

Median 4.0000

Variance .196

Std. Deviation .44253

Minimum 3.00

Maximum 5.00

Range 2.00

Interquartile Range .00

Skewness -.392 .564

Kurtosis 3.616 1.091

visual Mean 4.5000 .15811


95% Confidence Interval for

Mean

Lower Bound 4.1630

Upper Bound 4.8370

5% Trimmed Mean 4.5556

Median 5.0000

Variance .400

Std. Deviation .63246

Minimum 3.00

Maximum 5.00

Range 2.00

Interquartile Range 1.00

Skewness -.904 .564

Kurtosis .027 1.091

semantic

semantic Stem-and-Leaf Plot

Frequency Stem & Leaf

2.00 Extremes (=<3)

.00 0 .


13.00 0 . 4444444444444

1.00 Extremes (>=5)

Stem width: 10.00

Each leaf: 1 case(s)


visual

visual Stem-and-Leaf Plot

Frequency Stem & Leaf

1.00 3 . 0

.00 3 .

6.00 4 . 000000

.00 4 .

9.00 5 . 000000000

Stem width: 1.00

Each leaf: 1 case(s)


EXAMINE VARIABLES=semantic visual

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.


Explore

Notes

Output Created 09-Dec-2012 12:33:48

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

16

Missing Value Handling Definition of Missing User-defined missing values for

dependent variables are treated as

missing.

Cases Used Statistics are based on cases with no

missing values for any dependent

variable or factor used.


Syntax EXAMINE VARIABLES=semantic

visual

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Resources Processor Time 00 00:00:00.608

Elapsed Time 00 00:00:00.622

[DataSet0]

Case Processing Summary

Cases

Valid Missing Total

N Percent N Percent N Percent

semantic 16 100.0% 0 .0% 16 100.0%

visual 16 100.0% 0 .0% 16 100.0%


Descriptives

Statistic Std. Error

semantic Mean 4.0000 .09129

95% Confidence Interval for

Mean

Lower Bound 3.8054

Upper Bound 4.1946

5% Trimmed Mean 4.0000

Median 4.0000

Variance .133

Std. Deviation .36515

Minimum 3.00

Maximum 5.00

Range 2.00

Interquartile Range .00

Skewness .000 .564

Kurtosis 7.500 1.091

visual Mean 4.3750 .17970

95% Confidence Interval for

Mean

Lower Bound 3.9920

Upper Bound 4.7580

5% Trimmed Mean 4.4167

Median 4.5000

Variance .517

Std. Deviation .71880

Minimum 3.00

Maximum 5.00


Range 2.00

Interquartile Range 1.00

Skewness -.731 .564

Kurtosis -.541 1.091

semantic

semantic Stem-and-Leaf Plot

Frequency Stem & Leaf

1.00 Extremes (=<3)

.00 0 .

14.00 0 . 44444444444444

1.00 Extremes (>=5)

Stem width: 10.00

Each leaf: 1 case(s)


visual

visual Stem-and-Leaf Plot

Frequency Stem & Leaf


2.00 3 . 00

.00 3 .

6.00 4 . 000000

.00 4 .

8.00 5 . 00000000

Stem width: 1.00

Each leaf: 1 case(s)


NPAR TESTS

/WILCOXON=visual WITH semantic (PAIRED)

/MISSING ANALYSIS.

NPar Tests

Notes

Output Created 09-Dec-2012 12:34:33

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

16

Missing Value Handling Definition of Missing User-defined missing values are

treated as missing.

Cases Used Statistics for each test are based on all

cases with valid data for the variable(s)

used in that test.


Syntax NPAR TESTS

/WILCOXON=visual WITH semantic

(PAIRED)

/MISSING ANALYSIS.

Resources Processor Time 00 00:00:00.016

Elapsed Time 00 00:00:00.004

Number of Cases Alloweda 112347

a. Based on availability of workspace memory.

[DataSet0]

Wilcoxon Signed Ranks Test


Ranks

N Mean Rank Sum of Ranks

semantic - visual Negative Ranks 8a 5.13 41.00

Positive Ranks 2b 7.00 14.00

Ties 6c

Total 16

a. semantic < visual

b. semantic > visual

c. semantic = visual

Test Statisticsb

semantic -

visual

Z -1.459a

Asymp. Sig. (2-tailed) .145

a. Based on positive ranks.

b. Wilcoxon Signed Ranks Test


References

Craik, F. I. M. (2002). Levels of processing: Past, present... and future? Memory, 10(5-6), 305-318. http://dx.doi.org/10.1080/09658210244000135

Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684. Retrieved November 25, 2012, from http://www.numyspace.co.uk/~unn_tsmc4/prac/labs/depth/craiklock.pdf

Craik, F. I. M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104(3), 268-294. Retrieved December 1, 2012, from http://www-pmhs.stjohns.k12.fl.us/teachers/higginj/0CF7DB48-0118C716.0/Chapter18_Craik.pdf

Lockhart, R. S. (2002). Levels of processing, transfer-appropriate processing, and the concept of robust encoding. Memory, 10(5-6), 397-403. http://dx.doi.org/10.1080/09658210244000225

Mandarin learning soars outside China. (2007). Retrieved December 2, 2012, from http://news.bbc.co.uk/1/hi/world/asia-pacific/6244763.stm

Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16(5), 519-533.

Schmitt, B. H., Pan, Y., & Tavassoli, N. T. (1994). Language and consumer memory: The impact of linguistic differences between Chinese and English. Journal of Consumer Research, 21(3), 419-431. Retrieved December 8, 2012, from http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=9501161805&site=ehost-live



Analysis Portfolio

Section 2: Analysis Exercises.

In this section you will be presented with 4 psychological studies. For each study you will be asked a series of questions designed to test your knowledge of research design, and where appropriate, your knowledge of data analysis and your ability to report and interpret the results of psychological studies. You should attempt each question.

For each study requiring data analysis you are required to conduct and then report the findings of an appropriate analysis of the data provided. You should screen the data prior to any analyses and routinely report appropriate estimates of effect size.

You should include any calculations and all SPSS outputs (data screening checks and statistical analyses) as appendices. Include the appendix (if appropriate) after each question.

Study 1

Researchers wished to test the effectiveness of a new technique for reducing hypertension. They tested the diastolic blood pressure, measured in millimetres of mercury (mmHg), of a group of people who suffered from hypertension before they took part in the therapy and again after the treatment. The table below shows the blood pressures of the patients.

Patient Before Treatment After Treatment

1 98 82

2 96 72


3 140 90

4 120 108

5 130 72

6 125 80

7 110 98

Table 2. The diastolic blood pressures (mmHg) of patients before and after treatment for hypertension.

i. What type of research design is this study? 1

Repeated measures.

ii. What is the independent variable? 1

The IV is whether the patients have had the treatment or not.

iii. What are the levels of the independent variable? 1

‘Before treatment’ and ‘after treatment’

iv. What is the dependent variable? 1

Diastolic blood pressure

v. State the Null Hypothesis (H0) 4


There will be no difference in blood pressure before and after treatment.

vi. State the Research or Experimental Hypothesis (H1) 4

Blood pressure will be lower after treatment than before.

vii. Create a Word table (or graph) of descriptive statistics for these data. 4

                  Mean   Standard deviation
Before treatment  117    16.442
After treatment   86     13.466

ix. Conduct an appropriate inferential test of the null hypothesis.

Fully describe the details of the inferential test. 3

The data meet the assumptions for parametric testing: skewness and kurtosis are within +/-2.5, so the distribution is approximately normal; the data are interval/ratio; and there are no outliers. We can therefore use a repeated-measures t-test.
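A minimal data-screening sketch in Python (assuming SciPy/NumPy; this is a cross-check, not part of the original SPSS output), reproducing the skewness, kurtosis and z-score checks described above:

import numpy as np
from scipy import stats

before = np.array([98, 96, 140, 120, 130, 125, 110])
after = np.array([82, 72, 90, 108, 72, 80, 98])

for name, scores in (("before", before), ("after", after)):
    skew = stats.skew(scores, bias=False)      # bias-corrected skewness (comparable to SPSS)
    kurt = stats.kurtosis(scores, bias=False)  # bias-corrected excess kurtosis (comparable to SPSS)
    z = stats.zscore(scores, ddof=1)           # z-scores for outlier screening
    print(name, round(skew, 3), round(kurt, 3), np.round(z, 3))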

What conclusion can you come to? 2

The analysis showed that diastolic blood pressure was significantly lower after treatment.

Give the statistical justification for this conclusion. 4


t(6) = 4.206, p = 0.003, one-tailed; d = 1.89, so the effect size is very large according to Cohen's criteria.
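The same test and effect size can be checked in Python (a sketch assuming SciPy/NumPy; the SPSS output in the appendix remains the analysis of record):

import numpy as np
from scipy import stats

before = np.array([98, 96, 140, 120, 130, 125, 110])
after = np.array([82, 72, 90, 108, 72, 80, 98])

# Repeated-measures (paired) t-test; halve the two-tailed p for the directional hypothesis
t, p_two_tailed = stats.ttest_rel(before, after)
print(t, p_two_tailed / 2)  # t is about 4.21; one-tailed p is about 0.003

# Effect size as calculated in the appendix: mean difference divided by the
# standard deviation of the baseline (before-treatment) scores
d = (before.mean() - after.mean()) / before.std(ddof=1)
print(d)  # about 1.89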

(25 Marks)


Study 1 - Appendix

Yellow Descriptives

Green Parametric checks

Grey Inferential test


GET

FILE='C:\Users\Lily and Otto\Documents\Assignment Q1.sav'.

DATASET NAME DataSet1 WINDOW=FRONT.

EXAMINE VARIABLES=Before After

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Explore


Notes

Output Created 06-Dec-2012 20:41:21

Comments

Input Data C:\Users\Lily and Otto\Documents\

Assignment Q1.sav

Active Dataset DataSet1

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

7

Missing Value Handling Definition of Missing User-defined missing values for

dependent variables are treated as

missing.

Cases Used Statistics are based on cases with no

missing values for any dependent

variable or factor used.

Syntax EXAMINE VARIABLES=Before After

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Resources Processor Time 00 00:00:02.013

Elapsed Time 00 00:00:01.982


[DataSet1] C:\Users\Lily and Otto\Documents\Assignment Q1.sav

Case Processing Summary

Cases

Valid Missing Total

N Percent N Percent N Percent

Diastolic BP before

treatment

7 100.0% 0 .0% 7 100.0%

Diastolic BP after treatment 7 100.0% 0 .0% 7 100.0%

Descriptives

Statistic Std. Error

Diastolic BP before

treatment

Mean 117.00 6.214

95% Confidence Interval for

Mean

Lower Bound 101.79

Upper Bound 132.21

5% Trimmed Mean 116.89

Median 120.00

Variance 270.333

Std. Deviation 16.442

Minimum 96

Maximum 140

Range 44


Interquartile Range 32

Skewness -.082 .794

Kurtosis -1.315 1.587

Diastolic BP after treatment Mean 86.00 5.090

95% Confidence Interval for

Mean

Lower Bound 73.55

Upper Bound 98.45

5% Trimmed Mean 85.56

Median 82.00

Variance 181.333

Std. Deviation 13.466

Minimum 72

Maximum 108

Range 36

Interquartile Range 26

Skewness .638 .794

Kurtosis -.665 1.587

Diastolic BP before treatment

Diastolic BP before treatment Stem-and-Leaf Plot


Frequency Stem & Leaf

2.00 0 . 99

5.00 1 . 12234

Stem width: 100

Each leaf: 1 case(s)


Diastolic BP after treatment

Diastolic BP after treatment Stem-and-Leaf Plot

Frequency Stem & Leaf

2.00 7 . 22

2.00 8 . 02

2.00 9 . 08

1.00 10 . 8

Stem width: 10

Each leaf: 1 case(s)


T-TEST PAIRS=Before WITH After (PAIRED)

/CRITERIA=CI(.9500)

/MISSING=ANALYSIS.

T-Test


Notes

Output Created 06-Dec-2012 20:43:22

Comments

Input Data C:\Users\Lily and Otto\Documents\

Assignment Q1.sav

Active Dataset DataSet1

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

7

Missing Value Handling Definition of Missing User defined missing values are

treated as missing.

Cases Used Statistics for each analysis are based

on the cases with no missing or out-of-

range data for any variable in the

analysis.

Syntax T-TEST PAIRS=Before WITH After

(PAIRED)

/CRITERIA=CI(.9500)

/MISSING=ANALYSIS.

Resources Processor Time 00 00:00:00.015

Elapsed Time 00 00:00:00.010

[DataSet1] C:\Users\Lily and Otto\Documents\Assignment Q1.sav


Paired Samples Statistics

Mean N Std. Deviation Std. Error Mean

Pair 1 Diastolic BP before

treatment

117.00 7 16.442 6.214

Diastolic BP after treatment 86.00 7 13.466 5.090

Paired Samples Correlations

N Correlation Sig.

Pair 1 Diastolic BP before

treatment & Diastolic BP

after treatment

7 .161 .730

Paired Samples Test

Paired Differences

Mean Std. Deviation Std. Error Mean

Pair 1 Diastolic BP before

treatment - Diastolic BP after

treatment

31.000 19.502 7.371


Paired Samples Test

Paired Differences

t

95% Confidence Interval of the

Difference

Lower Upper

Pair 1 Diastolic BP before

treatment - Diastolic BP after

treatment

12.964 49.036 4.206

Paired Samples Test

df Sig. (2-tailed)

Pair 1 Diastolic BP before

treatment - Diastolic BP after

treatment

6 .006

Z-values (for outlier checks)

Before treatment
BP      Z
98     -1.15559
96     -1.27723
140     1.39887
120      .18246
130      .79067
125      .48656
110     -.42574

After treatment
BP      Z
82      -.29704
72     -1.03965
90       .29704
108     1.63374
72     -1.03965
80      -.44557
98       .89113

Effect size

d = (mean1 - mean2) / SD of group 1 (control)
  = (117 - 86) / 16.442
  = 1.89


Study 2

Researchers compared two groups of 15 children on the time taken, in weeks, to learn how to ride a bicycle. The first group of children were shown a video of children cycling and then expected to learn without adult assistance. The second group were taken out by their parents who ran beside them and let go of the bicycle for increasingly longer periods until the child had learned. The researchers hypothesised that the second method would produce faster learning.

Video Parents

1 3

3 2

8 3

5 4

1 3

8 2

6 5

7 3

5 2

2 4

3 5

4 3

6 2

8 4

7 4

Table 3: The time taken in weeks by children learning to ride a bicycle when watching a video or being helped by their parents.


i. What type of research design is this study? 1

Independent measures

ii. What is the independent variable? 1

Method of learning

iii. What are the levels of the independent variable? 1

Learning from video and learning with parents

iv. What is the dependent variable? 1

Time in weeks to learn to ride

v. State the Null Hypothesis (H0) 4

There will be no difference in time taken to learn to ride between the two groups.

vi. State the Research or Experimental Hypothesis (H1) 4

Time spent learning to ride will be less for the group learning with parents than for the group learning from the video.

vii. Create a Word table (or graph) of descriptive statistics for these data. 4

         Mean   Standard deviation
Video    4.93   2.492
Parents  3.27   1.033

ix. Conduct an appropriate inferential test of the null hypothesis.

Fully describe the details of the inferential test. 3

The data meet the assumptions for parametric testing: skewness and kurtosis are within +/-2.5, so the distributions are approximately normal; the data are interval/ratio; and there are no outliers. Levene's test indicated unequal variances (F = 12.131, p = .002), so an independent-measures t-test with equal variances not assumed (Welch's correction) was used.

What conclusion can you come to? 2

The analysis showed that learning with parents was significantly faster than learning from a video only.

Give the statistical justification for this conclusion. 4

t(18.672) = 2.393, p = 0.0135, one-tailed; d = -0.87, a large effect size according to Cohen's criteria.
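As a rough check outside SPSS (a sketch assuming SciPy/NumPy), the Welch t-test and the pooled-SD effect size can be computed directly from Table 3:

import numpy as np
from scipy import stats

video = np.array([1, 3, 8, 5, 1, 8, 6, 7, 5, 2, 3, 4, 6, 8, 7])
parents = np.array([3, 2, 3, 4, 3, 2, 5, 3, 2, 4, 5, 3, 2, 4, 4])

# Levene's test indicated unequal variances, so equal_var=False (Welch's t-test)
t, p_two_tailed = stats.ttest_ind(video, parents, equal_var=False)
print(t, p_two_tailed / 2)  # t is about 2.39; one-tailed p is about 0.013

# Cohen's d using the pooled standard deviation (groups are the same size)
pooled_sd = np.sqrt((video.var(ddof=1) + parents.var(ddof=1)) / 2)
print((parents.mean() - video.mean()) / pooled_sd)  # about -0.87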

(25 Marks)


Study 2 – Appendix

Yellow Descriptives

Green Parametric checks

Blue Comparison of variance for independent measures

Red Inferential test


GET

FILE='C:\Users\Lily and Otto\Documents\Assignment Q2.sav'.

DATASET NAME DataSet1 WINDOW=FRONT.

EXAMINE VARIABLES=Time BY Method

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Explore


Notes

Output Created 06-Dec-2012 21:22:08

Comments

Input Data C:\Users\Lily and Otto\Documents\

Assignment Q2.sav

Active Dataset DataSet1

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

30

Missing Value Handling Definition of Missing User-defined missing values for

dependent variables are treated as

missing.

Cases Used Statistics are based on cases with no

missing values for any dependent

variable or factor used.

Syntax EXAMINE VARIABLES=Time BY

Method

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Resources Processor Time 00 00:00:01.794

Elapsed Time 00 00:00:01.161


[DataSet1] C:\Users\Lily and Otto\Documents\Assignment Q2.sav

Method of learning to ride

Case Processing Summary

Method of learning to ride

Cases

Valid Missing

N Percent N

Time in weeks to learn Video only 15 100.0% 0

With parents 15 100.0% 0

Case Processing Summary

Method of learning to ride

Cases

Missing Total

Percent N Percent

Time in weeks to learn Video only .0% 15 100.0%

With parents .0% 15 100.0%


Descriptives

Method of learning to ride Statistic

Time in weeks to learn Video only Mean 4.93

95% Confidence Interval for

Mean

Lower Bound 3.55

Upper Bound 6.31

5% Trimmed Mean 4.98

Median 5.00

Variance 6.210

Std. Deviation 2.492

Minimum 1

Maximum 8

Range 7

Interquartile Range 4

Skewness -.296

Kurtosis -1.245

With parents Mean 3.27

95% Confidence Interval for

Mean

Lower Bound 2.69

Upper Bound 3.84

5% Trimmed Mean 3.24

Median 3.00

Variance 1.067

Std. Deviation 1.033

Minimum 2


Maximum 5

Range 3

Interquartile Range 2

Skewness .282

Kurtosis -.917

Descriptives

Method of learning to ride Std. Error

Time in weeks to learn Video only Mean .643

95% Confidence Interval for

Mean

Lower Bound

Upper Bound

5% Trimmed Mean

Median

Variance

Std. Deviation

Minimum

Maximum

Range

Interquartile Range

Skewness .580

Kurtosis 1.121

With parents Mean .267

95% Confidence Interval for

Mean

Lower Bound

Upper Bound

5% Trimmed Mean


Median

Variance

Std. Deviation

Minimum

Maximum

Range

Interquartile Range

Skewness .580

Kurtosis 1.121

Time in weeks to learn

Stem-and-Leaf Plots

Time in weeks to learn Stem-and-Leaf Plot for

Method= Video only

Frequency Stem & Leaf


2.00 0 . 11

3.00 0 . 233

3.00 0 . 455

4.00 0 . 6677

3.00 0 . 888

Stem width: 10

Each leaf: 1 case(s)

Time in weeks to learn Stem-and-Leaf Plot for

Method= With parents

Frequency Stem & Leaf

4.00 2 . 0000

5.00 3 . 00000

4.00 4 . 0000

2.00 5 . 00

Stem width: 1

Each leaf: 1 case(s)


List of Z-scores

Group Score Z-Score

Video 1 -1.50709

Video 3 -.53477

Video 8 1.89601

Video 5 .43754

Video 1 -1.50709

Video 8 1.89601

Video 6 .92370

Video 7 1.40986


Video 5 .43754

Video 2 -1.02093

Video 3 -.53477

Video 4 -.04862

Video 6 .92370

Video 8 1.89601

Video 7 1.40986

Parents 3 -.53477

Parents 2 -1.02093

Parents 3 -.53477

Parents 4 -.04862

Parents 3 -.53477

Parents 2 -1.02093

Parents 5 .43754

Parents 3 -.53477

Parents 2 -1.02093

Parents 4 -.04862

Parents 5 .43754

Parents 3 -.53477

Parents 2 -1.02093

Parents 4 -.04862

Parents 4 -.04862

T-TEST GROUPS=Method(1 2)

/MISSING=ANALYSIS

/VARIABLES=Time

/CRITERIA=CI(.95).


T-Test

Notes

Output Created 06-Dec-2012 21:31:12

Comments

Input Data C:\Users\Lily and Otto\Documents\

Assignment Q2.sav

Active Dataset DataSet1

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

30

Missing Value Handling Definition of Missing User defined missing values are

treated as missing.

Cases Used Statistics for each analysis are based

on the cases with no missing or out-of-

range data for any variable in the

analysis.


Syntax T-TEST GROUPS=Method(1 2)

/MISSING=ANALYSIS

/VARIABLES=Time

/CRITERIA=CI(.95).

Resources Processor Time 00 00:00:00.016

Elapsed Time 00 00:00:00.008

[DataSet1] C:\Users\Lily and Otto\Documents\Assignment Q2.sav

Group Statistics

Method of learning to ride N Mean Std. Deviation

Time in weeks to learn Video only 15 4.93 2.492

With parents 15 3.27 1.033

Group Statistics

Method of learning to ride Std. Error Mean

Time in weeks to learn Video only .643

With parents .267


Independent Samples Test

Levene's Test for Equality of

Variances

F Sig.

Time in weeks to learn Equal variances assumed 12.131 .002

Equal variances not

assumed

Independent Samples Test

t-test for Equality of Means

t df Sig. (2-tailed)

Time in weeks to learn Equal variances assumed 2.393 28 .024

Equal variances not

assumed

2.393 18.672 .027

Independent Samples Test

t-test for Equality of Means

Mean Difference

Std. Error

Difference

Time in weeks to learn Equal variances assumed 1.667 .696

Equal variances not

assumed

1.667 .696


Independent Samples Test

t-test for Equality of Means

95% Confidence Interval of the

Difference

Lower Upper

Time in weeks to learn Equal variances assumed .240 3.093

Equal variances not

assumed

.207 3.126

Effect size:

d = (mean2 - mean1) / pooled SD

Pooled SD = sqrt((s1² + s2²) / 2) = sqrt((2.492² + 1.033²) / 2) = 1.907

d = (3.27 - 4.93) / 1.907 = -0.87


Study 3

Researchers were interested in the relationship between the proportion of smokers in a country in 1930 and the number of deaths (in males) per million from lung cancer in 1950. The researchers predict a positive relationship between the two variables. The raw data can be found in the table below.

Country Mean yearly cigarette consumption (1930)

Male death rate (per million) from lung cancer

(1950)

Iceland 240 60

Norway 250 90

Sweden 310 120

Denmark 370 160

Australia 450 160

Holland 450 240

Canada 500 150

Switzerland 530 250

Finland 1110 350

Great Britain 1130 460

United States 1280 190

Table 7: The mean yearly consumption of cigarettes in 1930 and the deaths from lung cancer in 1950 in males in 11 countries.

i. What type of research design is this study? 1


Quasi-experimental

ii. Name the variables in this study. 2

Proportion of smokers in a country (1930)
Deaths in males (per million) from lung cancer (1950)

iii. State the Research Hypothesis (H1) for this study 4

The higher the proportion of smokers in 1930, the greater the number of deaths per million from lung cancer in 1950.

iv. Create a Word table of descriptive statistics for these data. 4

                              Mean      Standard deviation
Yearly cigarette consumption  601.818   381.099
Deaths per million            202.727   117.481

v. Create a scatterplot to explore the relationship between the two variables. 6

(Scatterplot of cigarette consumption against lung cancer deaths not reproduced in this text version.)

vi. Conduct an appropriate inferential test of the null hypothesis.

Fully describe the details of the inferential test. 3

The data meet the assumptions for parametric testing: skewness and kurtosis are within +/-2.5 and the data are ratio. For a correlational design, Pearson's r is appropriate.

What conclusion can you come to? 2

There was a significant strong positive correlation between yearly cigarette consumption in 1930 and deaths from lung cancer in 1950.


Give the statistical justification for this conclusion. 3

r=0.738, df=9, p=0.005, one-tailed
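The correlation can be checked directly from Table 7 in Python (a sketch assuming SciPy; SPSS reports the one-tailed probability, so the two-tailed value is halved here):

from scipy import stats

consumption = [240, 250, 310, 370, 450, 450, 500, 530, 1110, 1130, 1280]
deaths = [60, 90, 120, 160, 160, 240, 150, 250, 350, 460, 190]

r, p_two_tailed = stats.pearsonr(consumption, deaths)
print(r, p_two_tailed / 2)  # r is about 0.74; one-tailed p is about 0.005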

(25 Marks)


Study 3 – Appendix

Yellow Descriptive statistics

Green Parametric checks

Red Pearson correlation

EXAMINE VARIABLES=consumption deaths

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Explore

Notes

Output Created 04-Dec-2012 21:50:15

Comments

Input Active Dataset DataSet0

Filter <none>


Weight <none>

Split File <none>

N of Rows in Working Data

File

11

Missing Value Handling Definition of Missing User-defined missing values for

dependent variables are treated as

missing.

Cases Used Statistics are based on cases with no

missing values for any dependent

variable or factor used.

Syntax EXAMINE VARIABLES=consumption

deaths

/PLOT BOXPLOT STEMLEAF

/COMPARE GROUPS

/STATISTICS DESCRIPTIVES

/CINTERVAL 95

/MISSING LISTWISE

/NOTOTAL.

Resources Processor Time 00 00:00:00.562

Elapsed Time 00 00:00:00.585

[DataSet0]


Case Processing Summary

Cases

Valid Missing Total

N Percent N Percent N Percent

mean yearly cigarette

consumption 1930

11 100.0% 0 .0% 11 100.0%

death rate per million in

1950

11 100.0% 0 .0% 11 100.0%

Descriptives

Statistic

mean yearly cigarette

consumption 1930

Mean 601.8182

95% Confidence Interval for

Mean

Lower Bound 345.7925

Upper Bound 857.8439

5% Trimmed Mean 584.2424

Median 450.0000

Variance 145236.364

Std. Deviation 381.09889

Minimum 240.00

Maximum 1280.00

Range 1040.00

Interquartile Range 800.00

Skewness 1.002

Kurtosis -.691


death rate per million in

1950

Mean 202.7273

95% Confidence Interval for

Mean

Lower Bound 123.8024

Upper Bound 281.6522

5% Trimmed Mean 196.3636

Median 160.0000

Variance 13801.818

Std. Deviation 117.48114

Minimum 60.00

Maximum 460.00

Range 400.00

Interquartile Range 130.00

Skewness 1.143

Kurtosis 1.123

Descriptives

Std. Error

mean yearly cigarette

consumption 1930

Mean 114.90564

95% Confidence Interval for

Mean

Lower Bound

Upper Bound

5% Trimmed Mean

Median

Variance

Std. Deviation

Minimum

Maximum


Range

Interquartile Range

Skewness .661

Kurtosis 1.279

death rate per million in

1950

Mean 35.42190

95% Confidence Interval for

Mean

Lower Bound

Upper Bound

5% Trimmed Mean

Median

Variance

Std. Deviation

Minimum

Maximum

Range

Interquartile Range

Skewness .661

Kurtosis 1.279

mean yearly cigarette consumption 1930

mean yearly cigarette consumption 1930 Stem-and-Leaf Plot


Frequency Stem & Leaf

6.00 0 . 223344

2.00 0 . 55

3.00 1 . 112

Stem width: 1000.00

Each leaf: 1 case(s)


death rate per million in 1950

death rate per million in 1950 Stem-and-Leaf Plot

Frequency Stem & Leaf

2.00 0 . 69

5.00 1 . 25669

2.00 2 . 45

1.00 3 . 5

1.00 Extremes (>=460)

Stem width: 100.00

Each leaf: 1 case(s)


GRAPH

/HISTOGRAM(NORMAL)=consumption.

Graph


Notes

Output Created 04-Dec-2012 21:50:36

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

11

Syntax GRAPH

/HISTOGRAM(NORMAL)=consumptio

n.

Resources Processor Time 00 00:00:00.328

Elapsed Time 00 00:00:00.406

[DataSet0]


GRAPH

/HISTOGRAM(NORMAL)=deaths.

Graph


Notes

Output Created 04-Dec-2012 21:50:59

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

11

Syntax GRAPH

/HISTOGRAM(NORMAL)=deaths.

Resources Processor Time 00 00:00:00.265

Elapsed Time 00 00:00:00.380

[DataSet0]


Z-Scores

Cigarette consumption    Z-score

240.0 -0.949

250.0 -0.923

310.0 -0.766

370.0 -0.608

450.0 -0.398

450.0 -0.398

500.0 -0.267


530.0 -0.188

1110.0 1.333

1130.0 1.386

1280.0 1.780


CORRELATIONS

/VARIABLES=consumption deaths

/PRINT=ONETAIL NOSIG

/MISSING=PAIRWISE.


Correlations

Notes

Output Created 04-Dec-2012 22:03:07

Comments

Input Active Dataset DataSet0

Filter <none>

Weight <none>

Split File <none>

N of Rows in Working Data

File

11

Missing Value Handling Definition of Missing User-defined missing values are

treated as missing.

Cases Used Statistics for each pair of variables are

based on all the cases with valid data

for that pair.

Syntax CORRELATIONS

/VARIABLES=consumption deaths

/PRINT=ONETAIL NOSIG

/MISSING=PAIRWISE.

Resources Processor Time 00 00:00:00.046

Elapsed Time 00 00:00:00.065


[DataSet0]

Correlations

mean yearly

cigarette

consumption

1930

death rate per

million in 1950

mean yearly cigarette

consumption 1930

Pearson Correlation 1 .738**

Sig. (1-tailed) .005

N 11 11

death rate per million in

1950

Pearson Correlation .738** 1

Sig. (1-tailed) .005

N 11 11

**. Correlation is significant at the 0.01 level (1-tailed).


Study 4

A qualitative researcher is interested in capturing group discussions between men aged 18-20 about 'lad mags' such as Loaded and Nuts. The researcher is particularly interested in how these magazines represent masculinity.

a. What qualitative data collection method would you recommend to the researcher? (1)

A focus group.

b. Why did you recommend the data collection method above? List two advantages of using this method (2)

Focus groups generate rich data

The setting is relatively naturalistic, so ecological validity is higher

c. What ethical issues do researchers face when using the method you have recommended?(4)

Although participants can be given pseudonyms, anonymity cannot be guaranteed as they may reveal identities after the focus group.

Participants may use focus groups as a means of confronting others.

Results may be biased by strong group members.

In groups, participants may give socially acceptable answers rather than saying how they really feel.

d. Give an example of a semi-structured question that the researcher might ask. (2)

How would you describe a reader of your magazine?

e. How many people should be recruited for the study?                   (1)

6 people

f. Name one qualitative method the researcher might use to analyse the data.  (1)

Grounded theory

g. Describe the analytic method you named in more detail. Who developed this method? What is the aim of this analytic method? What does the analysis try and capture?                                                                                                                       (5)

Grounded theory involves identifying categories in data through coding. Through comparison of codes and data, theories emerge. Theories are checked against the data until, ideally, all the data have been described and new categories cannot be identified. Glaser and Strauss developed grounded theory. The aim is to use the data to generate theories rather than relying on pre-existing theories. The analysis tries to capture theories which are 'grounded' in the context they come from, and also the process by which theories emerge from the initial codes generated from the data.

h. Describe 3 differences between qualitative and quantitative methods. (9)

Quantitative research states a hypothesis, such as ‘People tested in the same context they study in will score higher on a test than people tested in a different context,’ which can be tested experimentally. The hypothesis is subject to tests of statistical probability such as t-tests. Qualitative research states a research question, such as ‘This research aims to explore students’ experiences of test-taking,’ which gives the researcher an area to explore and is not subject to inferential tests using statistical methods.

During a study, a qualitative research question can be revised in response to data, and what the participants feel is important, whereas a quantitative hypothesis shouldn’t change. For example, ‘This research aims to explore students’ experiences of test-taking,’ could be revised during the course of the study to focus on exam stress if that emerged as a predominant theme in the data. However, the hypothesis ‘People tested in the same context as they study in will score higher on a test than people tested in a different context,’ shouldn’t be changed even if the data suggest other areas to explore, or if the data seem to go against the hypothesis. If this is the case, another study should be conducted.

Qualitative research generates textual data, e.g. from semi-structured interviews, whereas quantitative research generates numerical data, e.g. from correlational studies. Even superficially textual data in quantitative research such as nominal data can be handled numerically. Qualitative data is explored using text-based methods such as grounded theory or thematic analysis.

(25 Marks)