
Practical Research 2 (Q2)

Ella Cañas Ansay


Table of Contents

Module 3: Learning from Others
  Introduction
  Learning Objectives
  Lesson 1: Review of Related Literature
  Lesson 2: Ethical Standards in Writing
  Lesson 3: Citation and Referencing
  Assessment
  Summary
  References

Module 4: Methodology
  Introduction
  Learning Objectives
  Lesson 1: Hypothesis
  Lesson 2: Data Collection
  Lesson 3: Instrument Development
  Lesson 4: Establishing Validity and Reliability
  Lesson 5: Description of Sample
  Lesson 6: Ethical Considerations
  Lesson 7: Data Analysis
  Assessment
  Summary
  References


MODULE 3 LEARNING FROM OTHERS

Introduction

Now that we have discussed the introductory parts of your research paper, the next step is to obtain support for your study from the work of others. Some researchers use the term "Review of Related Literature" for this section, while others prefer "Literature Review." Regardless of the label, this section is derived from the previous works of several authors. It therefore requires critical reading skills and proper acknowledgment of the authors whose works you draw on.

It is important to understand the role that the literature plays in the research process before you start gathering sources, as this will place your own findings in context. After all, your literature review will demonstrate how familiar you are with your topic.

In this module, we will give an extensive discussion of the review of related literature. This includes its definition, ethical standards in writing, and citation and referencing.

Learning Outcomes

At the end of this module, the learners should be able to:

1. present a written review of related literature;

2. follow ethical standards in writing related literature; and

3. cite related literature using the APA style.


Lesson 1. Review of Related Literature

Bordonaro (2010) defined the review of related literature as a way of relating the areas already explored in previous studies to the topic that the researcher is investigating. Ridley (2008) added that a literature review includes the theories and related research connected to your paper. It is also where you position your argument on the subject of your research.

When doing the literature review, you may find yourself revising your research questions many times, since the review involves refining the research question to make your research more valid and reliable. Students writing a literature review frequently wonder how many articles they should cite and how long the review should be. Standards on this matter vary, depending on the nature of the topic, the amount of literature on it, and the instructions of your teacher.

You should establish two main goals for your literature review. First, attempt to provide a comprehensive and up-to-date review of the topic. Second, try to demonstrate that you have a thorough command of the field you are studying. The level of accuracy expected in a paper is quite high (Galvan, 2002). This will require you to edit your writing to a level that far exceeds what may be expected in a term paper.

In Dr. Villanueva's (2018) lecture, she pointed out that a good literature review shows where prior studies and research agree and disagree, plus where major questions that require answers remain. It presents what is currently known in your chosen research topic and gives direction to future researchers. A good literature review presents new techniques, procedures, or even research designs that are worth emulating to gain new, useful insights and to sharpen the focus of the hypothesis.

Types of Review of Related Literature

Since literature reviews are pervasive across academic disciplines, different approaches can be adopted to organize them effectively. This means that the type of review that you write will depend on your research topic. The University of Southern California (2015) created a summarized list of the various types of literature review.

1. Self-study reviews

This type of review increases your reader’s confidence in an area that is rarely published.


2. Context reviews

This type of review places your project in the big picture.

3. Historical Reviews

This type of review traces issues, concepts, theories, or phenomena from the time they first appeared in the literature and examines how they have changed over time. The intention of this type of review is to place research in a historical context so that scholars can observe the changes and developments and see how these can guide future research.

4. Theoretical Reviews

This type of review examines the extent to which a particular theory has already been explored. It is also used to check the relationships among existing theories to determine whether they can help explain an emerging question.

5. Methodological Reviews

We can simply say that this type of literature review focuses on the researcher's method (e.g., survey or experimentation) in conducting the study. Since some researchers' approaches do not consider ethical issues that may affect the validity and reliability of the results, it is important to be extra careful before adopting the claims of others. Instead of accepting other researchers' assertions right away, it is more important to examine "how" they arrived at those claims.

6. Integrative Reviews

This type of review combines existing studies that may have the same hypotheses or claims. By consolidating them, new knowledge can be generated. A good integrative review should be clear, accurate, and replicable, just like primary research.

Purpose of Review of Related Literature

When one plans research, he/she has to build the plan on positions provided by theory or by observations from experience. One cannot do research without a review of related literature, since research is a disciplined, systematic, controlled, empirical, and critical study of the relationships among defined variables in natural phenomena (Ong Kian Koc, 1999). The following are the purposes of conducting a review of related literature, as enumerated by Villanueva (2018).

1. Limit the problem area. To apply a sufficient and competent analysis, the problem should be specific so that the treatment to be used is also specific. Consulting reliable research articles will help you improve your research.

2. Avoid unnecessary repetition. If all the existing research uses the same methods to carry out a study, it does not mean that these are the only proper ways to tackle the problem you are focused on. Sometimes it depends on your situation and variables, which should be considered first before adopting other researchers' methods.

3. Search for new approaches. Try to consider other researchers' points of view rather than relying on the viewpoint of a single researcher, especially if a particular field lacks research or studies.

4. Recommend suitable methods. Try to check on how other researchers utilize different methods

such as their research instruments and designs, sampling techniques, data collection, and data

interpretation. This is helpful so that the appropriate method for your problem can be applied.

5. Sample current opinions. Check and read newspapers, non-technical articles, or even magazines to acquire fresh ideas or problems that other researchers have not touched on yet.

Conducting the Review of Related Literature

At this juncture, it is expected and presumed that you have already chosen your topic, written your research questions, and stated the problem. You cannot proceed to select and review your literature without them. If you have, then you can proceed to the next steps listed by Jerusalem et al. (2017).

1. Finding Information

▪ You should know "what" information to look for and "where" to find it.


▪ The “what” information to look for must be answered by your topic, research objectives,

and statement of the problem.

▪ The obvious places to look for the information you need are the library and the internet.

2. Evaluating the Content

▪ Include only credible scholarly and academic articles or sources by evaluating their quality and scholarliness. Use the guidelines presented in Table 3.1 to help you in your evaluation.

Table 3.1. Guidelines for evaluating the sources (Eastern University, 2017)

Authority: What are the author's credentials? Can you identify their institutional affiliations? What is the author's experience on the subject?

Currency: When was the source published? Was it within the last 5 years? Is it outdated? Does it meet the time needs of your topic?

Documentation: Does the author cite credible, authoritative sources? Is evidence of scholarly research present? Do they properly cite their sources?

Intended Audience: Who is the intended audience? Scholars? Researchers? General audiences?

Objective/Purpose: What is the author's goal in writing it? To entertain? To inform? To influence? How objective is the source?

Relevancy: Is it relevant to your topic? Does it provide any new information about your topic?


▪ If you are using the internet to search for related literature, be mindful that not all materials on the internet are credible.

3. Writing the Review

▪ When writing the literature review, you should show relevant issues and findings from your

research articles.

▪ Make your presentation of studies organized and coherent by following a pattern, such as presenting details chronologically, from known to unknown, categorically, or from general to specific.

▪ The models in the research articles and book descriptions of the past research you read should be followed as closely as possible.

▪ When reading other research or studies, do not include all the information; instead, take down only what is necessary to help you rationalize gaps and variables, and to define, contextualize, and explain the arguments you are making in your research paper.

▪ Arguments should be clear, logical, and fact-based (Galvan, 2002).

Structure of a Review of Related Literature

In quantitative research, the literature review is organized by sections: introduction, critical

review, and summary (Ong Kian Koc, 1999).

Introduction: It states the purpose or scope of the review. The purpose may be a preliminary review in order to state a problem or develop a proposal, or it may be an exhaustive review in order to analyze and critique the research-based knowledge on the topic or to strengthen the link between the variables included in the study.

Critical review: The essence of the review is the criticism of the literature. The researcher must arrange the literature review as it relates to the selection of the variables and the significance of the problem. Merely summarizing one study after another does not make for an informative literature review. Studies should be classified, compared, and contrasted in terms of the way they contribute or fail to contribute to the knowledge on the topic, including criticisms of the designs, sample sizes, and methods used to obtain such knowledge.

Summary: It presents the status of knowledge on the topic and identifies gaps in it. The gaps in knowledge may be due to methodological difficulties in gathering data, inadequate sampling techniques, a lack of studies related to the knowledge, or inconclusive statements from the results of the studies. The summary should also provide similarities and differences between the study to be undertaken by the researcher and those conducted by other researchers.

Utilizing the Internet for Research

With the presence of internet nowadays, you can find useful information using it after

identifying your problem for research.

▪ Google Scholar is a very useful tool, providing a search of scholarly literature across many

disciplines and sources including theses, books, abstracts, and articles.

o ELSEVIER: www.sciencedirect.com. Elsevier is a Dutch publishing and analytics company specializing in scientific, technical, and medical content (Physical Sciences and Engineering, Life Sciences, Social Sciences and Humanities).

o EBSCO: www.ebsco.com (Agriculture, Biology, Education, Engineering, Environmental Studies, Film, Government, History, Kinesiology and Sports, Literature, Medicine and Health, Music and Performing Arts, Political Science, Psychology, Sociology)

o Springer: www.springer.com (Agriculture, Biology, Education, Engineering, Environmental Studies, Film, Government, History, Kinesiology and Sports)

▪ Educational Resources Information Center (ERIC) is an internet-based digital library of

education research and information. It provides access to bibliographic records of journal

and non-journal literature from 1966 to present.


Example of Review of Related Literature

Below is an excerpt taken from the literature review of Thornbury (2020) entitled “The

Relationship between Instructor Course Participation, Student Participation, and Student

Performance in Online Courses”.

Online learning developed and continues to grow in popularity out of a need to make

learning more accessible to individuals in various phases of life and with varying personal

situations, but with a desire to continue their professional and academic development

(Fedynich, 2014). The National Center for Education Statistics reported that adult learners

(ages 25+) made up over half the part-time undergraduate enrollments at 4-year institutions

in 2016. The traditional classroom is often unappealing or not an option for the adult learner

population due to access limitations or obligations such as employment and family (Fedynich,

2014). The growth in the adult learner population has contributed to the ubiquity of online

instruction at institutions of higher education (Allen & Seaman, 2016). As the popularity and

acceptance of online learning continues to grow, institutions of higher education are looking

for ways to meet the changing needs and expectations of today’s learners (Johnson et al.,

2015). Competition among colleges and universities for students, reduced state funding, and

the need to do more with less are fueling additional changes and innovations in post-

secondary institutions (Macfadyen & Dawson, 2010). (INTRODUCTION)


This chapter includes a review of two foundational frameworks for online learning; (1) three

types of interaction developed by Moore (1989), and (2) the community of inquiry theoretical

framework established by Garrison and Akyol (2013). Greater attention will be given to the

instructor-learner interaction and teaching presence components of these frameworks as

they relate to instructor participation in the learning environment. The current literature on

participation in the online classroom will also be reviewed. The chapter will close with an

exploration of the current use of LMS data by researchers to answer questions related to the

learning experience – more specifically the instructor’s impact on learning in the virtual

environment.

Teaching and Learning Online (VARIABLES INVOLVED)

Online instruction developed out of the availability of new technologies that could

support remote access and communication and the need to educate a new kind of workforce

– a knowledge-based workforce (Bates, 2015). The format and methods of the early online

classroom would mimic those of the traditional face-to-face classroom; some even requiring

synchronous meetings (Pittman, 2013). In starting the experimental high school, Benton

Harbor, the University of Nebraska indicated that their goal was to work within their existing

instructional resources to provide training that met their standards of quality for graduation

(Moore & Kearsley, 2011). Although the basic instructional premises are the same, the

realities of the technology being used to deliver instruction at a distance necessitated new

theories and frameworks for teaching and learning (Moore, 1989).

Three Types of Interaction (VARIABLES INVOLVED)

As one of the first researchers to focus on interaction in courses taught at a

distance, Moore developed the theory of transactional distance for distance education

(Garrison & Cleveland-Innes, 2005; Moore, 1993). The term “transactional” stems from

Dewey’s (1938) theory of knowledge as transaction, which asserted that knowledge is

influenced by the environment as well as an individual’s perceptions of the experience

(Giossos, Koutsouba, Lionarakis, & Skavantzos, 2009).


Moore (1993) defined transactional distance as a “pedagogical concept” (p. 22) pertaining to

the altered relationships between instructor and learner when separated by space and time

in a distance learning setting. The original transactional distance education theory had three

variables: dialog, structure, and learner autonomy (Moore, 1993).

Moore suggests that the terms dialog and interaction are synonymous. Later he

further delineated interaction into three types: learner-instructor, learner-content, and learner-

learner (Garrison & Cleveland-Innes, 2005; Moore, 1989). Moore’s (1989) types of

interaction spurred much research into interaction in distance education (Battalio, 2007;

Garrison & Cleveland-Innes, 2005; Kuo, Walker, Belland, & Schroder, 2013; Macfadyen &

Dawson, 2010).

Community of Inquiry

Garrison et al. (2000) elaborated on Moore’s transactional distance theory to

incorporate what they termed, educational presence. They argued that educational presence

“is more than social community and more than the magnitude of interaction among

participants” (Garrison & Cleveland-Innes, 2005, p. 134). Garrison et al. (2000) argued that

an effective educational experience is “embedded in a community of inquiry” (p. 88)

regardless of the mode of delivery, although it calls for special considerations in distance

learning. The community of inquiry theoretical framework has three elements: cognitive

presence, social presence, and teaching presence (Garrison & Akyol, 2013). These three

elements are further delineated into categories for research purposes. Cognitive presence

consists of triggering events, exploration, and integration (Garrison & Akyol, 2013). Social

presence includes emotional expression, open communication, and group cohesion

(Garrison & Akyol, 2013). Lastly, examples of teaching presence are categorized as course

design and organization, facilitation of discourse, or direct instruction (Anderson et al., 2001).

Teaching Presence (VARIABLES INVOLVED)

Anderson et al. (2001) defined teaching presence as “the design, facilitation, and

direction of cognitive and social processes for the purpose of realizing personally meaningful


and educationally worthwhile outcomes” (p. 5). Specifically, teaching presence is the

selection and organization of course content, presentation of course content, “intellectual and

scholarly leadership” (Anderson et al., 2001), subject matter expertise, directing knowledge,

directing attention, confirming understanding, diagnosing misconceptions, and “encouraging

active discourse and knowledge construction” (Garrison et al., 2000, p. 93). Cognitive

presence and the social presence that supports it, are dependent on teaching presence

(Garrison & Akyol, 2013; Shea & Bidjerano, 2009).

Participation as Visible Evidence of Interaction, Teaching Presence, and Learning

The concepts of participation, interaction, and engagement often overlap and are

operationalized in a variety of ways in the literature (Beer et al., 2010; Henrie, Halverson, &

Graham, 2015; Hrastinski, 2008, 2009; Morris, Finnegan, & Wu, 2005; Ravenna, Foster, &

Bishop, 2012). Morris et al. (2005) defined participation as “student engagement in specific

learning activities” (p. 224) including page views, discussion posts read, and original

discussion postings. Henrie et al. (2015) operationalized engagement as frequency of logins,

number of postings, responses and hits, frequency of posts or views, participation, and time

spent online or a combination therein (p. 43), where participation is an observable indicator

of engagement.

LMS: Changing Learning

Just as online learning has become ubiquitous in higher education, so too has the

use of LMSs (Beer et al., 2010; Joksimović et al., 2015; You, 2016). Today’s LMSs help

universities and colleges meet the demand of a virtual student body (Macfadyen & Dawson,

2010), and provide the technologies necessary to facilitate social and constructivist learning

methodologies in the online classroom (Beer et al., 2010; Macfadyen & Dawson, 2010; Wei,

Peng, & Chou, 2015). However, as a result of the wide spread adoption of LMSs, the

development of learning experiences has become somewhat prescriptive because these

applications force course development into predefined molds around particular technologies

or LMS functionality (Beer et al., 2010). Beer et al. (2010) argued that LMSs are changing

teaching strategies and that the change is likely affecting how students engage in learning.

For example, in online learning environments students are often required to interact with content and other learners without any prompting from an instructor, a process which can affect motivation and engagement.

Conclusions and Gaps in Current Research (SUMMARY)

Technology has evolved since the initial development of the theories and frameworks of Moore (1989, 1993) and Garrison et al. (2000). However, the core principles of their ideas, and the findings of research they have spurred to this day, persist. Current research using available LMS activity log data has continued to show positive correlations between student participation and academic achievement (You, 2016).


Lesson 2. Ethical Standards in Writing

A general principle underlying ethical writing is the notion that the written work of an author represents an implicit contract between the author and the readers (Jerusalem et al., 2017). This means that the reader assumes that the author is the sole originator of the written work unless credit is given to others. Thus, understanding the basic ethical norms of scientific conduct is important before writing a paper, specifically the literature review. Three ethical issues are especially important and relevant to students who will be conducting research projects, as follows:

1. Plagiarism

According to Neville (2017), plagiarism is a term used to describe a practice that involves

knowingly taking and using another person’s work and claiming it, directly or indirectly, as your

own.

Types of Plagiarism



a. Blatant Plagiarism. This is also known as intentional plagiarism. It happens when you claim to be the sole author of a work written completely by someone else (Jackson State Community College, n.d.). This includes letting someone else write part of a paper for you, making up bogus citations, or turning in work done as a group as if it were yours alone.

b. Technical Plagiarism. This is also known as unintentional plagiarism. It occurs when the writer is not trying to cheat but fails to abide by the accepted methods of using and crediting sources.

If you commit blatant plagiarism, it might result in failing the course or, worse, being dismissed from school. Committing technical plagiarism might not get you dropped from school, but your teacher may give you a low grade on the submitted paper.

Other people's ideas can be used as long as you properly cite them and do not claim ownership of them. As part of the academic community, you are expected to read, properly analyze, and respond to scholarly papers and ideas when writing your paper. In short, you should cite properly to avoid accusations of plagiarism; it is also your way of showing respect for others' work.

2. Language Use

Aside from plagiarism, another ethical consideration in writing is the use of language. A writer must avoid racially charged, sexist, or otherwise offensive language and tendencies. In other words, it is the ethical responsibility of the writer to be sensitive to the sensibilities of his or her audience. Here are some guidelines for language use in writing:

a. Avoid hasty generalizations about an ethnic minority or any other category of people, including generalizations based on sex and gender.

b. Use accurate and politically correct terminology when discussing racial groups.

c. Put the person first and the description after when discussing people with disabilities. Example: "the man who is blind" rather than "the blind man."

3. Fraud


Aside from plagiarism, another temptation that is ever present among students conducting research is to fabricate data and results just to get through the coursework. This is done for a variety of reasons, but the foremost is the workload involved in gathering or collecting data. Hence, a researcher must observe the following to avoid fraud:

a. Honesty. Do not fabricate, misrepresent, or falsify data. Do not lie to your colleagues, to your research sponsor, or to the public. Report all your results, data, methods, and procedures honestly.

b. Objectivity. When creating your experimental design, conducting data analysis, or interpreting data, avoid bias wherever objectivity is required. Disclose any personal or financial interests that can affect your research.

c. Integrity. Show consistency and sincerity in thought and action; keeping promises and agreements alone is not enough.

d. Carefulness. Be careful and critical when inspecting your work and your colleagues' work. Keep track of your research activities, such as data collection, data analysis, research design, and transactions with journals, in a record book to avoid careless mistakes and negligence.

Lesson 3. Citation and Referencing

The previous lesson emphasized citing your references to avoid committing plagiarism. Various styles exist for doing this. In this course, however, the guidelines for citation and for preparing the reference list should be consistent with the principles in the Publication Manual of the American Psychological Association (APA), 7th edition.

You can use the following formats summarized by Saint Mary's College of California Library (2020):

Table 3.2. APA guide for referencing

Book with one author
  In-text citation: (Gonzales, 2019)
  Reference list format: Author's last name, First and middle initials. (Year of publication). Title of the book (in italics). Publisher name.
  e.g., Gonzales, M. (2019). The gendered society. Oxford University Press.

Book with two authors
  In-text citation: (Gonzales & Jones, 2019)

Book with three or more authors
  In-text citation: (Gonzales et al., 2019)

Journal articles
  In-text citation: (Klimonske & Palmer, 1993)
  Reference list format: Author's last name, Initial. (Year of publication). Article title. Journal Title (in italics), Volume(issue), page numbers. Digital object identifier (DOI).
  e.g., Klimonske, R., & Palmer, S. (1993). The ADA and the hiring process in organizations. Consulting Psychology Journal: Practice and Research, 45(2), 10-36. https://doi.org/10.1037/1061-4087.45.2.10

Websites
  In-text citation: (Sparks, 2018)
  Reference list format: Author. (Date). Title (in italics). Website name. URL.
  e.g., Sparks, D. (2018, September 12). Mayo mindfulness: Practicing mindfulness exercises. Mayo Clinic. https://newsnetwork.mayoclinic.org

Newspaper article online
  In-text citation: (Cieply, 2013)
  Reference list format: Article author's last name, First initial. (Year, Month Day of publication). Article title. Newspaper title (in italics).
  e.g., Cieply, M. (2013, November 11). Gun violence in American movies is rising, study finds. New York Times.

The list of references is not limited to the four kinds of sources mentioned in Table 3.2. For a more complete formatting guide, you can visit the APA Style website.
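If you keep your source details in a list or spreadsheet, you can also assemble reference entries programmatically. The short Python sketch below is only an illustration of the journal-article pattern in Table 3.2; the function name and its parameters are hypothetical and are not part of any official APA tool.

```python
# Illustrative sketch only: assembling an APA 7th edition journal-article
# reference from its parts, following the pattern in Table 3.2. The function
# and field names are hypothetical.

def apa_journal_reference(authors, year, title, journal, volume, issue, pages, doi=None):
    """Return an APA-style reference string for a journal article."""
    # Join authors as "Last, F." parts, with "&" before the final author.
    if len(authors) == 1:
        author_part = authors[0]
    else:
        author_part = ", ".join(authors[:-1]) + ", & " + authors[-1]
    reference = (f"{author_part} ({year}). {title}. "
                 f"{journal}, {volume}({issue}), {pages}.")
    if doi:
        reference += f" https://doi.org/{doi}"
    return reference

# Example using the Klimonske & Palmer entry from Table 3.2:
print(apa_journal_reference(
    authors=["Klimonske, R.", "Palmer, S."],
    year=1993,
    title="The ADA and the hiring process in organizations",
    journal="Consulting Psychology Journal: Practice and Research",
    volume=45, issue=2, pages="10-36",
    doi="10.1037/1061-4087.45.2.10"))
```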

For a meticulous researcher, accurate citation is part of the process. You should also make sure that the content from the cited sources is accurately reported (Day & Gastel, 2016). Aside from keeping you from committing plagiarism, careful citation helps keep you from alienating those evaluating your paper.

Preparing the Reference List

According to Walden University (2020), a reference list is the list of publication information for the sources that were cited in the literature. It is intended to give readers the information they need in case they want to find those sources. Some publications refer to it as a bibliography.

Below are some guidelines provided by Bordonaro (2010) for preparing the reference list.

▪ A resource or material you did not cite should not appear in your list of references.

▪ List references alphabetically by author's surname (see the short sketch after these guidelines).

▪ When writing your references, use a hanging indent: the second and any subsequent lines of an entry are indented and aligned with each other.

Example:

Gambles, I. (2009). Making the business case: Proposals that succeed for projects that work. Farnham, England: Ashgate.

▪ Double-check the list by cross-checking it against the citations in the body of the review.
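Here is the short sketch referred to above: a minimal Python illustration of alphabetizing reference entries by surname. It assumes each entry string begins with the author's surname, so sorting the strings sorts the list by surname; the entries shown are only examples.

```python
# Illustrative sketch only: alphabetizing reference entries by surname,
# as the guidelines above suggest. Each entry is assumed to start with
# the author's surname.
references = [
    "Ridley, D. (2008). The literature review: A step-by-step guide for students.",
    "Bordonaro, K. (2010). How to write a literature review.",
    "Galvan, J. (2002). Writing literature reviews.",
]

# Sorting the strings sorts the list by surname.
for entry in sorted(references):
    print(entry)
# Bordonaro comes first, then Galvan, then Ridley.
```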


Assessment Tasks

TASK NO. 1 (WRITTEN WORK)

Instructions: Encircle the letter of the correct answer.

1. What is a good review of related literature?

a. It points out where prior studies agree, where they disagree, and where major questions remain.

b. It collects what is known up to a point in time.

c. It hardly indicates the direction for future research.

2. Suppose a researcher wanted to examine the evolution of computers over time. What type of review should he conduct?

a. Context reviews

b. Theoretical reviews

c. Historical reviews

3. The researcher's literature review mentioned the adverse effects of taking multivitamins, as opposed to the majority's belief that multivitamins are good for one's health. What purpose of the review of related literature has been served?

a. Unnecessary repetition was avoided.

b. New approaches were discovered.

c. The problem area was limited.

4. What guideline should you look for if you want to evaluate the timeliness

of a journal article you found on the internet?

a. Currency

b. Intended audience

c. Purpose


5. What is the correct order of the structure of a literature review?

a. Critical review ⇾ Introduction ⇾ Summary

b. Introduction ⇾ Critical Review ⇾ Summary

c. Summary ⇾ Introduction ⇾ Critical Review

6. What tool provides a search of scholarly literature across many disciplines and sources

including theses, books, abstracts, and articles?

a. Google Scholar

b. Elsevier

c. Educational Resources Information Center

7. What is a practice that involves claiming an idea/work of someone else?

a. Fraud

b. Ethical Standards in Writing

c. Plagiarism

8. What do you call an act that fabricates data and results just to get over the coursework?

a. Plagiarism

b. Fraud

c. Objectivity

9. What referencing style is required to be used in your research?

a. MLA

b. APA

c. Chicago

10. If the researcher avoided bias in the experimental design, what attitude was exhibited?

a. Honesty

b. Objectivity

c. Carefulness


TASK NO. 2 (WRITTEN WORK)

Instructions: Use the APA 7th edition style to create a reference list for the following information. Write your answers in the box provided.

1. Book Title: Philippine History and Government Through the Years

Authors: Francisco M. Zulueta and Abriel M. Nebres

Published by National Bookstore in Mandaluyong City

Date: June 2002

2. Journal Title: Reclaiming Instructional Design

Journal Publication: Educational Technology

Volume: 36

Issue: 5

Pages: 5-7

Date: August 5, 1996

Author: Mary Merrill

3. Author: National Institute of Mental Health

Retrieval Date: May 2015

Title: Anxiety Disorders

Website URL: http://www.nimh.nih.gov/health/topics/anxiety-disorders/index.s


TASK NO. 3

Instructions: Write the review of related literature of the topic assigned to you. Observe

proper in-text citation.

REVIEW OF RELATED LITERATURE



TASK NO. 4

Instructions: List the references you used in writing the review of related literature. Observe

proper guidelines.

REFERENCE LIST


Summary

▪ A review of related literature is a place to make connections between what you are

investigating and what has already been investigated in your subject area.

▪ A good review points out areas where prior studies agree, where they disagree, and where major questions remain.

▪ There are six types of literature review: (1) Self-study reviews; (2) Context reviews; (3)

Historical reviews; (4) Theoretical reviews; (5) Methodological reviews; (6) Integrative reviews.

▪ To evaluate the content of a source, assess its authority, currency, documentation, intended

audience, objective, and relevancy.

▪ The review of related literature of a quantitative research is organized by sections—the

introduction, critical review, and summary.

▪ Google Scholar is a useful tool for finding relevant information about a research problem.

▪ Plagiarism is a term used to describe a practice that involves knowingly taking and using another person's work and claiming it as your own.

▪ The two types of plagiarism are blatant and technical plagiarism.

▪ Blatant plagiarism is when someone claims to be the sole author of a work written by someone

else.

▪ Technical plagiarism happens when the writer fails to abide by the accepted methods of using and crediting sources.

▪ The guidelines for citation and for preparing the reference list should be consistent with the principles in the Publication Manual of the American Psychological Association (APA), 7th edition.


References

Bordonaro, K. (2010). How to Write a Literature Review: An Overview for International Students.

[PowerPoint Slide]. Brock University.

Day, R. & Gastel, B. (2016). How to Write and Publish a Scientific Paper. (8th ed.). California:

Greenwood.

Galvan, J. (2002). Writing literature reviews: A guide for students of the social and behavioral sciences (6th ed.). New York: Pyrczak Publishing.

Jackson State Community College. (n.d.). Intentional plagiarism. Retrieved from https://www.jscc.edu/academics/programs/writing-center/plagiarism/intentional-plagiarism.html

Jerusalem, V., Del Rosario-Garcia M., Delos Reyes, A., Palencia, M., & Calilung, R. (2017).

Practical Research 2: Exploring Quantitative Research. (1st edition). Philippine Copyright.

Ong Kian Koc, B. (1999). EDSC 341 Research Seminar in Science Education. UP Open University: Office of Academic Support and Instructional Services.

Ridley, D. (2008). The literature review: A step-by-step guide for students. London: Sage

Publications, p. 2.

Saint Mary’s College of California Library. (2020). APA Style 7th edition.

Thornbury, E. (2020). The Relationship Between Instructor Course Participation, Student

Participation, and Student Performance in Online Courses. [Doctoral dissertation,

University of Tennessee at Chattanooga]

University of Southern California. (2015, September 7). Organizing Your Social Sciences

Research Paper. Retrieved from USC Libraries:

https://libguides.usc.edu/writingguide/literaturereview

Villanueva, R. (2018). Session 7: What is Literature Review. [PowerPoint slides]. University of the

Philippines Los Baños.

Walden University. (2020). Q. What is a reference list? Retrieved from:

https://academicanswers.waldenu.edu/faq/72739


MODULE 4

METHODOLOGY

Introduction

In the first section of your research paper, a little information about the methods to be used should be stated. The reasons for choosing that specific method over others are discussed in that chapter as well. This time, in the Methodology, all the procedures that will be used by the researcher must be stated in complete detail. This is the part where the researcher narrates the necessary steps that he/she took to gather the data that need interpretation and analysis.

Research methodology, as defined by University of the Witwatersrand (2020), is the

specific procedures or techniques used to identify, select, process, and analyze information about

a topic. This section allows the reader to critically evaluate a study’s overall validity and reliability.

The main purpose of the methodology is to describe the experimental design so that another

researcher can repeat it if deemed necessary (Day & Gastel, 2016). These methods should be presented in chronological order.

In this module, we will discuss the methodology including the hypothesis, sampling

procedures, data collection, and data analysis.

Learning Outcomes

At the end of the lesson, the students shall be able to:

1. formulate a research hypothesis;

2. plan the data collection procedure;

3. construct an instrument and establish its validity and reliability;

4. describe the sampling procedure and sample;

5. describe the ethical considerations in carrying out the methodology;

6. plan the data analysis using statistics and hypothesis testing; and

7. present a written research methodology.


Lesson 1. Hypothesis

The word hypothesis comes from two Greek roots, which together suggest that it is a sort of 'sub-statement'. It is often referred to as an 'explanation' of the facts someone has observed (Singh, 2006). Simply put, someone has a 'theory' about a particular thing. The word hypothesis consists of two parts:

Hypo + thesis = Hypothesis

'Hypo' means tentative or subject to verification, while 'thesis' means a statement about the solution of a problem (Singh, 2006).

Basically, a hypothesis is "an educated and testable guess about the answer to your research question." Making a prediction is one key feature of hypothesis formulation (DeMatteo et al., 2005). These predictions are then tested by gathering and analyzing data to determine whether they can be supported or rejected. In their simplest form, hypotheses are typically phrased as "if-then" statements. For example, a researcher may hypothesize that "if someone studies for four hours every day, then his exam scores will be high." This hypothesis makes a prediction about the effect of studying on exam scores, and the prediction can be supported or rejected once the gathered data have been analyzed.

Since hypotheses propose a tentative solution to a problem, suppose one of your good friends did not show up at your birthday celebration. A possible problem would be, "What are the factors that contributed to the absence of your friend at your birthday celebration?"

To solve this problem, you enumerated the possible explanations for the problem:

▪ Your friend is sick

▪ An emergency came up in the family

▪ There was an accident along the way

▪ Your friend decided to ditch the celebration

▪ Your friend was not able to buy you a gift

▪ Others


Some of these explanations can be outright rejected. The guess that your friend ditched the

celebration can be rejected knowing that she is a good friend. Finding out which among the

guesses are false can help you narrow down the hypothesis.

Problem solving in research requires a hypothesis to direct us on how to go about solving a problem (Ong Kian Koc, 1999). It should be testable, brief, consistent, and clear.

Kinds and Forms of Hypothesis

Hypotheses can be classified into several forms. These forms are determined by their functions, which is why your hypothesis must be the best guess with respect to the available evidence. In other cases, the type of statistical treatment calls for a particular form of hypothesis (Singh, 2006). There are three forms of hypotheses that we will discuss in this lesson:

1. Declarative Statement. A hypothesis can be written as a declarative statement that asserts a relationship or difference between variables. The assumption of a difference between variables implies that the researcher has enough evidence to claim it.

Example: There is a significant effect of classroom size on students' behavior. (This is merely a declaration of the independent variable's effect on the dependent variable.)

2. Directional Hypothesis. A hypothesis is directional if it suggests an expected direction in

the relationship or difference between variables.

Example: A bigger classroom size results in a manageable set of students, whereas a smaller classroom size results in loud students.

3. Non-directional Hypothesis. A null hypothesis is a statement of no difference or an assertion of no relationship. This is a testable form of hypothesis, called a statistical hypothesis.

Example: There is no significant effect of classroom size on the students’ behavior.

Relationship between Hypotheses and Research Design

A hypothesis can be stated in different forms depending on the type of research you are conducting. If your study is about the relationship between variables, then your hypothesis should be stated as a relationship between those two variables. In correlational research, for example, you might formulate a hypothesis about whether there is a relationship between an individual's decision-making ability and alcohol intoxication. The case is different if you are using an experimental design: there, the hypothesis would be about whether alcohol intoxication causes poor decision-making ability in an individual. We can say that hypothesis testing depends on the kind or type of research design you are going to use (DeMatteo et al., 2005).
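To make the idea of testing a null hypothesis concrete, here is a minimal Python sketch for the correlational case. The numbers and variable names are invented purely for illustration and are not from the module; the sketch only shows the general pattern of computing a test statistic and comparing the p-value to a significance level.

```python
# Illustrative sketch only (not part of the module): testing a null hypothesis
# of "no relationship" once data have been gathered. The data are invented.
from scipy import stats

# Hypothetical paired observations: hours studied per day and exam scores.
hours_studied = [1, 2, 2, 3, 4, 4, 5, 6]
exam_scores   = [55, 60, 58, 65, 70, 72, 78, 85]

# Pearson correlation tests the null hypothesis of no linear relationship.
r, p_value = stats.pearsonr(hours_studied, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# A common decision rule: reject the null hypothesis when p < 0.05.
if p_value < 0.05:
    print("Reject the null hypothesis: there is a significant relationship.")
else:
    print("Fail to reject the null hypothesis.")
```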

Lesson 2. Data Collection

Data

One thing that we should remember as researchers is that we should not treat data as an absolute reality, but only as a manifestation of that reality. To further understand this statement: we can see what other people are doing, the behaviors they are showing, the things they are creating, and how these actions affect their environment or other people, but we can never directly observe what is "inside" those individuals. Using the data collected, the researcher can seek the underlying truth behind these events.

Research instruments are administered to the sample subjects to collect evidence or data. These tools must provide objective data for the interpretation of the results achieved in the study. Data collection is the accumulation of specific evidence that will enable the researcher to properly analyze the results of all activities under his research design and procedures. The main purpose of data collection is to verify the research hypotheses.

When you undertake a research study, you need to collect the required information. However, sometimes the information required is already available and only needs to be extracted. Based upon these two broad approaches to information gathering, data can be categorized as primary or secondary data.


Figure 4.6 Methods of Data Collection

The information collected from determining a community's health needs, assessing a certain social program, determining employees' job satisfaction in a certain organization, and determining the quality of service shown by workers are examples of data or information collected from primary sources. Examples of secondary sources are the following: using census data to acquire data on a population's age-sex structure, using hospital records to find whether there is a pattern in the mortality and morbidity of a certain community, and obtaining data from sources such as research articles, research/academic journals, magazines, and books.

Measurement of Data

As you go along solving your research problem, you will probably discover that you must

pin down your observations by measuring them in some way. In some cases, you will be able to

use one or more existing instruments—perhaps a published personality test to measure a

person’s tendency to be either shy or outgoing. In other situations, you may need to develop your

own measurement—perhaps a pencil-and-paper test to measure what students have learned

from a particular instructional unit.

Measurement instruments provide a basis on which the entire research effort rests. Measurement delimits the data of any phenomenon so that those data may be interpreted and compared against a particular standard. The variables that were discussed in the first module can undergo the process of quantification to yield data and scores; in this case, the concept of measurement is applied.

Measurement Scales / Types of Data

When we consider the statistical interpretation of data in later procedures, you may want

to refer to Table 4.1 in determining whether the type of measurement instrument you have used

will support the statistical operation you are contemplating.

Table 4.1 A summary of measurement scales

Nominal scale
  Characteristics: "A scale that 'measures' only in terms of names or designations of discrete units or categories"
  Statistical possibilities: "Enables one to determine the mode, percentage values, or chi-square"

Ordinal scale
  Characteristics: "A scale that measures in terms of such values as 'more' or 'less', 'larger' or 'smaller', but without specifying the size of the intervals"
  Statistical possibilities: "Enables one also to determine the median, percentile rank, and rank correlation"

Interval scale
  Characteristics: "A scale that measures in terms of equal intervals or degrees of difference, but with an arbitrarily established zero point that does not represent 'nothing' of something"
  Statistical possibilities: "Enables one also to determine the mean, standard deviation, and product moment correlation; allows one to conduct most inferential statistical analyses"

Ratio scale
  Characteristics: "A scale that measures in terms of equal intervals and an absolute zero point"
  Statistical possibilities: "Enables one also to determine the geometric mean and the percentage variation; allows one to conduct virtually any inferential statistical analysis"
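As a rough illustration of how Table 4.1 constrains your analysis, the Python sketch below picks a summary statistic based on the scale of measurement. The function, scale labels, and data are invented for demonstration and are not part of the module.

```python
# Illustrative sketch only: choosing a summary statistic according to the
# measurement scale in Table 4.1. The data and labels are invented.
import statistics

def summarize(values, scale):
    """Return an appropriate summary statistic for the given scale."""
    if scale == "nominal":
        return ("mode", statistics.mode(values))       # names/categories
    if scale == "ordinal":
        return ("median", statistics.median(values))   # ranked values
    if scale in ("interval", "ratio"):
        return ("mean", statistics.mean(values))       # equal intervals
    raise ValueError(f"Unknown scale: {scale}")

print(summarize(["agree", "agree", "disagree"], "nominal"))   # ('mode', 'agree')
print(summarize([1, 2, 2, 3, 5], "ordinal"))                  # ('median', 2)
print(summarize([36.5, 37.0, 36.8], "interval"))              # ('mean', 36.77)
```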

Lesson 3. Instrument Development

Research Instrument

Research instruments are the tools you use to collect data on the topic of interest to

transform it into useful information. There are several possible approaches to carry out your

research. Based on the research question, the researcher decides which approach to use. These include the survey, the case study, and the experiment. The survey is concerned with gathering data from a large number of people called respondents. The data gathered from these respondents usually focus on their views, ideas, and attitudes in relation to the research topic. The case study draws on a specific environment, such as a school, and explores the research topic in relation to that setting. This may involve obtaining the views of the teachers, children, and parents. Experimental research is concerned with establishing the effect of some action upon two groups or situations. All of these approaches to research draw upon a variety of instruments. The range of research instruments used for each of these strategies is as follows (Hinds, 2001).

1. Questionnaires

The main instrument for collecting data through a survey is the questionnaire. It is a set of standard questions, often called items, which follow a fixed scheme in order to collect individual data about one or more specific topics (Lavraska, 2008). Questionnaires seem simple, but one false step can lead to uninterpretable data or an abysmally low return rate (Leedy & Ormrod, 2013).


Type of Questions

The questions in a questionnaire will either be open or closed questions.

a. In an open-question type of questionnaire, the respondents are allowed to give their own comments, suggestions, or ideas about the question asked (see Figure 4.1).

b. In a closed-question type of questionnaire, the respondents are required to answer the question by picking one or more answers from a set of pre-defined choices (see Figure 4.2).

When to use a questionnaire?

▪ Information is sought from a large number of people over a relatively large geographical area.

▪ The information being sought is not complex.

▪ You are seeking information about facts, either in the present or in the recent

past.

▪ You want to study particular groups, or people in a particular problem area

because you want to generalize about them, make comparisons with other

groups or use their response and comparisons for development.

▪ You are certain that a questionnaire will produce the type of information you

need.

▪ You are certain that barriers such as language and literacy do not apply to your

target group.


Figure 4.1 Example of an Open Question

Figure 4.2 Example of a Closed Question

Constructing Questionnaires

Leedy & Ormrod (2010) prepared a list of guidelines for developing questionnaires as follows:

1. "Keep it short." Your questionnaire should be as brief as possible. As a general rule of thumb, a questionnaire should take no more than about twenty minutes to complete (Wilkinson & Birmingham, 2003).

2. “Keep the respondent’s task simple and concrete.” Remember that you are the one

asking for people's time. They are most likely to respond to a questionnaire if they perceive it to be quick and easy to complete.


3. “Provide straightforward, specific instructions.” Communicate exactly how you want the

respondents to respond. Do not assume that they are already familiar with Likert scales.

4. "Use simple, clear, unambiguous language." When writing questions for your survey, be exact about what you really want to find out. Avoid, as much as possible, obscure words or technical jargon, since some of your respondents may not understand the meaning of your choice of words.

5. “Give a rationale for any items whose purpose may be unclear.” Give them a reason to

want to do the favor.

6. “Check for unwarranted assumptions implicit in your question.” Consider a very simple

question: "How many cigarettes do you smoke each day?" It seems to be a clear and unambiguous question, especially if it is accompanied by certain choices so that all the respondents have to do is check one of them:

How many cigarettes do you smoke each day? Check one of the following:

__More than 25 __25-16 __15-11 __10-6 __5-1 __none

One underlying assumption in that question is that a person is likely to be

a smoker rather than nonsmoker, which is not necessarily the case. A second

assumption is that a person smokes the same number of cigarettes each day, but

for many smokers this assumption is not applicable. When the pressure is on for

some people, they may be chain smokers. But on weekends and holidays, they

may relax and smoke only one or two cigarettes a day or go without smoking at

all. This may confuse respondents with those kinds of smoking habits if they are to answer the question using the scale previously mentioned. Had the author of the question considered the assumptions on which the question was predicated, he or she might first have asked questions such as these:

Do you smoke cigarettes?

___Yes

___No (If you mark “no”, skip the next two questions).


Are your daily smoking habits reasonably consistent; that is, do you smoke

about the same number of cigarettes each day?

___Yes

___No (If you mark “no”, skip the next question)

7. “Word your question in ways that do not give clues about preferred or more desirable

responses.” This also means that you should not ask leading questions at all. Take

another question: “What strategies have you used to try to quit smoking?” By implying that

the respondent has, in fact, tried to quit, it may lead him or her to describe strategies that

have never been seriously tried at all.

8. "Determine in advance how you will code the responses." Plan ahead how you are going to record the participants' responses as numerical data so that you can treat them with statistical analysis (see the short coding sketch after this list). You can do this before and while writing your questionnaire.

9. "Check for consistency." Some questions in your questionnaire may touch on controversial and sensitive issues, and because of that, respondents may give answers that are acceptable and favorable in society rather than what they really think or perceive. To check the consistency of their answers, try to ask the same question in another part of the questionnaire but with different wording.

10. “Conduct one or more pilot tests to determine the validity of your questionnaires.”

Before using newly constructed questionnaires, professional and experienced

researchers conduct a series of test runs to check the validity and reliability of the questionnaires. This is done because researchers want the questions asked to be clear and to yield the valid and desired information.

11. “Scrutinize the almost-final product one more time to make sure it addresses your

needs." Check the quality of your questionnaire by going over it item by item, again and again, to obtain results that are precise, objective, and relevant, and to improve the probability of favorable reception and return.

12. “Make the questionnaire attractive and professional looking.” Your final instrument

should have clean lines, crystal-clear printing (and clearly no typographical errors). It

should not be colorful. It should ultimately communicate that the author of it is a careful,

well-organized professional.
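Here is the short coding sketch referred to in guideline 8: a minimal Python illustration of recording Likert-scale responses as numbers before analysis. The scale labels, codes, and responses are invented for demonstration.

```python
# Illustrative sketch only: coding Likert-scale responses as numbers before
# statistical analysis, as suggested in guideline 8. The labels are invented.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree"]
coded = [LIKERT_CODES[r] for r in responses]

print(coded)                      # [4, 5, 3, 4]
print(sum(coded) / len(coded))    # mean of the coded responses
```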


2. Interviews

According to Kumar (2011), interviewing is a commonly used method of collecting

information from people. Interviews are not an easy option. They are often likened to a

conversation between two people, though it requires orchestrating, directing, and controlling to

varying degrees (Wilkinson & Birmingham, 2003).

Interviews are classified into different categories according to the degree of flexibility as shown in

Figure 4.3.

Figure 4.3 Types of Interview (Kumar, 2011)

▪ “Unstructured Interview is a very flexible approach. The areas of interest are established

by the researcher but the discussion should be guided by the interviewee. The researcher

has the utmost freedom in terms of content and structure. However, unstructured

interviews can be difficult to plan (in terms of the time to be given to the event). They are

difficult to steer if the discussion gets away from the key subject matter, and they can

prove extremely difficult to analyze.”

When to use interviews?

▪ “In-depth information is required”

▪ “Where the subject matter is potentially sensitive”

▪ “The issues under examination would benefit from development or

clarification”


▪ “Structured Interview is where the researcher asks a predetermined set of questions, using

the same wording and order of questions as specified in the interview schedule. Some

see the structured interview as no more than a questionnaire that is completed face-to-

face. One of the main advantages of this interview is that it provides uniform information,

which assures the comparability of data. This also requires fewer interviewing skills than

does unstructured interviewing.”

3. Focus Groups

“Focus groups are formally organized, structured groups of individuals brought together

to discuss a series of topics during a specific period of time. They are typically composed of several

participants (usually 6 to 10 individuals) and a trained moderator. Focus groups are also typically

made up of individuals who share a particular characteristic, demographic, or interest that is

relevant to the topic being studied. Overall, focus groups should attempt to cover no more than

two to three major topics and should last no more than 1 ½ to 2 hours.”

When to use focus groups?

▪ “To gain information relating to how people think.”

▪ “To explain perceptions of an event, idea, or experience.”

▪ “When there is a desire for more understanding of the human experience.”

▪ “When seeking the perspective of the client.”

Regarding Scales of Measurement

There are other types of research tools used to collect data. For example, the observation technique is most frequently used to collect data at the nominal scale and also at the interval scale. Table 4.2 provides a classification of instruments in relation to the scale of measurement.


Table 4.2. A Classification of Scales of Measurement with Reference to the Traits

Trait | Tool | Scale of Measurement
1. Intelligence | Psychological Tests | Interval
2. Achievement | Educational Tests | Interval
3. Aptitude | Psychological Tests | Interval
4. Attitude | Scales | Ordinal
5. Interest | Inventories | Interval
6. Personality | Inventories | Interval
7. Adjustment | Inventories | Interval
8. Opinions or feelings | Questionnaire | Nominal

Lesson 4. Establishing Validity and Reliability

Regardless of the type of scale a measurement instrument involves, the instrument must

have both validity and reliability for its purpose. In your research report, you should provide

evidence that the instruments you use have a reasonable degree of validity and reliability.

However, validity and reliability take different forms, depending on the nature of the research

problem, the methodology being used to address the problem, and the nature of the data that are

collected.

The Concept of Validity

To examine the concept of validity, let us take a very simple example. Suppose that you have designed a study to determine the health needs of a community and decided to use an interview schedule for it. Most of the questions you constructed pertain to the community’s attitude towards the health services being provided to them. However, your aim was to determine the health needs of that community. Thus, the instrument you used is not measuring what it was designed to measure.


Certainly, no one would question the premise that a thermometer measures temperature, but to what extent does an intelligence test actually measure a person’s intelligence? How accurately do people’s annual incomes reflect their social class? Validity, therefore, is the extent to which the instrument measures what it intends to measure. Conceptually, validity seeks to answer the following question: “Does the instrument or measurement approach measure what it is supposed to measure? Validity is determined by considering the relationship between the test and some external, independent event.” The most common methods for demonstrating validity are as follows:

1. “Face Validity. It is the extent to which, on the surface, an instrument looks like it is measuring

a particular characteristic. Face validity is often useful for ensuring the cooperation of people who

are participating in a research study. But because it relies entirely on subjective judgment, it is

not a dependable indicator that an instrument is truly measuring what the researcher wants to

measure.”

2. Content Validity. “It is the extent to which a measurement instrument is a representative sample of the content being measured. The researcher defines the construct and then attempts to develop item content that will accurately capture it. For example, an instrument designed to measure anxiety should contain item content that reflects the construct of anxiety.”

3. Criterion Validity. “It is determined by the relationship between the measure and performance

on an outside criterion or measure. The outside criterion must be related to the construct of

interest, and it can be measured at the same time the measure is given. If the measure is

compared to an outside criterion that is measured at the same time, it is then referred to as

concurrent validity. If the measure is compared to an outside criterion that will be measured in the

future, it is then referred to as predictive validity. For example, a personality test designed to assess

a person’s shyness or outgoingness has criterion validity if its scores have a relationship with

other measures of a person’s general sociability.”

4. Construct Validity. “It is the extent to which an instrument measures a characteristic that cannot

be directly observed but is assumed to exist based on patterns in people’s behavior. Motivation,

creativity, racial prejudice, love—all of these are constructs, in that none of them can be directly


observed and measured. When researchers ask questions, they should obtain some kind of

evidence that their approach does, in fact, measure the construct in question.”

The Concept of Reliability

We use the word ‘reliable’ very often in our lives. When we say that a person is reliable,

we infer that he is dependable, consistent, predictable, stable and honest. The concept of

reliability in relation to a research instrument has a similar meaning. If a research tool is said to

be consistent and stable, hence predictable and accurate, it is said to be reliable. Imagine that

you are concerned about your growing waistline and decide to go on a diet. Every day, you put a tape measure around your waist and pull the two ends together snugly to get a measurement. But just how tight is “snug”? Quite possibly, the level of snugness might differ from one day to the next. In fact, you might even measure your waist with a different level of snugness from one minute to the next. To the extent that you are not measuring your waist in a consistent fashion, you have a problem with reliability, even though you use the same tape measure every day.

Therefore, reliability refers to the “consistency or dependability of a measurement

technique, and it is concerned with the consistency or stability of the score obtained from a

measure or assessment over time and across settings or conditions. If the measurement is

reliable, then there is less chance that the obtained score is due to random factors and

measurement error.”

Let us consider an example. In psychology, personality is a construct that is thought to be

relatively stable. If we were to assess a person’s personality traits using an objective,

standardized instrument, we would not expect the results to change significantly if we

administered the same instrument a week later. If the results did vary considerably, we might

wonder whether the instrument that we used was reliable. Reliability can be determined through

a variety of methods.

1. Interrater reliability. “It is the extent to which two or more individuals evaluating the same

product or performance give identical judgment. For example, assume you have two evaluators

assessing the acting-out behavior of a child. You measured “acting-out behavior” as the number

of times that the child refuses to do his or her schoolwork in class. The extent to which the

evaluators agree on whether or when the behavior occurs reflects this type of reliability.”


2. Test-retest reliability. “This is a commonly used method for establishing the reliability of a

research tool. This is where an instrument is administered once, and then again, under the same

or similar conditions. For example, administering the same measure of academic achievement on

two separate occasions 6 months apart is an example of this type of reliability. The main

disadvantage of this method is that a respondent may recall the responses that s/he gave the first

round.”

3. Parallel form of same test. “It is the extent to which two different versions of the same instrument

that were administered at different times yield similar results. The two forms must cover identical

content and have a similar difficulty level. The two test scores are then correlated.”

4. Internal consistency reliability. “It is the extent to which all of the items within a single instrument yield similar results. Even if you randomly select a few items or questions out of the total pool to test the reliability of an instrument, each segment of questions thus constructed will reflect reliability more or less to the same extent. It is based upon the logic that if each item or question is an indicator of some aspect of the phenomenon, each segment constructed will still reflect different aspects of the phenomenon even though it is based upon fewer items. It is often measured with Cronbach’s Alpha.”
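As a rough illustration of how internal consistency can be quantified, the sketch below computes Cronbach’s alpha for a small made-up matrix of item scores. The data, the function name, and the use of plain Python rather than a statistics package are all assumptions for illustration only.

from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows, each a list of item scores."""
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # transpose: one tuple of scores per item
    sum_item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

if __name__ == "__main__":
    # Five hypothetical respondents answering a four-item scale.
    sample = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
    ]
    print(round(cronbach_alpha(sample), 2))  # 0.95

Values closer to 1 suggest that the items behave consistently with one another, which is what the internal consistency method described above is meant to show.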

Validity and Reliability

Reliability is directly related to the validity of the measure. There are several important

principles. First, a test can be considered reliable, but not valid. Consider the National

Achievement Test (NAT) used as a predictor of success in college. It is a reliable test (high scores

relate to high GWA), though only a moderately valid indicator of success (due to lack of structured

environment—class attendance, parent-regulated study, and sleeping habits—each holistically

related to success).

Second, validity is more important than reliability. Using the above example, college

admissions may consider the NAT a reliable test, but not necessarily a valid measure of other

qualities colleges seek, such as leadership capability, altruism, and civic involvement. The

combination of these aspects, alongside the National Achievement Test (NAT), is a more valid


measure of the applicant’s potential for graduation, later social involvement, and generosity

(alumni giving) toward the alma mater.

Finally, the most useful instrument is both valid and reliable. Proponents of the National

Achievement Test argue that it is both. It is a moderately reliable predictor of future success and

a moderately valid measure of a student’s knowledge in Mathematics, Critical Reading, and

Writing.

Lesson 5. Description of a Sample

The Concept of Sampling

Let us take a very simple example to explain the concept of sampling. Suppose you want to estimate the average income of families living in a city. There are two ways of doing this. The first method is to contact all the families in the city, find out their incomes, add them up, and then divide this by the number of families. The second method is to select a few families from the city, ask them their incomes, add them up, and then divide by the number of families selected. From this, you can make an estimate of the average income of the families living in the city. Imagine the amount of effort and resources required to visit every household when you can select a few families and generalize from them.

Sampling, therefore, is the “process of obtaining a few (aka sample) from a bigger group

(aka population) to become the basis for estimating or predicting the prevalence of an unknown

piece of information regarding the bigger group.”

Sample

“A sample is a selection which is taken from a group; it is usually considered to be

representative of that group. As a result, the findings from the sample can be generalized

back to the group.”

Population

“A population is a group who shares the same characteristics. For example, a population

could be members of a club, nurses, students or children.”


Randomization

Randomization is a “method of sampling in which each of the population has the equal

chance or probability of selection of the individuals for constituting a sample. The choice of one

individual is in no way tied to that of another. Randomization can be used when selecting the participants

for the study and for assigning those participants to various conditions within the study. These

two approaches are referred to as random selection and random assignment.”

a. Random Selection. “It is a process of selecting participants at random from a defined

population of interest. Random selection helps control for extraneous influences because

it minimizes the impact of selection biases. In other words, using random selection would

ensure that the sample was representative of the population as a whole.”

Figure 4.4. A graphic example of random selection

b. Random assignment. “This is concerned with how participants are assigned to

experimental and control conditions within the research study. The basic principle of

random assignment is that all participants have an equal likelihood of being assigned to

any of the experimental or control groups.”
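The sketch below illustrates both approaches with Python’s standard random module; the participant names, the sample size, and the two group labels are made up for illustration.

import random

# Hypothetical sampling frame.
population = ["Ana", "Ben", "Carla", "Dan", "Ella", "Fe", "Gio", "Hana"]

# Random selection: draw a sample so every member has an equal chance.
sample = random.sample(population, k=4)

# Random assignment: shuffle the selected participants, then split them
# evenly between the experimental and control conditions.
random.shuffle(sample)
groups = {
    "experimental": sample[: len(sample) // 2],
    "control": sample[len(sample) // 2:],
}
print(groups)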


Methods of Randomization

The following are the main methods of randomization:

a. Lottery method of randomization. This is a simple technique wherein the researcher picks at random from the pool of the population.

b. Tossing a coin. A decision is assigned at random by throwing a coin in the air and seeing which side it lands on.

c. Throwing a die. This involves throwing a die and seeing which side lands face up.

d. Blindfolded method. This procedure requires that only the researcher be kept “blind” or “naïve” regarding which treatment or control condition participants are in.

e. Double-blind technique. The most powerful method for controlling researcher expectancy and related bias, this procedure requires that neither the participants nor the researcher know which experimental or control condition research participants are assigned to.

Sampling Design

Different sampling designs may be more or less appropriate in different situations. Figure 4.5 shows the types of sampling designs.


Figure 4.5. Types of Sampling Design

A. Probability Sampling

“In probability sampling, every part of the population has the potential to be represented

in the sample. The sample is chosen from the overall population by random selection—that is, it

is chosen in such a way that each member of the population has an equal chance of being

selected. When such a random sample is selected, the researcher can assume that the

characteristics of the sample approximate the characteristics of the total population.”

Random sampling can be selected using two different systems—sampling without

replacement and sampling with replacement. In sampling without replacement, each sample unit

of the population has only one chance to be selected in the sample. For example, the researcher

draws a simple random sample such that no unit can occur more than once in the sample

(Lavraskas, 2008). If the unit can be chosen again at another draw, then it is sampling with

replacement.

1. Simple Random Sampling. “Simple random sampling is exactly the process just described. Every member of the population has an equal chance of being selected. Such an approach is easy when the population is small and all of its members are known. To illustrate,


suppose you want to sample a class. There are 80 students in the class, and so the first step is

to identify each student by a number from 1-80. Suppose you decide to select a sample of 20

using this technique. Use the lottery method or any randomization method to select the 20

students. These 20 students become the basis of your enquiry.”

2. Stratified Random Sampling. “Think of grade 4, grade 5, and grade 6 in a public school.

This is a stratified population, which means that it has different groups called “strata” (singular:

stratum) that share distinct characteristics from each other. The population within a stratum must

be homogeneous. If we were to sample a population of fourth-, fifth-, and sixth-graders in

a particular school, we would assume that the three strata are roughly equal in size (i.e., there are

similar numbers of children at each grade level), and so we would take equal samples from each

of the three grades.” The sampling procedure is shown in Figure 4.6.

Figure 4.6. Stratified Random Sampling Procedure

2.a. Proportionate Stratified Sampling. “In a simple stratified random sampling, all strata of the

population are essentially equal in size. But it is different in proportionate stratified sampling such

that the number of elements from each stratum in relation to its proportion in the total population

is selected. To illustrate, imagine a survey situation where a local researcher wanted to sample


people of different religions in a community. There are 1,000 Jewish people, 2,000 Catholics, and

3,000 Protestants. In this situation, the researcher chooses his sample in accordance with the

proportions of each religious group. For every Jewish person, there should be 2 Catholics, and 3

Protestants.”

2b. Disproportionate Stratified Sampling. “In a disproportional stratified sample, the size of each

stratum is not proportional to its size in the population. The researcher may decide to sample half

of married people within female graduate students and one-third of married people within male

graduate students.”

3. Cluster Sampling. “Sometimes the population of interest is spread out over a large area. It may

not be feasible to make up a list of every person living within the area and, from the list, select a

sample for study through normal randomization procedures. Cluster sampling is based on the

ability of the researcher to divide the sampling population into groups (based upon visible or easily

identifiable characteristics), called clusters, and then select elements within each cluster, using

Simple Random Sampling technique. It is important that the clusters be as similar to one another

as possible, with each cluster containing an equally heterogeneous mix of individuals.”
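As a rough sketch of the probability designs above, the snippet below draws a simple random sample from a class of 80 and a proportionate stratified sample using the religious-group proportions from the example; the identifiers and the overall sample size of 600 are hypothetical.

import random

# Simple random sampling: 20 students drawn from a class of 80.
class_list = [f"Student {i}" for i in range(1, 81)]
simple_sample = random.sample(class_list, k=20)

# Proportionate stratified sampling: each stratum contributes in proportion
# to its share of the population (1,000 : 2,000 : 3,000 as in the lesson).
strata = {
    "Jewish": [f"J{i}" for i in range(1000)],
    "Catholic": [f"C{i}" for i in range(2000)],
    "Protestant": [f"P{i}" for i in range(3000)],
}
total = sum(len(members) for members in strata.values())
sample_size = 600  # hypothetical overall sample size

stratified_sample = []
for name, members in strata.items():
    n_stratum = round(sample_size * len(members) / total)
    stratified_sample.extend(random.sample(members, k=n_stratum))

print(len(simple_sample), len(stratified_sample))  # 20 600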

B. Non-probability Sampling

In non-probability sampling designs, the researcher has no way of predicting or

guaranteeing that each element of the population will be represented in the sample. Some

members of the population have little or no chance of being sampled. Non-probability sampling

designs are used when the number of elements in a population is either unknown or cannot be

individually identified.

1. Quota Sampling. “One consideration of quota sampling is the researcher’s ease of access to

the sample population. For example, suppose you are a reporter for a television station. At noon, you position yourself with a microphone and television camera in the middle of a street in a particular city. As people pass, you interview them. The fact that people in the two categories may

come in clusters of two, three, or four is not a problem. All you need are the opinions of 20 people

from each category. Quota sampling is the least expensive way of selecting a sample; you do not

need any information, such as sampling frame, the total number of elements, their location, or


other information about the sampling population. It guarantees the inclusion of the type of the

people you need. However, the findings using this design cannot be generalized to the total

sampling population.”

2. Accidental Sampling or Convenience Sampling. “This sampling technique is also based upon

convenience in assessing the sampling population. It takes people or other units that are readily

available and you stop collecting data once you reach the required number of respondents you

decided to have in your sample. To illustrate, suppose you own a small restaurant and want to

sample the opinions of your patrons on the quality of food and service at your restaurant. You

open for breakfast at 6:00 am, and on five consecutive weekdays you question the first 40 patrons

who arrive. Customers who have on one occasion expressed an opinion are eliminated on

subsequent arrivals. The opinions you eventually obtain are from 36 men and 4 women. The sample is

heavily in favor of men, perhaps because the people who arrive at 6:00 am are likely to be in

certain occupations that are predominantly male. The data from this convenience sample give

you the thoughts of robust, hardy men about your breakfast menu. Yet such information may be

all you need for your purpose.”

3. Judgmental or Purposive Sampling. “The primary consideration in purposive sampling is your

judgment as to who can provide the best information to achieve the objectives of your study. You

as a researcher only go to those people who in your opinion are likely to have the required

information and be willing to share it with you. Pollsters who forecast elections frequently use

purposive sampling. They may choose a combination of voting districts that, in past elections, has

been quite useful in predicting the final outcomes.”

4. Snowball Sampling. “It is the process of selecting a sample using networks. To start with, a few

individuals in a group are selected and the required information is collected from them. They are

then asked to identify other people in the group or organization, and the people selected by them

become part of the sample. This process is continued until the required number or a saturation

point has been reached, in terms of the information being sought.”

C. Systematic Sampling

“Systematic sampling has been classified as a mixed design because it has the characteristics of

both random and non-random sampling designs. Systematic sampling involves selecting


individuals according to a predetermined sequence. The sequence must originate by chance. For

instance, we might create a randomly scrambled list of units that lie within the population of

interest and then select every 10th unit on the list.”
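Below is a minimal sketch of that procedure, assuming a hypothetical list of 100 units and a sampling interval of 10; the random starting point keeps the sequence originating by chance, as the text requires.

import random

units = [f"Unit {i}" for i in range(1, 101)]  # hypothetical sampling frame
k = 10                                        # predetermined sampling interval
start = random.randrange(k)                   # chance-based starting point
systematic_sample = units[start::k]           # every 10th unit thereafter
print(len(systematic_sample))                 # 10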

Sample Size Determination

Researchers often ask, “How big a sample should I select?” A basic rule in sampling is: “The

larger the sample, the better.” But such a generalized rule is not very helpful to a researcher who

must make a practical decision about a specific research situation. Gay, Mills, and Airasian (2009)

have offered the following guidelines for selecting a sample size as restated by Leedy.

▪ “For smaller populations, say, N = 100 or fewer, there is little point in sampling. Just survey the entire population.”

▪ “If the population size is around 500, 50% should be sampled.”

▪ “If the population is around 1,500, 20% should be sampled.”

To some extent, the size of an adequate sample depends on how homogeneous or

heterogeneous the population is. If the population is markedly heterogeneous, a larger sample

will be necessary than if the population is fairly homogeneous. Statisticians have developed

formulas for determining the desired sample size for a given population. In determining the size

of your sample, the following should be considered:

▪ “At what level of confidence do you want to test your results, findings or

hypotheses?”

▪ “With what degree of accuracy do you wish to estimate the population

parameters?”

▪ “What is the estimated level of variation (standard deviation), with respect to the

main variable you are studying in the study population?”

Answering these questions is necessary regardless of whether you intend to determine the

sample size yourself or have an expert do it for you.
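The lesson points to such formulas without prescribing one. As one common example, offered purely as an assumption on my part rather than the authors’ stated method, the sketch below applies Cochran’s formula for estimating a proportion, which takes exactly the three inputs listed above: the confidence level, the desired accuracy (margin of error), and the expected variability.

import math

def cochran_sample_size(z, p, e, population=None):
    """n0 = z^2 * p * (1 - p) / e^2, with an optional finite-population correction.

    z: z-score for the confidence level (1.96 for 95%)
    p: expected proportion (0.5 gives the most conservative size)
    e: margin of error (e.g., 0.05)
    population: population size N, if the finite correction is wanted
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

if __name__ == "__main__":
    # 95% confidence, plus or minus 5% error, maximum variability, N = 1,500.
    print(cochran_sample_size(z=1.96, p=0.5, e=0.05, population=1500))  # 306

The result is broadly in line with the rule of thumb above that a population of around 1,500 calls for roughly 20% to be sampled.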


Lesson 6. Ethical Considerations in Data Collection

All professions are guided by a code of ethics that has evolved over the years to accommodate

the changes happening over time in accordance with society’s needs and expectations. There

are certain behaviors in research—such as causing harm to individuals, breaching confidentiality,

using information improperly and introducing bias—that are considered unethical in any

profession.

Ethical Issues to consider concerning Research Participants

1. Seeking consent. “In every discipline it is considered unethical to collect information without

the knowledge of participants, and their expressed willingness and informed consent. Informed

consent implies that subjects are made adequately aware of the type of information you want from

them, why the information is being sought, what purpose it will be put to, how they are expected to

participate in the study, and how it will directly affect them. It is important that the consent is

voluntary and without pressure of any kind.”

2. Providing incentives. “Some researchers provide incentives to participants for their participation

in a study, feeling this to be quite proper as participants are giving their time. Others think that the

offering of inducements is unethical. Most of the time, people do not participate in a study because

of incentives, but because they realize the importance of the study. Therefore, giving a small gift

after they participated, as a token of appreciation, is in the author’s opinion not unethical.

However, giving a present before collecting the data is unethical.”

3. Seeking sensitive information. “Certain types of information can be regarded as sensitive or

confidential by some people and thus an invasion of privacy. Asking for this information may upset

or embarrass a respondent. However, if you do not ask for the information, it may not be possible

to pursue your interest in the area and contribute to the existing body of knowledge. The dilemma

you face as a researcher is whether you should ask sensitive and intrusive questions.”

4. The possibility of causing harm to participants. “Harm includes not only hazardous medical

experiments but also any social research that might involve such things as discomfort, anxiety,

harassment, invasion of privacy, or demeaning or dehumanising procedures.”


5. Maintaining Confidentiality. “Sharing information about a respondent with others for purposes

other than research is unethical. It is unethical to identify an individual respondent and the

information provided by him/her.”

Ethical issues to consider relating to the Researcher

1. Avoiding bias. “Bias on the part of the researcher is unethical. Bias is different from subjectivity

as bias is a deliberate attempt either to hide what you have found in your study or to highlight

something disproportionately to its true existence. It is the bias that is unethical and not the

subjectivity.”

2. Provision or deprivation of a treatment. “It is usually accepted that deprivation of a trial treatment

to a control group is not unethical as, in the absence of this, a study can never establish the

effectiveness of a treatment which may deprive many others of its possible benefits. This

deprivation of the possible benefits, on the other hand, is considered by some as unethical.”

3. Using inappropriate research methodology. “It is the researcher’s obligation to use appropriate

methodology. It is unethical to use a method inappropriate to prove or disprove something that

you want to, such as by selecting a highly-biased sample, using an invalid instrument or by

drawing wrong conclusions.”

4. Incorrect reporting. “To report the findings in a way that changes them to serve your own or

someone else’s interest is unethical.”

5. Inappropriate use of the information. “The use of information in a way that directly or indirectly

affects respondents adversely is unethical.”

Lesson 7. Data Analysis

All research requires logical reasoning. Quantitative researchers tend to rely more heavily

on deductive reasoning, beginning with certain premises (e.g., hypotheses, theories) and then

drawing logical conclusions from them. They also try to maintain objectivity in their data analysis,

conducting predetermined statistical procedures and using objective criteria to evaluate the


outcomes of those procedures. By the time you reach this part, you should have your data neatly

collected and piled up waiting for analysis. The role of analysis is to bring data together in a

meaningful way and enable the researchers to interpret or make sense of it.

It is critical to remember that your methods of analysis must align with your chosen

research methodology (Bloomberg & Volpe, 2018). In most types of research studies, the process

of data analysis involves following three steps: (1) preparing the data for analysis, (2) analyzing

the data, and (3) interpreting the data (i.e., testing the research hypotheses and drawing valid inferences).

Since research data can be seen as the fruits of researchers’ labor, the data will serve as

a clue to answer the researchers’ questions given that the study has been conducted in a

scientifically rigorous manner. To unlock these clues, researchers typically rely on a variety of

statistical procedures. These procedures allow researchers to describe groups of individuals and

events, examine the relationships between different variables, measure differences between

groups and conditions, and examine and generalize results obtained from a sample back to the

population from which the sample was drawn.

There are two major areas of statistical procedures. The first one is called descriptive

statistics and the second one is called inferential statistics.

Descriptive Statistics

Descriptive statistics are used to describe the data collected in research studies and to accurately

characterize the variables under observation within a specific sample. It is frequently used to

summarize a study sample prior to analysing a study’s primary hypotheses. This provides

information about the overall representativeness of the sample, as well as the information

necessary for other researchers to replicate the study, if they so desire.

1. Central Tendency

The central tendency of a distribution is a number that represents the typical or most

representative value in the distribution. Measures of central tendency provide researchers with a


way of characterizing a data set with a single value. The most widely used measures of central

tendency are mean, median, and mode.

▪ Mean

“It is commonly known as the average. The mean is quite simple to calculate. Simply add

all the numbers in the data set and then divide by the total number of entries. The result

is the mean of the distribution. For example, let us say that we are trying to describe the

mean age of a group of 10 study participants with the following ages:”

34 27 23 23 26 27 28 23 32 41

The sum of the ages of the 10 participants is 284. Therefore, the mean age of the sample is 284/10 = 28.40.

The mean is quite accurate when the data set is normally distributed. Unfortunately, the

mean is strongly influenced by extreme values or outliers. Therefore, it may be misleading

in data sets in which the values are not normally distributed, or where there are extreme

values at one end of the data set.

▪ Median

“It is the middle value in a distribution of values. To calculate the median, simply sort all of

the values from lowest to highest and then identify the middle value. The middle value is

the median. For example, sorting the set of ages in the previous example would result in

the following:”

23 23 23 26 27 27 28 32 34 41

In this instance, the two middle values are both 27, so the median is 27. If the two middle values were different, you would simply take their average.

▪ Mode

“It is the value that occurs most frequently in a set of values. To find the mode, simply

count the number of times (frequency) that each value appears in a data set. The value


that occurs most frequently is the mode. For example, we could easily see from the

previous example that the most prevalent age in the sample is 23, which is therefore the

mode.”

23 23 23 26 27 27 28 32 34 41
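All three measures can be computed directly with Python’s statistics module; the sketch below simply reuses the ten ages from the example.

from statistics import mean, median, mode

ages = [34, 27, 23, 23, 26, 27, 28, 23, 32, 41]

print(mean(ages))    # 28.4
print(median(ages))  # 27.0
print(mode(ages))    # 23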

2. Dispersion

“Measures of central tendency, like the mean, describe the most likely value, but they do not tell us anything about how the values vary. For example, two sets of data can have the same mean, but they may vary greatly in the way that their values are spread out. Another way of describing the shape of a distribution is to examine this spread. The spread is more technically referred to as the dispersion. The most widely used measures of dispersion are range, variance, and standard deviation.”

▪ Range

“The range of a distribution tells us the smallest possible interval in which all the data in a

certain sample will fall. Quite simply, it is the difference between the highest and lowest

values in a distribution. Using our previous example, the range of ages for the study

sample would be:”

41 - 23 = 18

▪ Variance

“Variance gives us a sense of how closely concentrated a set of values is around its

average value, and is calculated in the following manner:

1. Subtract the mean of the distribution from each of the values.

2. Square each result.

3. Add all the squared results.

4. Divide the result by the number of values minus 1.”

The variance of a distribution gives us an average of how far, in squared units, the values

in a distribution are from the mean, which allows us to see how closely concentrated the scores

in a distribution are.


▪ Standard Deviation

“Basically, the standard deviation is the square root of the variance. The variance and the

standard deviation of distributions are the basis for calculating many other statistics that

estimate the association and differences between variables.”
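Continuing with the same ten ages, the short sketch below computes the range, the sample variance, and the standard deviation with the statistics module.

from statistics import stdev, variance

ages = [34, 27, 23, 23, 26, 27, 28, 23, 32, 41]

print(max(ages) - min(ages))     # range: 18
print(round(variance(ages), 2))  # sample variance: 33.38
print(round(stdev(ages), 2))     # standard deviation: 5.78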

3. Measures of Association

“In addition to describing the shape of variable distributions, another important task of descriptive

statistics is to examine and describe the relationships of associations between variables.

Correlations are perhaps the most basic and most useful measure of association between two or

more variables. Expressed in a single number called a correlation coefficient (r), correlations

provide information about the direction of the relationship and the intensity of the relationship.

Furthermore, tests of correlations will provide information on whether the correlation is statistically

significant. There is a wide variety of correlations that, for the most part, are determined by the

type of data being analyzed. One of the most commonly used correlations is the Pearson product-

moment correlation, often referred to as the Pearson r.”
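The Pearson r can be computed by hand from each variable’s deviations around its mean; the sketch below does so for two short, made-up lists of paired scores (the variable names are hypothetical).

from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    numerator = sum(a * b for a, b in zip(dx, dy))
    denominator = sqrt(sum(a * a for a in dx)) * sqrt(sum(b * b for b in dy))
    return numerator / denominator

if __name__ == "__main__":
    hours_studied = [1, 2, 3, 4, 5]
    test_scores = [55, 61, 68, 72, 80]
    print(round(pearson_r(hours_studied, test_scores), 3))  # 0.996, a strong positive r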

Inferential Statistics

In addition to describing and examining associations of variables within our data sets, we often

conduct research to answer questions about the greater population. Because it would not be

feasible to collect data from the entire population, researchers conduct research with

representative samples (as mentioned in the previous lessons) in an attempt to draw inferences

about the population from which the samples were drawn. The analyses used to examine these

inferences are appropriately referred to as inferential statistics.

Inferential statistics help us to draw conclusions beyond our immediate samples and data. For

example, inferential statistics could be used to infer, from a relatively small sample of

employees, what the job satisfaction is likely to be for a company’s entire workforce. The following is a basic overview of several of the most widely used inferential statistical procedures: the t-test, analysis of variance (ANOVA), chi-square, and regression.


▪ T-test

“T-tests are used to test mean differences between two groups. In general, they require

a single dichotomous independent variable (e.g., an experimental and control group) and

a single continuous dependent variable. For example, t-tests can be used to test for

mean differences between experimental and control groups in a randomized experiment,

or to test for mean differences between two groups in a nonexperimental context (such

as whether cocaine and heroin users report more criminal activity). When a researcher

wishes to compare the average (mean) performance between two groups on a

continuous variable, he should consider the t-test.”

▪ Analysis of Variance (ANOVA)

“ANOVA is also a test of mean comparisons just like T-test. In fact, one of the only

differences between a t-test and an ANOVA is that the ANOVA can compare means

across more than two groups or conditions.”

▪ Chi-square (χ²)

“The inferential statistics that we have discussed so far (i.e., t-tests and ANOVA) are

appropriate only when the dependent variables being measured are continuous (interval

or ratio). In contrast, the chi-square allows us to test hypotheses using nominal or ordinal

data. Similarly, chi-square analysis is often used to examine between-group differences

on categorical variables such as gender, marital status, or grade level.”

▪ Regression

“Linear regression is a method of estimating or predicting a value on some dependent

variable given the values of one or more independent variables. Like correlations,

statistical regression examines the association or relationship between variables. Unlike

with correlations, however, the primary purpose of regression is prediction.”
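As an illustration of how these procedures are typically run in practice, the sketch below uses SciPy on small made-up data sets; the library choice and all of the data are assumptions for illustration, since the module does not prescribe any particular software.

# Assumes SciPy is installed (pip install scipy). All data are made up.
from scipy import stats

# t-test: mean difference between an experimental and a control group.
experimental = [12, 15, 14, 16, 13, 15]
control = [10, 11, 12, 10, 13, 11]
print(stats.ttest_ind(experimental, control))

# One-way ANOVA: mean differences across more than two groups.
group_a, group_b, group_c = [5, 6, 7], [8, 9, 7], [4, 5, 4]
print(stats.f_oneway(group_a, group_b, group_c))

# Chi-square: association between two categorical variables,
# from a contingency table (rows: gender, columns: passed / failed).
table = [[30, 10], [25, 15]]
print(stats.chi2_contingency(table))

# Simple linear regression: predicting test scores from hours studied.
hours = [1, 2, 3, 4, 5]
scores = [55, 61, 68, 72, 80]
print(stats.linregress(hours, scores))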


Assessment Tasks

TASK NO. 1 (WRITTEN WORKS)

Instructions. Complete the following statements by filling the blanks with

appropriate answers. (50 points)

1. The word hypothesis consists of two words: hypo which means

___________________ and thesis which means _______________. (2pts)

2. One salient feature of hypotheses is that they must make a ________________.

3. ____________________ is a form of statistical hypothesis that is testable.

4. The main purpose of data collection is to verify ______________________.

5. ________________________ is an example of primary source.

6. ________________________ is a scale that measures in terms of names or

designations of discrete units.

7. Research instruments are the tools you use to _________________ on the topic

of interest to transform it into useful information.

8. Questionnaires can be used when certain barriers such as ________________

and literacy do not apply to your target group.

9. __________________ require the respondents to choose one or more from a

pre-defined category of ‘answers’ to the questions.

10. Interviews can be used when _______________________________.

11. In an unstructured interview, the researcher has the _____________________ in terms of content and structure.

12. Structured interview is where the researcher asks a ____________________

set of questions.



13. Focus groups can be used when _______________________________________.

14. Validity is the extent to which the __________________ measures what it intends to

measure.

15. Face validity, _____________________, ___________________, ____________________, are

the most common methods for demonstrating validity. (3pts)

16. ________________, ___________________, _________________, _______________, are the

methods that can determine reliability. (4pts)

17. The process of selecting a representative sample from a target population is called

__________________.

18. _____________________ is considered to be a representative of a group.

19. A population is a group who shares the same ________________.

20. Randomization is a method of sampling in which each of the population has the

_____________ chance or probability of selection of the individuals for constituting a sample.

21. _____________ and ___________ are the two approaches in randomization. (2pts)

22. In _______________________, each sample unit of the population has only one chance to

be selected in the sample.

23. The groups that share distinct characteristics from each other is called

_________________.

24. The division of the sampling population into groups based upon visible or easily

identifiable characteristics is called _____________________.

25. In ______________________, the researcher has no way of predicting or guaranteeing that

each element of the population will be represented in the sample.

26. Quota sampling is the ____________ expensive way of selecting a sample.



27. __________________ is a sampling technique that takes people or other units that are

readily available.

28. The primary consideration in purposive sampling is your judgment as to who can

provide the _______________ to achieve the objectives of your study.

29. Snowball sampling is the process of selecting a sample using _________________.

30. The _______________ the sample, the better.

31. __________________ implies that subjects are made adequately aware of the type of

information you want from them, why the information is being sought, what purpose it will be put to, how they are expected to participate in the study, and how it will affect them.

32. Descriptive Statistics are used to _____________ the data collected in research studies

and to accurately characterize the variables under observation within a specific sample.

33. The most widely used measures of central tendency are _______, ________, and

___________.

34. The most widely used measures of dispersion are ___________, ___________, and

_____________. (3pts)

35. _______________ help us to draw conclusions beyond our immediate samples and data.

36. The most widely used inferential statistical procedures are _________, ____________,

_____________, and _____________. (4pts)


TASK NO. 2

Instructions: Provide a hypothesis for each of the following topics in the box provided.

1. About Climate Change

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

2. About Public Transportation

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

3. About Education

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

4. About your assigned topic

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________


TASK NO. 3.

Instructions: Teacher Sarah is in-charge of preparing an annual report on the school’s

performance. She collected pertinent data on the variables below. Identify whether the

variables are qualitative or quantitative. Moreover, identify the measurement scale of the

data.

Data | Type | Scale

1. Nutritional status of a

student (underweight, normal,

overweight, obese)

2. Percentage of students who

leave school during the year of

any reason

3. Rank of school in the region

based on students’

performance on the National

Achievement Test

4. Town or city of students

enrolled in the current school

year.

5. Total number of students

enrolled in the current school

year.

6. Total number of textbooks

in the school that are not

already in good condition.

7. Whether or not the student

participated in the district,

division, regional or national

athletic meets.


TASK NO. 3

Instructions: In each of the scenarios in this exercise, a researcher encounters a

measurement problem. Some of the scenarios reflect a problem with the validity of a

measure. Others reflect a problem with a measure’s reliability—a problem that indirectly also

affects the measure’s validity. For each scenario, choose the most obvious problem from

among the following alternatives. Provide justification for your choice in the box provided.

∘ Face validity ∘ Interrater reliability

∘ Content validity ∘ Test-retest reliability

∘ Criterion validity ∘ Parallel form of same test

∘ Construct validity ∘ Internal consistency reliability

___________________1. “After using two different methods for teaching basic tennis skills to

non-tennis-playing adults, a researcher assesses the effectiveness of the two methods by

administering a true-false test regarding the rules of the game.”

___________________2. “A researcher writes 120 multiple-choice questions to assess middle

school students’ general knowledge of basic world geography. To minimize the likelihood that

students will cheat on the test by copying one another’s answers, the researcher divides the

questions into three different sets to create three 40-item tests. In collecting data, the

researcher distributes the three tests randomly to students in any single classroom. After

administering the tests to students at many different middle schools, the researcher

computes the students’ test scores and discovers that students who answered one particular set of 40 questions scored an average of 3 points higher than students who answered

either of the other two 40-question sets.”


_________________3. “In order to determine what kinds of situations provoke aggression in

gorillas, two researchers observe mountain gorillas in the Virunga Mountains of northwestern

Rwanda. As they watch a particular gorilla family and take notes about family members’

behaviors, the researchers often disagree about whether certain behaviors constitute “aggression” or, instead, reflect more peaceful “assertiveness.””

__________________4. “A researcher uses a blood test to determine people’s overall energy

level after drinking or not drinking a can of a high-caffeine cola drink. Unfortunately, when

two research assistants independently rate people’s behaviors for energy level for a 4-hour

period after drinking the cola, their results do not seem to have any correlation with the

blood-test results.”

__________________5. “In a two-week period during the semester, a researcher gains entry

into several college classrooms in order to administer a short survey regarding college

students’ beliefs about climate change. The survey consists of 20 statements about climate change to which students respond; students voluntarily put their names on their surveys. Thanks to the names on many survey forms, the researcher discovers that a few students were in two of the classes surveyed and thus completed the survey twice. Curiously, however, these students sometimes gave different responses to particular statements on the two different occasions, and hence their overall scores were also different.”


_________________7. “A researcher develops and uses a questionnaire intended to measure

the extent to which college students display tolerance toward a particular religious group. However, several experts in the researcher’s field of study suggest that the questionnaire

measures not how tolerant students actually are, but what students would like to believe

about their tolerance for people of a particular religion.”

_________________8. “Students in an introductory college psychology course must satisfy

their “research methods” requirement in one of several ways; one option is to participate in a

research study called “Intelligence and Motor Skill Learning.” When students choosing this

option report to the laboratory, one of their tasks is to respond as quickly as possible to a

series of simple computer-generated questions. Afterward, the researcher debriefs the students about the nature of the study and tells them that the reaction-time measure was designed to

be a simple measure of intelligence. Some of the students object, saying, “That’s not a

measure of intelligence! Intelligence isn’t how quickly you can do something, it’s how well

you can do it.””


TASK NO. 4

Instructions: Identify which among the sampling designs is the most appropriate to use in the

given scenarios. Write your answer on the space provided.

_____________________1. In evaluating certain teacher training institutes during the summer

of 1992, the researcher sampled, from the 300 attendees of the seminar, those whom he happened to recognize.

______________________2. To determine the psychological effects of stress on gender, the

researcher decided to group the population of Poblacion II into male and female.

______________________3. The researcher wanted to investigate the attitude of college

students in the Philippines towards problems in higher education in the country. Higher

education institutions are spread out in every region of the country. In addition, there are

different types of institutions that exist. Within that institution, various courses are being

offered.

_____________________4. The researcher selected a sample of 20 male students in order to

find out the average age of the male students in the class. He decided to stand at the entrance of the room, and whenever a male student enters, he asks for his age.

____________________5. A researcher wanted to know if his constructed learning material for

Trigonometry is effective to apply in a classroom. Instead of trying it out to students, he

decided to ask several Trigonometry teachers to evaluate his material.

____________________6. The researcher wanted to conduct a study involving previously

illegal immigrants who were never caught. To gather respondents, he asked his friend who

happened to know illegal immigrants.

____________________7. There are 50 students in a class and the researcher wants to select

10 students. After calculating the width of the interval, he selected the third element in every group of five students.


TASK NO. 5

Instructions: Construct your own research questionnaire for your assigned topic. Make sure

to follow the guidelines discussed in the lesson.


TASK NO. 6

Instructions: Write your tentative research methodology on your assigned topic.

RESEARCH METHODOLOGY


TASK NO. 7

Instructions: Write your tentative research methodology for your assigned topic.


Summary

▪ Hypothesis is an educated and testable guess about the answer to your research

question.

▪ Three kinds and forms of hypothesis are the declarative, the directional, and the non-directional.

▪ The hypothesis being tested by a researcher is largely dependent on the type of

research design being used.

▪ Research seeks to discover underlying truths through data.

▪ Data can be categorized into primary and secondary data.

▪ Measurement limits the data of any phenomenon so that those data may be interpreted

and compared to a particular standard.

▪ Research instruments are administered to the sample subjects to collect evidence or data.

▪ Research instruments are the tools you use to collect data on the topic of interest to

transform it into useful information.

▪ The main instrument for collecting data through a survey is the questionnaire.

▪ Questionnaire is a set of standard questions, often called items, which follow a fixed

scheme in order to collect individual data about one or more specific topics.

▪ Open questions allow the respondent to insert his or her views, ideas, or suggestions

about the question posed.

▪ Closed questions require the respondents to choose one or more from a pre-defined

category of answers to the question.

▪ Interviews are often likened to a conversation between two people, though they require

orchestrating, directing, and controlling to varying degrees.

▪ Focus groups are formally organized, structured groups of individuals brought together

to discuss a series of topics during a specific period of time.

▪ Validity is the extent to which the instrument measures what it intends to measure.

▪ Face validity is the extent to which an instrument looks like it is measuring a particular

characteristic.


▪ Content validity is the extent to which a measurement instrument is a representative

sample of the content being measured.

▪ Criterion validity is determined by the relationship between the measure and

performance on an outside criterion or measure.

▪ Construct validity is the extent to which an instrument measures a characteristic that

cannot be directly observed but is assumed to exist based on patterns in people’s behavior.

▪ Reliability refers to the consistency or dependability of a measurement technique, and it

is concerned with the consistency or stability of the score obtained from a measure or

assessment over time and across setting or conditions.

▪ Interrater reliability is the extent to which two or more individuals evaluating the same

product or performance give identical judgment.

▪ Test-retest reliability is where an instrument is administered on two separate occasions.

▪ Parallel form of same test is the extent to which two different versions of the same

instrument that were administered at different times yield similar results.

▪ Internal consistency reliability is the extent to which all of the items within a single

instrument yield similar result.

▪ A test can be reliable but not valid.

▪ Validity is more important than reliability.

▪ The most useful instrument is both valid and reliable.

▪ Sampling is the process of obtaining a few from a bigger group to become the basis for

estimating or predicting the prevalence of an unknown piece of information regarding the

bigger group.

▪ Randomization is a method of sampling in which each of the population has an equal

chance or probability of selection of the individuals for constituting a sample.

▪ Probability sampling is a design in which every part of the population has the potential to be represented in the sample.

▪ Simple random sampling is a process such that every member of the population has an

equal chance of being selected.

▪ Stratified random sampling involves sampling from different groups called strata.


▪ Cluster sampling is based on the ability of the researcher to divide the sampling

population into groups.

▪ Quota sampling is a type of non-probability sampling based on the researcher’s ease of access to the sample population.

▪ Accidental sampling takes people or other units that are readily available and the

collection of data only stops when you reach the required number of respondents you

decided to have in the sample.

▪ Judgmental sampling is a non-probability sampling with a consideration as to who can

provide the best information to achieve the objectives of your study.

▪ Snowball sampling is the process of selecting a sample using networks.

▪ Systematic sampling involves selecting individuals according to a predetermined

sequence.

▪ The basic rule in sampling is the larger the sample, the better.

▪ There are certain behaviors in research that are considered unethical.

▪ The role of data analysis is to bring data together in a meaningful way and enable the

researchers to interpret or make sense of it.

▪ There are two major areas of statistical procedures—descriptive statistics and inferential

statistics.

▪ Descriptive statistics are used to describe the data collected in research studies and to

accurately characterize the variables under observation within a specific sample.

▪ The central tendency of a distribution is a number that represents the typical or most

representative value in the distribution.

▪ Measures of dispersion are another way of describing the shape of a distribution.

▪ Inferential statistics help us to draw conclusions beyond our immediate samples and

data.

References

Bloomberg, L. & Volpe, M. (2018). Completing Your Qualitative Dissertation (4th ed.). Sage Publications, Inc.


Day, R. & Gastel, B. (2016). How to Write and Publish a Scientific Paper (8th ed.). California: Greenwood.

De Matteo, D. et al. (2005). Essentials of Research Design and Methodology. United States of America: John Wiley & Sons, Inc.

Hinds, D. (2001). Research Instruments. In D. Wilkinson (Ed.), The Researcher’s Toolkit: The Complete Guide for Practitioner Research (p. 41). Taylor & Francis e-Library.

Kumar, R. (2011). Research Methodology: A Step-by-Step Guide for Beginners. New Delhi: SAGE Publications India Pvt Ltd.

Lavraskas, P. (2008). Encyclopedia of Survey Research Methods. Sage Publications, Inc.

Leedy, P. & Ormrod, J. (2013). Practical Research: Planning and Design (10th ed.). United States of America: Pearson Education, Inc.

Singh, Y. (2006). Fundamental of Research Methodology and Statistics. New Delhi: New Age International (P) Limited, Publishers.

Ong Kian Koc, B. (1999). EDSC 341 Research Seminar in Science Education. UP Open University: Office of Academic Support and Instructional Services.

University of the Witwatersrand. (2020). Research Support: Research Methodology. Retrieved from https://libguides.wits.ac.za/c.php?g=693518&p=4914913

Wilkinson, D. & Birmingham, P. (2003). Using Research Instruments: A Guide for Researchers. United States of America: Taylor & Francis e-Library.