Close Reading Strategies
8 Steps
• Number paragraphs
• Survey the text
• Chunk the text
• Highlight and underline with purpose
• Write in margins
• Outline and summarize
• Form your own opinion
• Compare and contrast with related readings
Step One: Number the Paragraphs
The writing test will ask you to be able to cite and refer to the text. One simple way to do this is by numbering each paragraph, section, or stanza in the left-hand margin. When you refer to the text, state which paragraph you are referring to.
Step Two: Survey the Text
What is the title of the text? Who is the author? Is there any biographical information? Describe any visuals in the section you have been asked to read. What do the visuals of the text tell you about the author’s purpose, tone, or the organization of the text? Assess your overall impression of the text (number of paragraphs, layout, visuals, charts, graphics, etc.).
A Win for the Robo-Readers
Large study shows little difference between human and robot essay graders
April 13, 2012
By Steve Kolowich
Education technology has long since delivered on its promise of software that can grade most student work in lieu of instructors or teaching assistants. These days, debates about artificial intelligence in education are more likely to revolve around whether automatons can be relied upon to teach students new concepts.
Yet when it comes to English composition, the question of whether computer programs can reliably assess student work remains sticky. Sure an automaton can figure out if a student has done a math or science problem by reading symbols and ticking off a checklist, writing instructors say. But can a machine that cannot draw out meaning, and cares nothing for creativity or truth, really match the work of a human reader?
In the quantitative sense: yes, according to a study released Wednesday by researchers at the University of Akron. The study, funded by the William and Flora Hewlett Foundation, compared the software-generated ratings given to more than 22,000 short essays, written by junior high school students and high school sophomores, to the ratings given to the same essays by trained human readers.
The differences, across a number of different brands of automated essay scoring software (AES) and essay types, were minute. “The results demonstrated that over all, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items,” the Akron researchers write, “with equal performance for both source-based and traditional writing genre.”
“In terms of being able to replicate the mean [ratings] and standard deviation of human readers, the automated scoring engines did remarkably well,” Mark D. Shermis, the dean of the college of education at Akron and the study’s lead author, said in an interview.
The Akron study asserts that it is the largest and most comprehensive investigation of its kind, although it is hardly the first. Smaller studies of specific automated-essay-scoring products have reported similarly high fidelity between human and machine scores on writing samples.
But independent reviews of AES products have been rare and occasionally critical. Les Perelman, director of the Writing Across the Curriculum program at the Massachusetts Institute of Technology, has crusaded against automated essay grading by writing and speaking widely of his own, successful efforts to fool the Educational Testing Services’ e-Rater, which has been used to grade the GRE and the Collegiate Learning Assessment (CLA), into giving good scores to incoherent essays carefully crafted by Perelman to exploit its flaws.
In higher education, AES products are still used primarily to grade students’ writing on standardized tests and placement exams, and have not yet found their way into many composition classrooms, Perelman told Inside Higher Ed in an interview. But with demand for writing education rising amid a surge in enrollments among non-native English speakers, triumphant studies such as the Akron researchers’ might embolden some overenrolled, understaffed community colleges to consider deploying AES for its composition classes, he says.
That would be a mistake, Perelman says, pointing to a 2008 study by researchers in southern Texas. Those researchers compared machine scores to human ones on essays written by 107 students in a developmental writing course at South Texas College, a community college near the Mexico border that is 95 percent Hispanic. They found no significant correlation.
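The statistics the article leans on (matching the human readers’ mean and standard deviation, and the Texas study’s lack of correlation) can be sketched in a few lines of Python. The scores below are invented purely for illustration; they are not data from either study.

```python
# Illustrative sketch only: comparing hypothetical human and machine essay
# scores by mean, standard deviation, and Pearson correlation -- the three
# measures the article mentions. All numbers are made up.
from statistics import mean, stdev

human = [4, 3, 5, 2, 4, 3, 5, 4]    # hypothetical human ratings
machine = [4, 3, 4, 2, 4, 3, 5, 5]  # hypothetical machine ratings

def pearson_r(xs, ys):
    """Pearson correlation: sample covariance over the product of std devs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"human:   mean={mean(human):.2f} sd={stdev(human):.2f}")
print(f"machine: mean={mean(machine):.2f} sd={stdev(machine):.2f}")
print(f"correlation r={pearson_r(human, machine):.2f}")
```

A scoring engine can reproduce the human mean and spread almost exactly (as the Akron study found) while the correlation tells a separate story about whether it ranks individual essays the same way, which is what the Texas study found lacking.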
Shermis, the lead author of the Akron study, says thrift-minded administrators and politicians should not take his results as ammunition in a crusade to replace composition instructors with AES robots. Ideally, educators at all levels would use the software “as a supplement for overworked [instructors of] entry-level writing courses, where students are really learning fundamental writing skills and can use all the feedback they can get.”
The Akron education dean acknowledges that AES software has not yet been able to replicate human intuition when it comes to identifying creativity. But while fostering original, nuanced expression is a good goal for a creative writing instructor, many instructors might settle for an easier way to make sure their students know how to write direct, effective sentences and paragraphs.
“If you go to a business school or an engineering school, they’re not looking for creative writers,” Shermis says. “They’re looking for people who can communicate ideas. And that’s what the technology is best at” evaluating.
Predicting Genre
What do you know about this genre?
How will the text be shaped and developed?
Nonfiction
• Biography/Autobiography: Narrative of a person's life; a true story about a real person.
• Essay: A short literary composition that reflects the author's outlook or point.
• Narrative Nonfiction: Factual information presented in a format which tells a story.
• Nonfiction: Informational text dealing with an actual, real-life subject.
• Speech: Public address or discourse.
Which Genre?
http://www.insidehighered.com/news/2012/04/13/large-study-shows-little-difference-between-human-and-robot-essay-graders
Fiction Genres
• Drama: Stories composed in verse or prose, usually for theatrical performance, where conflicts and emotion are expressed through dialogue and action.
• Fable: Narration demonstrating a useful truth, especially one in which animals speak as humans; a legendary, supernatural tale.
• Fairy Tale: Story about fairies or other magical creatures, usually for children.
• Fantasy: Fiction with strange or otherworldly settings or characters; fiction which invites suspension of reality.
• Fiction: Narrative literary works whose content is produced by the imagination and is not necessarily based on fact.
• Fiction in Verse: Full-length novels with plot, subplot(s), theme(s), and major and minor characters, in which the narrative is presented in (usually blank) verse form.
• Folklore: The songs, stories, myths, and proverbs of a people or "folk" as handed down by word of mouth.
• Historical Fiction: Story with fictional characters and events in a historical setting.
• Horror: Fiction in which events evoke a feeling of dread in both the characters and the reader.
• Humor: Fiction full of fun, fancy, and excitement, meant to entertain; humor can appear in all genres.
• Legend: Story, sometimes of a national or folk hero, which has a basis in fact but also includes imaginative material.
• Mystery: Fiction dealing with the solution of a crime or the unraveling of secrets.
• Mythology: Legend or traditional narrative, often based in part on historical events, that reveals human behavior and natural phenomena by its symbolism; often pertaining to the actions of the gods.
• Poetry: Verse and rhythmic writing with imagery that creates emotional responses.
• Realistic Fiction: Story that can actually happen and is true to life.
• Science Fiction: Story based on the impact of actual, imagined, or potential science, usually set in the future or on other planets.
• Short Story: Fiction of such brevity that it supports no subplots.
• Tall Tale: Humorous story with blatant exaggerations and swaggering heroes who do the impossible with nonchalance.
Step Three: Chunk the Text
Breaking up the text into smaller sections (or chunks) makes the page much more manageable. For example, chunk paragraphs 1-3, 4-6, 7-9, and 10-12. Look at the paragraphs to see where natural chunks occur: paragraphs 1-3 may be the hook and thesis statement, 4-6 may establish support for the thesis, while 7-9 may be where the author addresses the opposition.
It is important to understand that there is no right or wrong way to chunk the text, as long as you can justify why you grouped certain paragraphs together.
Predict the Main Idea While Chunking
• Based on the title, what do you think this is about?
• Read the first and last sentence of the text. Has your prediction changed?
• What new information did you learn about the article?
• Do we learn anything new?
Introduction:
Education technology has long since delivered on its promise of software that can grade most student work in lieu of instructors or teaching assistants. These days, debates about artificial intelligence in education are more likely to revolve around whether automatons can be relied upon to teach students new concepts.

Conclusion:
“If you go to a business school or an engineering school, they’re not looking for creative writers,” Shermis says. “They’re looking for people who can communicate ideas. And that’s what the technology is best at” evaluating.
4. Highlight or Underline… with a purpose
What you highlight may change depending on the text type.
When studying an argument, ask students to underline “claims”. We identify claims as belief statements that the author is making.
Underline Key Terms
Underline or circle key terms in the text. Key terms are words that:
◦ Are defined.
◦ Are repeated throughout the text.
Also circle the names of sources, power verbs, or figurative language.
If you only circled five key terms in the entire text, you would have a pretty good idea of what the entire text is about.
Vocabulary
Which of these words are key terms?
In lieu, automatons, ticking off, quantitative, software-generated, source-based, genre, replicate, standard deviation, intuition, fostering, nuanced, comprehensive, fidelity, curriculum, incoherent, exploit, surge, triumphant, embolden, deploying, correlation, thrift-minded, supplement, fundamental
5. Write in the Margins
This is where the chunking comes into play.
Left margin: summarize each chunk; write summaries in 10 words or less.
Right margin: dig deeper into the text.
◦ Use a power verb to describe what the author is DOING (for example: describing, illustrating, arguing, etc.).
◦ Represent the information with a picture.
◦ Ask questions.
6. Outline and Summarize
Identify the main ideas and restate them in your own words. Use an informational text analysis worksheet to help you practice organizing ideas and forming your own opinions based on the writer’s information.
◦ Identify main ideas.
◦ What details/evidence support the author’s claims?
◦ Was the evidence convincing? Why or why not?
7. Form Your Own Opinion
Did the author present a good argument? Were the supporting details/evidence valid and convincing? Was the essay well written? Was this information useful? Can you apply the information to anything in your life?
8. Compare and Contrast Related Readings
Explore likenesses and differences between texts to understand them better and draw your own conclusions based on the information.
1. Education technology has long since delivered on its promise of software that can grade most student work in lieu of instructors or teaching assistants. These days, debates about artificial intelligence in education are more likely to revolve around whether automatons can be relied upon to teach students new concepts.
2. Yet when it comes to English composition, the question of whether computer programs can reliably assess student work remains sticky. Sure an automaton can figure out if a student has done a math or science problem by reading symbols and ticking off a checklist, writing instructors say. But can a machine that cannot draw out meaning, and cares nothing for creativity or truth, really match the work of a human reader?
3. In the quantitative sense: yes, according to a study released Wednesday by researchers at the University of Akron. The study, funded by the William and Flora Hewlett Foundation, compared the software-generated ratings given to more than 22,000 short essays, written by junior high school students and high school sophomores, to the ratings given to the same essays by trained human readers.
4. The differences, across a number of different brands of automated essay scoring software (AES) and essay types, were minute. “The results demonstrated that over all, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items,” the Akron researchers write, “with equal performance for both source-based and traditional writing genre.”
5. “In terms of being able to replicate the mean [ratings] and standard deviation of human readers, the automated scoring engines did remarkably well,” Mark D. Shermis, the dean of the college of education at Akron and the study’s lead author, said in an interview.
6. The Akron study asserts that it is the largest and most comprehensive investigation of its kind, although it is hardly the first. Smaller studies of specific automated-essay-scoring products have reported similarly high fidelity between human and machine scores on writing samples.
7. But independent reviews of AES products have been rare and occasionally critical. Les Perelman, director of the Writing Across the Curriculum program at the Massachusetts Institute of Technology, has crusaded against automated essay grading by writing and speaking widely of his own, successful efforts to fool the Educational Testing Services’ e-Rater, which has been used to grade the GRE and the Collegiate Learning Assessment (CLA), into giving good scores to incoherent essays carefully crafted by Perelman to exploit its flaws.
8. In higher education, AES products are still used primarily to grade students’ writing on standardized tests and placement exams, and have not yet found their way into many composition classrooms, Perelman told Inside Higher Ed in an interview. But with demand for writing education rising amid a surge in enrollments among non-native English speakers, triumphant studies such as the Akron researchers’ might embolden some overenrolled, understaffed community colleges to consider deploying AES for its composition classes, he says.
9. That would be a mistake, Perelman says, pointing to a 2008 study by researchers in southern Texas. Those researchers compared machine scores to human ones on essays written by 107 students in a developmental writing course at South Texas College, a community college near the Mexico border that is 95 percent Hispanic. They found no significant correlation.
10. Shermis, the lead author of the Akron study, says thrift-minded administrators and politicians should not take his results as ammunition in a crusade to replace composition instructors with AES robots. Ideally, educators at all levels would use the software “as a supplement for overworked [instructors of] entry-level writing courses, where students are really learning fundamental writing skills and can use all the feedback they can get.”
11. The Akron education dean acknowledges that AES software has not yet been able to replicate human intuition when it comes to identifying creativity. But while fostering original, nuanced expression is a good goal for a creative writing instructor, many instructors might settle for an easier way to make sure their students know how to write direct, effective sentences and paragraphs.
12. “If you go to a business school or an engineering school, they’re not looking for creative writers,” Shermis says. “They’re looking for people who can communicate ideas. And that’s what the technology is best at” evaluating.