One of our partner schools’ English classes is using informational texts right now to conduct classroom debates on transgender troops and North Korean nuclearization. In a lesson modeling arguments to support positions on these issues, a student built an argument that banning transgender troops would weaken the U.S. military by shrinking it. The student had evidence from the New York Times Upfront Magazine on the number of transgender troops currently serving, but she didn’t have much in the way of reasoning in her model. So we had all the students perform a quick-write to supply the missing reasoning. We then looked at a couple of student samples using a document camera, compared their strengths and weaknesses, and ended with a synthesized model of reasoning.
About five years ago, when we got started with the work we are doing now – first calling it curricular debate, then argument-centered instruction – partner school teachers and administrators asked us two simple but important questions: What specific professional capacities will Argument-Centered Education develop, and how will we know that they have, indeed, developed? We applied to our own argument pedagogy the kind of inquiry-driven, analytical process that we try to build into the curriculum we design with partners, and we produced the Observation of Argument-Centered Instructional Capacity (OACIC) Inventory.
Teachers and administrators from Argument-Centered Education partner schools have made an important request over the past couple of years: since we cannot expect students to learn the difficult skills of academic argumentation all at once, how can these skills be taught and built up in sequence?
This is an argument-centered writing assessment designed to develop students’ critical thinking and writing skills in an in-class, on-demand setting. Both because of the college-directed rigor of the preparatory work in which students will be immersed, and because of the conditions under which students will be writing, this assessment is also highly aligned with the new SAT exam, making it an authentic, properly embedded, and seamless part of an instructional strategy that has raising students’ college exam scores as one of its objectives. Finally, it is document-based, so it parallels the kind of AP Exam writing that many students will be striving to master.
Issue: The United States should significantly restrict immigration from the Middle East due to the possibility of terrorism.
By Gerald Graff and Cathy Birkenstein
[Adapted from a talk presented at a session on “Standardization and Democratization in College Writing Programs” at the NCTE Conference on College Composition and Communication, April 7, 2016, in Houston, Texas.]
After a rocky start, higher education has come to embrace outcomes assessment. When Gerald was president of the MLA in 2008, he caught a lot of flak for a pro-assessment column in the MLA Newsletter entitled “Assessment Changes Everything.” Now, eight years later, the outrage has largely dissipated. As Chris Gallagher suggests in a 2012 College English article, “OA now seems like educational common sense. Define goals for student learning, evaluate how well students are achieving those goals, and use the results to improve the academic experience. Who could argue with that?” Gallagher does go on to argue with outcomes assessment, citing some dangers that he sees in it. But he accepts the need for outcomes assessment in principle, as do most of us.