Why we need frameworks to evaluate our learners’ English
Wednesday, August 06, 2014
The face of college campuses in the United States is changing. More students from other countries are enrolling in American colleges and universities today than ever before.
According to U.S. News & World Report, 819,644 international students enrolled in undergraduate and graduate programs during the 2012-2013 academic year. In 2003, the total was 572,509, meaning enrollment grew by more than 40 percent over that decade.
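The growth figure above can be sanity-checked with a quick calculation; the two enrollment totals are the ones reported in the article, and the percent-change formula is the standard one:

```python
# Enrollment totals cited in the article.
old_total = 572_509  # 2003
new_total = 819_644  # 2012-2013 academic year

# Percent increase = (new - old) / old * 100
pct_increase = (new_total - old_total) / old_total * 100
print(f"Increase: {pct_increase:.1f}%")
```

Running this gives an increase of roughly 43 percent, which is where the "more than 40 percent" figure comes from.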
Naturally, this has led to a nationwide spike in demand for more English language education courses for international students. One essential, but challenging, issue that educational institutions serving the TESOL community must face is establishing assessment protocols that will serve academic departments and their students throughout the learning process.
Many departments choose to standardize their own assessments, while others leave part or all of the responsibility to adjunct lecturers and a few full-time faculty members. In either case, difficulties often arise because these assessments can lack uniformity within an institution.
Moreover, they almost never translate to something with real-world value when students transfer to other schools, apply for graduate programs, or seek employment after graduation.
An additional challenge of meeting the needs of these students comes from the growing number of departments that are staffed mostly or completely with adjunct instructors. In the past, most educational institutions built up their departments with a large number of full-time, tenured faculty members who were tasked with being at the forefront of research in their fields and instructing students.
Today’s colleges and universities are mostly staffed with part-time instructors lacking both the time and resources to conduct their own studies in the best practices of their fields, such as assessment. Thus, many of today’s TESOL departments simply do not have the ability to develop the best possible system of evaluation for their students.
Many universities require international students to take the Test of English as a Foreign Language (TOEFL), but most also use other evaluative measures to place students in their courses.
Nearly all academic institutions use other assessments to monitor progress within a program and at the end of a course of study. Some TESOL departments use frameworks they have created, while others base their evaluative measures on internationally recognized proficiency frameworks like the American Council on the Teaching of Foreign Languages (ACTFL) Proficiency Guidelines, the Interagency Language Roundtable (ILR) Skill Level Descriptors, or the Council of Europe's Common European Framework of Reference for Languages (CEFR).
There is often a great disparity in quality from class to class within a single department when there is no set method of evaluating the language proficiency of students. To address this issue, many educators have come to realize that the use of established frameworks, at least as a foundation, is the best option.
At Brigham Young University's Center for Language Studies, for instance, the Intensive English Program had previously used an oral interview both to place students entering the program and to evaluate them at the end of the semester, according to Dr. Troy Cox, who spent 17 years working in the program.
The biggest problem with that method, notes Cox, was teacher variability in grading and scoring at an educational institution that has many novice teachers. This led his department to decide to base its evaluations on the ACTFL framework, and to recommend it to BYU Hawaii when the question of choosing between ACTFL and CEFR guidelines to evaluate students came up for that institution.
Cox finds the CEFR helpful for teachers because its detailed tables specify exactly which category and subcategory of proficiency is being rated. The drawback, he says, is the framework's sheer size: with so many rating tables, each covering a narrow aspect or subaspect of language learning, it is difficult to arrive at a holistic evaluation. While that granularity may help teachers, it can be unwieldy when attempting a complete assessment of an individual's language skills, so there can be confusion about how to actually score someone using the CEFR.
In contrast, the ACTFL Guidelines can be explained, according to Cox, with this analogy: “When you’re buying a men’s shirt, you can get a tailored shirt with all your specific measurements, or you can get small, medium or large. You know none of the smalls, mediums and larges will fit perfectly, but you have the ease and efficiency of getting something fairly quickly that fits quite well.”
Using frameworks to develop a system of evaluation is not necessarily a simple task, according to Susan Greene, testing coordinator of Princeton University’s English Language Program.
“It is a constantly evolving process, involving lots of dialogue within the department, within TESOL, and with colleagues from other schools,” she says. “We are constantly comparing performance tests, testing procedures, and assessment criteria. Many of us lack complete confidence in the TOEFL, so there is an ongoing discussion of how to improve the evaluative process.”
Moreover, the need for well-developed evaluative measures is not only a critical component of servicing current students; it is also essential to the reputation of the educational institution when students graduate. Greene adds, “Princeton Trustees have decreed that an advanced degree from the university should indicate a certain level of proficiency in English as well as robust knowledge of a specific discipline.”
Almost all educational institutions would like to do a more consistent job of assessing the English skills of incoming students, marking their progress as they go through a program, and evaluating their final level of academic achievement, but developing such an assessment process in-house is too time-consuming and expensive for most schools.
This is yet another reason why many institutions have gravitated toward already established proficiency frameworks that have been used for decades both nationally and internationally to guide them through this process.
As the number of international students continues to increase, U.S. colleges and universities will need to look at adopting standard proficiency frameworks like ACTFL, CEFR and ILR. Frameworks provide a common metric for universities to set standard requirements, establish goals, evaluate student progress, and identify effective teaching and learning practices for this growing population of international English language learners in the United States.