Contact the Centre for Research in English Language Learning and Assessment

Please use the enquiry form to get in touch with CRELLA

Address

Professor Tony Green
Director and Professor in Language Assessment
CRELLA
University of Bedfordshire
Putteridge Bury
Hitchin Road
Luton
Bedfordshire
UK
LU2 8LE

T: +44 (0)1582 489086
tony.green@beds.ac.uk

CRELLA's socio-cognitive framework for language testing

The socio-cognitive framework developed by CRELLA for test development and validation research marks the first systematic attempt at a coherent approach to these activities, combining social, cognitive and evaluative (scoring) dimensions of language use and linking these in turn to the context and consequences of test use. These pages describe the framework and identify publications which have used it or referenced it.

The socio-cognitive framework identifies the evidence required to develop a transparent and coherent validity argument, while at the same time addressing the interaction between different types of validity evidence. The framework has influenced leading examination boards and testing projects worldwide, often prompting test developers and providers to revisit their test designs and reframe their validity arguments in new and more effective ways.

This page describes the background, key features, and uses of the framework, and identifies some of the many publications which have used or referenced the socio-cognitive framework.

Background and origins of the socio-cognitive framework

The theoretical framework and its basis in research were first elaborated in Cyril Weir's Language Testing and Validation: An Evidence-Based Approach (2005, Basingstoke: Palgrave Macmillan), the culmination of his research and its practical exegesis in courses and seminars in over 30 countries worldwide from 1990 to 2005. The origins of the framework can be traced in earlier academic works by Weir (see Communicative Language Testing (1990) and Understanding and Developing Language Tests (1993)). Weir's collaborative work in China over a 10-year period as senior UK consultant on the national College English Test and the Test for English Majors involved developing a clearer specification of the operations and conditions underlying language performance.

From the early 2000s Weir's close involvement with the Cambridge English examinations in the UK enabled him to develop an early version of the framework which took account of both recent testing theory and current testing practice. Since the 2005 publication, the socio-cognitive framework has continued to evolve and has been steadily refined as it has been applied in numerous contexts of assessment practice in the UK and throughout the world (see O'Sullivan, B. and Weir, C. J. (2011) Test development and validation, in O'Sullivan, B. (ed.) Language Testing: Theories and Practices, Basingstoke: Palgrave Macmillan). The framework has also been directly informed and shaped by the research activities of individual applied linguistics and language testing specialists at CRELLA over the past 5 years.

Meeting the demands of test validity and test validation

Nowadays test stakeholders, i.e. test takers, teachers and other test score users such as university admissions staff or policy makers, increasingly expect to be provided with explicit evidence of how test developers are meeting the demands of validity in the tests they offer in the public domain. There is growing awareness among stakeholders of the value of having not only a sound theoretical model to underpin any test but also a means of generating adequate evidence on how that model is operationalised and interpreted in practice. In particular, stakeholders often seek guidance on how to develop tests targeted at specific populations or domains of use. 

Others want to know how test developers determine and control criterial distinctions between tests offered at different levels on the proficiency continuum, or how they establish the cut scores which are claimed to be indicators of certain levels. The socio-cognitive framework offers precisely this sort of theoretical model for test design and development combined with a practical and achievable methodology for generating the evidence needed to support claims about the test's real world usefulness.

The key components of the socio-cognitive framework

The model comprises a number of components each of which must be attended to by the test developer at one or more points of the test development, implementation and validation cycle. Components relating to the Test taker and to Cognitive validity represent the candidate in the test event. They concern the individual language user and their cognitive or mental processing abilities since individual characteristics will directly impact on the way an individual processes the test task. 

The component of Context validity concerns the contextual parameters of a task, which are often socially or externally determined in terms of the demands of the task setting, with its specified input and expected output. Scoring validity, i.e. how the task performance is evaluated, is the component which combines with Cognitive and Context validity in an interactive, symbiotic relationship to constitute the overall construct validity of any test. 

Two additional components in the model are Criterion-related validity and Consequential validity, which derive their value from the successful realisation by the test developer of construct validity. While these multiple components are presented as being independent of one another for purposes of transparency and focus, they offer a comprehensive and coherent perspective on the process of test development and validation activity which looks both inwards, at the internal nature and quality of the test, and outwards, at the immediate world in which the test is located with all its implications for appropriate score interpretation and ethical test use.

Using the socio-cognitive framework in practice

The framework has already been extensively used across a range of test development and validation projects. One major international test provider to have applied the framework to its examinations is the University of Cambridge ESOL Examinations (now Cambridge English Language Assessment). The main focus of this work has been the gathering of validity evidence for the board's general English tests, leading to the publication of four volumes in the Studies in Language Testing series on its approaches to assessing writing, reading, speaking and listening skills:

  • SiLT 26: Shaw, S.D. and Weir, C.J. (2007) Examining Writing: Research and practice in assessing second language writing, Studies in Language Testing 26, Cambridge: UCLES/CUP
  • SiLT 29: Khalifa, H. and Weir, C.J. (2009) Examining Reading: Research and practice in assessing second language reading, Studies in Language Testing 29, Cambridge: UCLES/CUP
  • SiLT 30: Taylor, L. (ed.) (2011) Examining Speaking: Research and practice in assessing second language speaking, Studies in Language Testing 30, Cambridge: UCLES/CUP
  • SiLT 35: Geranpayeh, A. and Taylor, L. (2013) Examining Listening: Research and practice in assessing second language listening, Studies in Language Testing 35, Cambridge: UCLES/CUP

The framework has also been successfully applied by Cambridge English Language Assessment to specific-purpose tests in diverse domains, for example tests of legal, financial and business English, and to a Teaching Knowledge Test used in the training, certification and professional development of language teachers.

Others have applied the framework across a wide range of international contexts and for a variety of purposes, in the assessment of both English and other languages. 

  • In the Baltic States it formed the basis for building the test specifications for a generic specific-purpose test of English in higher education.
  • In Mexico it was used to create a set of English tests related to the Common European Framework of Reference (CEFR). 
  • A similar approach was adopted in the United Arab Emirates to generate tests at six proficiency levels for university preparation. 
  • Other projects are currently under way with institutions as far apart as Saudi Arabia (at the pre-university level) and Malaysia (at pre-university and university-exit levels).
  • With consultancy input from CRELLA, university authorities in the former Yugoslav Republic of Macedonia used the socio-cognitive framework to develop the first national tests of Macedonian as a Foreign Language.
  • The Goethe Institut found it useful when revising its higher level examinations in German as a Foreign Language.

Basing test specifications in this way on a clearly defined model of validation provides a platform from which to drive the formal collection of evidence for constructing a sound validity argument. Another valuable application of the framework has been its use by examination boards in the UK, Turkey, Mexico, Taiwan and Japan as a theoretical basis for CEFR linking projects. Many of these projects have been written up for publication, and a comprehensive list of references can be found on the CRELLA website.

The socio-cognitive framework has also been used to supply the theoretical and practical basis for training and professional development courses. CRELLA staff offer regular training courses for language testing professionals (e.g. exam board personnel) worldwide. Within the UK, this includes courses and seminars for Cambridge ESOL, the University of Central Lancashire and the Defence School of Languages. In association with the Association of Language Testers in Europe (ALTE), CRELLA members have given courses several times a year since 2005 throughout Europe. ALTE represents the key foreign language examination providers across 24 European nations, covering 27 major European languages. As well as offering a basis for effective training and professional development among testing professionals, the framework has a potentially valuable role to play in teacher education and in classroom applications within the wider pedagogic community. For example, it will feature strongly in a teacher training module on assessment literacy to be used in institutions throughout Russia.

Now that the socio-cognitive framework is well-established and is being implemented in a wide variety of language testing contexts around the world, the body of research into its impact and effectiveness is steadily growing. A significant contribution towards this is being made by individual members of CRELLA's growing team of research specialists, as well as by CRELLA's cadre of PhD students, many of whom choose to focus their investigation on their own national tests or testing systems and their relationship with the Common European Framework.

References

  • Geranpayeh, A. and Taylor, L. (Eds) (2013). Examining Listening: Research and practice in assessing second language listening. Studies in Language Testing 35, Cambridge: UCLES/Cambridge University Press.
  • Khalifa, H. and Weir, C.J. (2009). Examining Reading: Research and practice in assessing second language reading. Studies in Language Testing 29, Cambridge: UCLES/Cambridge University Press.
  • O'Sullivan, B. and Weir, C. J. (2011). Test development and validation. In O'Sullivan, B. (Ed.) Language Testing: Theories and Practices, Basingstoke: Palgrave Macmillan, 13-32.
  • Shaw, S.D. and Weir, C. J. (2007). Examining Writing: Research and practice in assessing second language writing. Studies in Language Testing 26, Cambridge: UCLES/Cambridge University Press.
  • Taylor, L. (Ed.) (2011). Examining Speaking: Research and practice in assessing second language speaking. Studies in Language Testing 30, Cambridge: UCLES/Cambridge University Press.
  • Taylor, L. and Geranpayeh, A. (2011). Assessing listening for academic purposes: defining and operationalising the test construct. Journal of English for Academic Purposes, 10/2, 89- 101.
  • Weir, C. J. (1990). Communicative Language Testing. London: Prentice Hall.
  • Weir, C. J. (1993). Understanding and Developing Language Tests. London: Prentice Hall.
  • Weir, C. J. (2005). Language Testing and Validation: an Evidence-Based Approach. Basingstoke: Palgrave Macmillan.