
International Psychometric Conferences
hosted by
The Psychometrics Centre in Cambridge, England
- International Association for Computer Adaptive Testing (IACAT) 2015
- International Meeting of The Psychometric Society (IMPS) 2009
The Psychometric Society
The Psychometric Society was founded in 1935 at Ann Arbor, Michigan, primarily by L. L. Thurstone, together with John Stalnaker, Marion W. Richardson and Jack W. Dunlap.
The International Association for Computer Adaptive Testing
The International Association for Computer Adaptive Testing (IACAT) was formed at the beginning of 2010. IACAT grew out of the GMAC® Computerized Adaptive Testing Conferences held in 2007 and 2009 in Minneapolis, MN.
International Conferences organized by
The Psychometric Society
The first organizational meeting of the Psychometric Society took place on September 4, 1935, at Ann Arbor, Michigan, during the annual meeting of the American Psychological Association. According to Dunlap (1942), the founders initially came together not to start a society but to start the journal Psychometrika. Paul Horst’s inability to find a journal devoted to quantitative methods applied to education and psychology led him to discuss the matter at length with Kurtz and, in 1933, to examine the possibility of such a journal with Thurstone and Richardson. Dunlap was brought on board because of his connection with the Journal of Educational Psychology. In the spring and summer of 1934, Horst, Kurtz, Richardson and Stalnaker worked out details of the journal such as costs and publishers. The six men’s plans to start the journal crystallized during the 1934 fall meeting of the American Psychological Association at Columbia University, where Kurtz began to emphasize that readers of the journal would likely be interested in forming a society.
The first International Meeting of the Psychometric Society was held in Osaka, Japan, in 2001, although the Society had held several meetings outside the USA, in Europe, before then.
The International Association for Computer Adaptive Testing
The International Association for Computerized Adaptive Testing (IACAT) is an organization incorporated exclusively for scientific, educational, literary, and charitable purposes. It focuses on scientific and educational advocacy for computerized adaptive testing and, as an international organization, on encouraging the use of CAT around the world. Its mission is to:
- Advance the science of adaptive testing in all fields of applied psychological and educational measurement;
- Improve adaptive instruments and procedures for their administration, scoring, interpretation, and use;
- Improve applications of adaptive assessment of individuals and evaluations of assessment programs;
- Develop the theory, techniques, technologies and instrumentation available for adaptive measurement of all relevant human, institutional, and social characteristics;
- Develop procedures appropriate to the interpretation and use of such technologies and instruments;
- Advance applications of adaptive measurement in individual and group evaluation studies.
To achieve these aims, IACAT will:
- organize international meetings and discussions;
- promote the publication of relevant information by means of its own and other publication outlets;
- stimulate international cooperation on research projects relevant to a scientifically and ethically sound use of adaptive testing;
- be available to act as an intermediary in international negotiations concerning the publication and marketing of adaptive tests;
- advance professional development and work to raise standards governing test development and use.
Future Directions in Computer Adaptive Testing (CAT) with Machine Learning Integration
The integration of machine learning into Computer Adaptive Testing (CAT) is set to transform the landscape of educational assessment by enhancing its applicability, interpretability, and multi-dimensionality. These advancements promise not only to refine the accuracy of assessments but also to provide deeper insights into an examinee’s potential for future learning and problem-solving strategies.
Enhancing Multi-Dimensionality in Assessments
Future CAT systems will likely leverage machine learning to analyze a broader array of assessment data, including response times and interactions with test interfaces (e.g., mouse movements). These signals can provide valuable insight into examinees’ cognitive processes and engagement levels. Furthermore, by incorporating data from an examinee’s previous interactions with educational content, CAT can offer a longitudinal view of an individual’s learning trajectory and thereby predict their readiness for new concepts. By analyzing diverse types of educational content, such as textual, visual, and auditory materials, machine learning can also enable a more comprehensive understanding of how examinees process complex information.
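As one concrete illustration of this multi-dimensionality, the sketch below combines a scored response, a log-transformed response time, and a count of interface actions into a single feature vector per item. The ExamineeEvent structure, its field names, and the feature choices are assumptions made for illustration, not a prescribed model.

```python
# A minimal sketch (not a production model): turn one logged interaction
# into a numeric feature vector that a multi-dimensional ML-based
# proficiency model could consume. All names here are illustrative.

from dataclasses import dataclass
import math

@dataclass
class ExamineeEvent:
    item_id: str
    correct: bool           # scored response
    response_time_s: float  # seconds from item display to answer
    interface_actions: int  # e.g. clicks logged by the delivery platform

def event_features(event: ExamineeEvent) -> list[float]:
    """Build a per-item feature vector from one logged interaction."""
    return [
        1.0 if event.correct else 0.0,
        math.log1p(event.response_time_s),  # log scale tames long tails
        float(event.interface_actions),
    ]

# Example: two events from one examinee, ready for a longitudinal model.
history = [
    ExamineeEvent("item_017", True, 42.0, 9),
    ExamineeEvent("item_023", False, 118.5, 31),
]
feature_matrix = [event_features(e) for e in history]
print(feature_matrix)
```

In a full system, such vectors would feed a calibrated longitudinal model rather than being interpreted directly.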
Multi-Stage Testing (MST)
MST, a variant of CAT used in high-stakes exams, presents groups of questions (testlets) at each step of the assessment, based on a pre-designed decision tree. Unlike traditional CAT, which selects single questions, MST’s group-based approach requires intricate design considerations, such as the number of testlets and the interaction between questions within a testlet. There is a growing need for research into automated algorithms for MST construction to facilitate its broader adoption in large-scale testing environments.
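To make the routing idea concrete, here is a minimal sketch of number-correct routing in a two-stage MST panel. The testlet contents and score thresholds are illustrative assumptions, not a recommended design; operational panels are assembled against statistical targets and automated test-assembly constraints.

```python
# A minimal sketch of number-correct routing in a two-stage MST panel.
# Item ids and thresholds are placeholders chosen for illustration.

STAGE_1_ROUTING_TESTLET = ["r1", "r2", "r3", "r4", "r5"]  # item ids

STAGE_2_TESTLETS = {
    "easy":   ["e1", "e2", "e3", "e4", "e5"],
    "medium": ["m1", "m2", "m3", "m4", "m5"],
    "hard":   ["h1", "h2", "h3", "h4", "h5"],
}

def route(stage1_number_correct: int) -> str:
    """Pick the stage-2 testlet from the stage-1 number-correct score."""
    if stage1_number_correct <= 1:
        return "easy"
    if stage1_number_correct <= 3:
        return "medium"
    return "hard"

# Example: 4 of 5 routing items correct sends the examinee to the hard testlet.
chosen = route(4)
print(chosen, STAGE_2_TESTLETS[chosen])
```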
Generative AI in CAT
The application of Generative AI, particularly through Large Language Models (LLMs), is beginning to influence CAT systems. These models offer the potential to enhance question selection, tailor proficiency assessments, and dynamically generate novel test questions that adapt to the examinee’s demonstrated abilities. This capability could lead to more personalized and effective testing experiences.
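As a rough sketch of how such generation might slot into an adaptive loop, the snippet below asks a generator, pitched at the current ability estimate, for a new item. The `ItemGenerator` interface, the prompt wording, and the difficulty scale are all illustrative assumptions and do not reference any specific model or product API.

```python
# A minimal sketch of dynamic item generation inside an adaptive loop.
# `ItemGenerator` is a stand-in for any LLM-backed service; the prompt
# and difficulty scale are illustrative assumptions.

from typing import Callable

ItemGenerator = Callable[[str], str]  # prompt -> question text

def next_dynamic_item(theta_estimate: float, generate: ItemGenerator) -> str:
    """Ask the generator for an item pitched at the current ability estimate."""
    prompt = (
        "Write one multiple-choice algebra question whose difficulty, on a "
        f"standard normal scale, is approximately {theta_estimate:.1f}."
    )
    return generate(prompt)

# Example with a dummy generator standing in for a real LLM call.
def dummy_llm(prompt: str) -> str:
    return f"[generated item for prompt: {prompt[:60]}...]"

print(next_dynamic_item(0.7, dummy_llm))
```

Any dynamically generated item would still need human review and statistical calibration before its difficulty could be trusted for scoring.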
Explainable Machine Learning and CAT
While traditional CAT systems are known for their interpretability, the incorporation of deep learning presents challenges due to its often opaque decision-making processes. Bridging the gap between advanced machine learning capabilities and the need for explainable, transparent assessment methods is crucial, especially in high-stakes testing scenarios.
CAT for AI System Evaluation
Beyond educational applications, the adaptive and efficient nature of CAT makes it a promising tool for evaluating AI systems, including advanced LLMs. CAT methodologies could be adapted to assess the cognitive-like functions of AI models, providing a nuanced understanding of their capabilities and facilitating efficient model evaluation with minimal data use.
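A minimal sketch of this idea, assuming a small pre-calibrated 2PL item bank: treat the AI system as the examinee, repeatedly administer the most informative remaining item at the current ability estimate, and update that estimate from the scored responses. The item parameters, the grid-search update, and the `model_answers_correctly` placeholder are illustrative assumptions, not a reference implementation.

```python
# A minimal sketch of a CAT-style loop for evaluating an AI model with few
# items: select the most informative 2PL item at the current ability
# estimate, score the model's answer, and update the estimate.

import math
import random

ITEM_BANK = [  # (item_id, discrimination a, difficulty b) -- illustrative
    ("q1", 1.2, -1.0), ("q2", 0.9, -0.3), ("q3", 1.5, 0.0),
    ("q4", 1.1, 0.8), ("q5", 1.4, 1.5),
]

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta: float, a: float, b: float) -> float:
    """Fisher information of one 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def update_theta(responses: list[tuple[float, float, int]]) -> float:
    """Crude grid-search MLE of theta from (a, b, score) triples."""
    grid = [g / 10.0 for g in range(-40, 41)]
    def loglik(t: float) -> float:
        return sum(
            score * math.log(p_correct(t, a, b))
            + (1 - score) * math.log(1.0 - p_correct(t, a, b))
            for a, b, score in responses
        )
    return max(grid, key=loglik)

def model_answers_correctly(item_id: str) -> int:
    """Placeholder for querying the AI system under evaluation."""
    return random.choice([0, 1])

theta, answered, responses = 0.0, set(), []
for _ in range(3):  # only a few items instead of the full bank
    item = max((i for i in ITEM_BANK if i[0] not in answered),
               key=lambda i: information(theta, i[1], i[2]))
    score = model_answers_correctly(item[0])
    answered.add(item[0])
    responses.append((item[1], item[2], score))
    theta = update_theta(responses)
print("estimated ability:", theta)
```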
These future directions highlight the potential for CAT systems to become more sophisticated, insightful, and versatile through the integration of advanced machine learning techniques, paving the way for innovations in both educational and AI evaluation fields.