
Automated Interview for Management Style
In 1993, the National Westminster Bank plc commissioned the development of a recruitment system that would identify the ‘new’ bank manager for tomorrow’s world. The resultant instrument, the Automated Interview for Management Style (AIMS), was, at the time, a state-of-the-art Computer Adaptive Test (CAT).
The test specification was based on a series of interviews with 35 managers and senior managers within the bank. The interview was designed to elicit those behaviours and attitudes that, in their experience, had proved to be crucial for success. Their responses were then collated into categories that formed the basis of a pilot version, which was administered to over 70 existing managers. Item analysis of the results led to the reduction of the item pool to 24 questions. A further innovation was a branching structure for responses – hence in many ways AIMS was as much an expert system as a psychometric test.
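The branching idea can be pictured as a walk through a small decision tree of items, where each answer determines which question comes next. The items and follow-up rules below are invented purely for illustration; the actual AIMS item pool and branching logic are not reproduced here.

```python
# Minimal sketch of a branching (adaptive) questionnaire.
# Items and follow-up rules are hypothetical, not the real AIMS content.

ITEMS = {
    "q1": {
        "text": "A team member repeatedly misses deadlines. Do you "
                "(a) set firm consequences, or (b) explore the causes together?",
        "next": {"a": "q2_directive", "b": "q2_supportive"},
    },
    "q2_directive": {"text": "Follow-up probing a directive style...", "next": {}},
    "q2_supportive": {"text": "Follow-up probing a supportive style...", "next": {}},
}

def run_interview(answers):
    """Walk the item tree, following the branch chosen at each step."""
    path, item_id = [], "q1"
    while item_id:
        path.append(item_id)
        choice = answers.get(item_id)          # candidate's answer, if any
        item_id = ITEMS[item_id]["next"].get(choice)  # None ends the interview
    return path

# Two candidates giving different first answers see different items:
print(run_interview({"q1": "a"}))  # ['q1', 'q2_directive']
print(run_interview({"q1": "b"}))  # ['q1', 'q2_supportive']
```

Because different candidates traverse different paths, no two need answer the same item set – which is precisely what later creates the score-comparability problem discussed below.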
The resultant instructions, rules, scoring algorithms and report structure were incorporated into a computer program written in Fortran. AIMS remained in use by NatWest for many years and was administered to thousands of managerial candidates.
Development of AIMS
This project marked an important step in my use of Artificial Intelligence. The AI field at that time was very underdeveloped: while public attention focused on whether computer systems could pass the Turing Test, the two underlying approaches, Expert Systems and Neural Networks, were both very much in their infancy. Expert Systems were seen as the more immediately practical. Serial programming in any computer language could chain logical gates (AND, OR, XOR) to simulate the decision making of a human expert such as a medical consultant, an accountant or indeed any other high-level professional in the knowledge and consultancy industry. When these rules were combined with a database containing a library of relevant knowledge, even quite advanced professional services could be carried out by a machine, and systems in the medical profession were already well advanced.
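Such rule-chaining can be sketched in a few lines. The behavioural inputs and decision rules below are hypothetical examples of the gate-combining idea, not NatWest's actual assessment criteria.

```python
# Sketch of how serial boolean logic can encode an expert's decision rules.
# The rules and categories are invented for illustration only.

def assess(delegates_well, consults_team, avoids_risk, meets_targets):
    """Combine observed behaviours with AND/OR/XOR gates, expert-system style."""
    participative = delegates_well and consults_team       # AND gate
    results_focused = meets_targets or not avoids_risk     # OR gate
    one_sided = participative != results_focused           # XOR gate
    if participative and results_focused:
        return "balanced style"
    if one_sided:
        return "one-sided style"
    return "development needed"

print(assess(True, True, False, True))   # balanced style
print(assess(True, True, True, False))   # one-sided style
```

The appeal was that an interviewer's tacit judgement could, in principle, be captured as an explicit and auditable rule base.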
But within psychometrics, the techniques that would later enable Computer Adaptive Testing were still stuck on the branching problem: how could you compare the scores of two individuals who had not answered exactly the same sets of items? The ideas were there – Rasch Scaling was already being applied in, for example, the development of the British Ability Scales – but computers were still too slow to allow parallel processing. Expert systems such as AIMS were worth a try. They represented an early foray into integrating artificial intelligence with psychometric assessments. While this did not involve advanced AI techniques such as neural networks, which were still in their formative stages, it was an ambitious use of the technology available at the time. The system adapted its line of questioning based on the responses given, aiming to focus the assessment more acutely on the candidate’s strengths and weaknesses in relation to management styles.
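The Rasch solution to the branching problem can be shown in miniature: it places persons and items on a single scale, so ability estimates remain comparable even when candidates answer different item sets. The item difficulties and response patterns below are invented for illustration; this is a crude maximum-likelihood sketch, not a production scoring routine.

```python
import math

# Minimal sketch of the Rasch model. Item difficulties and responses
# are invented; real calibration uses much larger samples.

def rasch_p(ability, difficulty):
    """Probability of a 'keyed' response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, difficulties, iters=50):
    """Crude maximum-likelihood ability estimate via Newton's method."""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(r - p for r, p in zip(responses, ps))  # score residual
        info = sum(p * (1 - p) for p in ps)               # Fisher information
        theta += grad / info
    return theta

# Two candidates who answered DIFFERENT items, scored on a common scale:
theta_a = estimate_ability([1, 1, 0], [-1.0, 0.0, 1.0])
theta_b = estimate_ability([1, 0], [0.5, 1.5])
print(round(theta_a, 2), round(theta_b, 2))
```

Even this toy version makes the computational point: every score requires an iterative fit, which is one reason the approach strained the hardware of the early 1990s when applied at scale.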
However, it’s important to recognize the limitations and challenges faced by AIMS and similar systems of that era. Computer adaptive techniques were indeed in their infancy, and the computational power required to fully realize their potential was not yet fully accessible. The branching logic implemented in AIMS, though innovative, was a preliminary approach and lacked the sophistication of later developments that benefited from more advanced computational technologies and deeper integration of AI.
The significance of AIMS lay in its early adoption of AI principles in an attempt to enhance the efficiency and relevance of psychometric testing. While it may not have achieved the complexities possible with today’s technologies, it certainly set a precedent and provided valuable insights into the potential applications of AI in psychometric evaluations. This early experiment highlighted both the possibilities and the substantial hurdles to creating truly adaptive testing environments.
Today, with the advent of powerful neural networks and generative AI, significant advancements in psychometric assessments have been achieved. These modern systems are capable of handling larger datasets and more complex adaptive algorithms, far beyond what was possible at the time of AIMS. The evolution from expert systems like AIMS to today’s AI-driven assessments illustrates a natural progression in technological adoption and sophistication within the field of psychometrics.
In retrospect, AIMS was a stepping stone that helped bridge the gap between traditional assessments and the future possibilities offered by AI. While it was a product of its time with inherent limitations, its development was a noteworthy effort that contributed to the ongoing conversation and exploration of AI applications in psychometric testing.