Much of this chapter centres on a conference called by a group of educational researchers in 1972. The conference was held at Churchill College, Cambridge, and two very important papers are associated with it. The first, Evaluation as Illumination: A New Approach to the Study of Innovatory Programmes, was written by Malcolm Parlett and David Hamilton. The second, Evaluation and the Control of Education, was written by Barry MacDonald, now a professor at UEA and a key figure in both the Cambridge conference and educational evaluation in the UK. Both papers are reproduced in Murphy and Torrance.
Programme evaluation began in the USA in the 1920s and 1930s, when the US Federal government sought to measure whether the large sums of money it was investing in educational and aid programmes were being well spent. After World War II similar work was conducted in Britain, albeit on a much smaller scale. By the early seventies there was broad agreement in both the US and the UK that a new approach was needed; indeed, one was already emerging in both countries. The 1972 conference was intended to draw these ideas together and state them publicly.
It was the Parlett and Hamilton paper that reflected the wider context in which the changes were taking place. They called the existing mainstream evaluation of the time 'Traditional Evaluation' and described it as residing within what they called the Agricultural-Botany paradigm, which derived from the experimental and mental-testing traditions in psychology. Its features were:
Parlett and Hamilton concluded:
"These points suggest that applying the agricultural-botany paradigm to the study of innovation is often a cumbersome and inadequate procedure. The evaluation falls short of its own tacit claims to be controlled, exact and unambiguous. Rarely, if ever, can educational programmes be subject to strict enough control to meet the design's requirements. Innovations, in particular, are vulnerable to manifold extraneous influences. Yet the traditional evaluator ignores these. He is restrained by the dictates of his paradigm to seek generalised findings along preordained lines. His definition of empirical reality is narrow. One effect of this is that it diverts attention away from questions of educational practice towards more centralised bureaucratic concerns."
They contrasted the traditional form of evaluation with what they called Illuminative Evaluation and the Social-Anthropology Paradigm. They argued that this form of evaluation:
They wrote that the aims of illuminative evaluation were:
"... to study the innovatory programme: how it operates; how it is influenced by the various school situations in which it is applied; what those most directly concerned regard as its advantages and disadvantages; and how students' intellectual tasks and academic experiences are most affected. It aims to discover and document what it is like to be participating in the scheme, whether as teacher or pupil; and, in addition, to discern and discuss the innovation's most significant features, recurring concomitants and critical processes. In short it seeks to illuminate a complex array of questions: ..."
They distinguished the Instructional System [the idealised specification of the scheme] from the Learning Milieu [the social-psychological and material environment] and studied both. They asserted that, in practice, the objectives and performance criteria of the Instructional System were commonly re-ordered, redefined, abandoned or forgotten. Within the Learning Milieu they recognised diversity, complexity and the secondary effects of the programme. The term progressive focusing is used in this respect: as the evaluator becomes engaged with her programme or service, new issues emerge and some assume greater importance than others. Even if these emergent issues were not thought important at the outset, it is they that the evaluator comes to concentrate upon.
Evaluations look different depending on the number of participants, the level of co-operation, the nature and stage of the innovation, and so on. The first task of the evaluator is to familiarise herself with the day-to-day reality of the settings she is to encounter. There is no attempt to manipulate, control or eliminate situational variables; rather, the aim is to unravel the situation she encounters, discern its significant features, and so on. Because there is great concern with the innovation as an integral part of the learning milieu, there is an emphasis on observation and interviewing. Use is made of documents, often for historical data and to gain an insight into the omnipresent micro-politics; they can also clarify areas of past intensive discussion. Depending on the nature of the evaluation, questionnaires and test data may be used, but they tend to provide superficial, poorly grounded data.
1. Observation is one of the main methods of collecting data and is especially useful during the early 'immersion' period. The investigator builds up a continuous record of events that might be considered on the fringe of the study, e.g. faculty and student meetings, lectures, programme meetings, examiners' and management meetings and so on. Recordings are made of discussions between participants. "The language conventions, slang, jargon, and metaphors that characterise conversations ... can reveal tacit assumptions, interpersonal relationships and status differentials."
But the evaluator has to be careful with observation, for her interpretations may not accord with the meanings of the actors as they engage in the programme. There is a need for triangulation, or cross-checking. It is often interviewing that is the evaluator's most useful tool for explaining what has been observed and for getting close to the participants in the programme.
2. Interviewing offers many methodological possibilities. The aim is to uncover as much as possible of the interviewee's understanding, reasoning and biographical perspective. The evaluator seeking illumination is more likely to use unstructured or semi-structured interviewing at the beginning of an evaluation, being careful not to set the interviewee's agenda or have an undue influence on the responses. Decisions have to be made about recording. As the evaluator is concerned with great detail, it is usual to find her using audio tapes for at least some interviews, depending on what support exists for transcription. There are a number of ethical issues that have to be observed: interviewees are usually offered confidentiality, personal respect and so on. The evaluator will feel free to negotiate access to interview any participant, including senior managers and programme directors, and will also endeavour to negotiate any accounts she has received.
This concludes the summary of Illuminative Evaluation. Turning now to the second paper, by Barry MacDonald: it soon becomes clear to any evaluator that evaluation is a political activity. This is not meant in the party-political sense, necessarily, but the evaluator usually finds the various parties and individuals in a programme or school jostling for influence, position and power. In this way evaluation is always political. The evaluator has to uncover this power game and hold it up for discussion.
The great interest of MacDonald's paper lies in his simple but clear classification and analysis of evaluation studies. The following statements were constructed with enormous attention to detail and need to be read carefully. They are reproduced verbatim:
"Bureaucratic evaluation is an unconditional service to those government agencies who have major control over the allocation of educational resources. The evaluator accepts the values of those who hold office, and offers information which will help them to accomplish their policy objectives. He acts as a management consultant and his criterion of success is client satisfaction. His techniques of study must be credible to the policy makers and not lay them open to public criticism. He has no independence, no control over the use that is made of his information, and no court of appeal. The report is owned by the bureaucracy and lodged in its files. The key concepts of bureaucratic evaluation are 'service', 'utility' and 'efficiency'. Its key justificatory concept is 'the reality of power'."
"Autocratic evaluation is a conditional service to those government agencies who have major control over the allocation of educational resources. It offers external validation of policy in exchange for compliance with its recommendations. Its values are derived from the evaluator's perception of the constitutional and moral obligations of the bureaucracy. He focuses upon issues of educational merit, and acts as expert adviser. His technique of study must yield scientific proofs, because his power base is the academic research community. His contractual arrangements guarantee non-interference by the client, and he retains ownership of the study. His report is lodged in the files of the bureaucracy, but is also published in academic journals. If his recommendations are rejected, policy is not validated. His court of appeal is the research community, and higher levels in the bureaucracy. The key concepts of the autocratic evaluator are 'principle' and 'objectivity'. Its key justificatory concept is the 'responsibility of office'."
"Democratic evaluation is an information service to the whole community about the characteristics of an educational programme. Sponsorship of the evaluation study does not in itself confer a special claim upon this service. The democratic evaluator recognises value pluralism and seeks to represent a range of interests in his issue formulation. The basic value is an informed citizenry, and the evaluator acts as broker in exchanges of information between groups who want knowledge of each other. His techniques of data gathering and presentation must be accessible to non-specialist audiences. His main activity is the collection of definitions of, and reactions to, the programme. He offers confidentiality to informants and gives them control over his use of the information they provide. The report is non-recommendatory, and the evaluator has no concept of information misuse. The evaluator engages in periodic negotiation of his relationships with sponsors and programme participants. The criterion of success is the range of audiences served. The report aspires to 'best-seller' status. The key concepts of democratic evaluation are 'confidentiality', 'negotiation', and 'accessibility'. The justificatory concept is 'the right to know'."
MacDonald advocated democratic evaluation. If we allow the two papers described here to guide our approach to evaluation, we are then in a position to decide how actually to carry out an evaluation. This is the subject of the next chapter.