METHODOLOGICAL APPENDIX TO THE ACE PROJECT
The ACE project was funded by the English National Board for Nursing, Midwifery and Health Visiting (ENB). It was directed by John Schostak and Terry Phillips and co-ordinated by Jill Robinson, with Helen Bedford as the full-time research associate. Its purpose was to research the assessment of competence, and it found that defining professional competence was anything but simple. Full details can be found in the report published by the ENB: Researching Professional Education. Education, Dialogue and assessment: creating partnership for improving practice, research report series No. 1, London. The following was placed as an appendix to the report prior to the edited version written for the research series. It is intended for those interested in methodology. Although not meant to be in any way a full treatise on methodology, it does describe some of the key issues facing the project team in trying to conceptualise the task of assessing the competence, skills and knowledge appropriate for the education and training of nurses and midwives at a national level.
(This article remains unfinished - it is a working document)
The purpose of this appendix is to locate the conceptual framework underlying the methodology of the project within its particular heritage. Methodology is not a recipe for research practice. Rather, it consists of a process of ruthless questioning which leaves behind it not so much an answer as an as yet unresolved question. Methodology is always incomplete in its search for knowledge about the world that can be accepted as 'true' or at least 'plausible'. There is no single methodology, rather a contest between methodologies. As Kuhn (1970) pointed out, it is not so much that one approach proves to be true as that its adherents eventually die, to be replaced by another generation of scientists who believe something different!
In broad terms, the methodology of this project is qualitative rather than quantitative, drawing upon phenomenological, structuralist and post-structuralist insights rather than positivist, employing semiotic and discourse forms of analysis rather than statistical. In Kuhn's terms the distinction between the approaches is paradigmatic.
The term paradigm was made popular by Kuhn (1970), who used it to convey the sense of a world view: a way of seeing the world, a way of explaining the world. For the scientist it refers to the sense of the way reality is structured and the means by which the scientist uncovers this reality and is able to manipulate it and predict effects and events. Typically, exemplary texts or experiments come to define a particular paradigm. Hence, in the social sciences one can talk of the quantitative paradigms and the qualitative paradigms as being distinct ways of doing science, each having its major exemplary thinkers and writers. In this sense, paradigm gains a particular meaning, referring to the kinds of text produced and used, and the ways in which practitioners in their discourses support, contest, formulate ideas and evaluate their methods. What cannot be said, without confusing and misusing the notion of paradigm, is that one approach is science, or is more scientific than the other.
More specifically, the methodology of this project accords with a tradition of democratic evaluation, employing qualitative methods. What follows is a discussion of methodological issues and questions that were debated throughout the life of the ACE project.
There is an intimate historical relationship between evaluation, curriculum development and assessment (Eisner 1993). In Britain, as in America, evaluation grew out of the curriculum development movement during the 1960s and 1970s. The Humanities Curriculum Project (HCP, 1968-1972, based at the University of East Anglia), directed by Lawrence Stenhouse, provided the impetus in Britain for the teacher-as-researcher movement (Stenhouse 1975) and its evolution into the Action Research approach to professional development (Elliott 1991) and educational action (Schostak 1991). Barry MacDonald, the director of the evaluation of HCP, saw the purpose of evaluation as informing decision makers and the function of the evaluator as being the 'honest broker' of information between the parties having vested interests in the outcomes of the evaluation. A distinction was made by MacDonald (1970, 1987) between those evaluations which limited their functions to the service of the sponsor (bureaucratic evaluation), those which made recommendations which on the authority of the evaluators demanded to be acted upon (authoritarian evaluation), and finally, those evaluations which sought to give equal weight to all views and informed all interested parties in order to facilitate the decision making of the participants at whatever level in the system (democratic evaluation). This latter approach adopts principles of procedure to guide the evaluators in order to protect the rights of participants.
There is a practical question which the evaluation methodology of the ACE project is designed to address:
How may an evaluation be undertaken which addresses the complex relationship between formal assessment procedures and the dynamic contexts within which a curriculum is developed and delivered, the contexts in which learning and professional development take place and knowledge is both constructed and applied?
What characterises the situations under consideration here is the feature of 'multi-dimensionality'. When, for example, an assessment event in a given placement area is being observed it is not a simple interaction between two elements as in the case of two known chemicals being mixed in controlled circumstances to produce a particular outcome. Sayer (1993:3) describes and comments on the situation of a seminar event in these terms:
It involves far more than a discussion of some issues by a group of people: there is usually an economic relationship (the tutor is earning a living); students are also there to get a degree; their educational institution gets reproduced through the enactment of such events; relations of status, gender, age and perhaps race are confirmed or challenged in the way people talk, interrupt and defer to one another; and the participants are usually also engaged in 'self-presentation', trying to win respect or at least not to look stupid in the eyes of others. This multi-dimensionality is fairly typical of the objects of social science. The task of assessing the nature of each of the constituent processes without being able to isolate them experimentally throws a huge burden onto abstraction - the activity of identifying particular constituents and their effects. Though largely ignored or taken for granted in most texts on method I believe it to be central.
The example is directly relevant to a focus on the assessment of competence in Nursing and Midwifery. Competence is itself clearly a multi-dimensional phenomenon. This statement can be made even if 'competency' so eludes definition that it is suspected of having the status more of a 'unicorn' than of a 'work horse'. Competence has a social existence because it is continually referred to and constructed in social discourses, and in particular in professional discourse. For evaluation purposes it is the social function of 'competence' in the processes of assessment that is the focus of study.
When people talk about competence, key dimensions in their definitions include knowledge, skills, learning, judgement and problem solving. None of these are simply products that can be evaluated independently of their processes of production and use in social contexts. Competence, whatever it may be, manifests itself in, or is identified as, a feature of practice in the work context. Work, as Sayer (1993:3) puts it, is the 'manipulation of matter for human purposes'. As such it involves issues of transformation; that is, the purpose of professional action is to change something, or make something happen, for some purpose. In doing this, something is learnt about the manipulability of the world: on the one hand 'knowledge' is generated, and on the other either some degree of 'competence' to manage real world processes is recognised or failure is recorded. Competence then involves that dimension of knowledge that is often referred to as 'know how'. Know how is pragmatic knowledge in the sense that it is directed towards problem solving, where action hypotheses are generated, tested in practice, and the effects of the action monitored and evaluated. What is generated through such a cycle of activity is practice-derived theory. In communicating the results of professional experience, that experience is mediated by language. Only if there are shared or sharable categories by which to describe practical experience can competence itself become communicable. There is then a second order level of knowledge, 'discursive knowledge', that arises in communicative interaction. Discursive knowledge provides a means of conceptualising experience at the level of 'know how'; the better the conceptualisation, the better it can be communicated to enrich the knowledge, understanding and capabilities of a community of actors. This means that a change in conceptualisation leads to changes in action and in the possibilities for manipulating and transforming the material world.
There are structural constraints on the possibilities for transformation and manipulation. These include the material, social and discursive (or language mediated) constraints upon practice. While it is possible to experiment with the structure of material objects in laboratory conditions, screening out extraneous variables, it is possible to experiment in these ways with social and discursive structures only in very limited ways. Nevertheless, it is possible to develop good structural analyses that do not depend upon laboratory conditions. In social and discursive contexts structure is manifested as an organising and constraining principle. Case study provides a means for the discovery of such structures, or structuration (Giddens 1984) in social practices and discourse.
The object of research and evaluation is complex, multi-dimensional and not susceptible to exhaustive coverage. While it is possible to increase competence in the handling of social phenomena, total certainty remains elusive. Case study methodology attends to the description of the structures which form the context of events, actions, interactions, the presentation of self and the manipulation and transformation of matter in the world. Semiotics and discourse analyses can provide a rigorous approach to the analysis of multi-dimensional data (cf. Manning 1987).
The validity, reliability and generalisation of the findings that result are achieved through the processes of data collection and progressive development of theoretical categories themselves. Glaser and Strauss (1967) have called this approach 'theoretical sampling' (which generates theory from data) to distinguish it from statistical sampling (which tests pre-determined theory against data). Statistical sampling speaks of variables, while the qualitative approach of Glaser and Strauss speaks of cases. However, as Ragin (Ragin and Becker 1992:2-5) points out, the distinction between a variable and a case blurs as soon as it is interrogated. Theory development that is grounded in direct reflection upon observation and accounts of experience of social and work practices requires a:
... process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges. This process of data collection is controlled by the emerging theory, whether substantive or formal. The initial decisions for theoretical collection of data are based only on a general sociological perspective and on a general subject or problem area ...
(Glaser and Strauss 1967: 45)
The strategy here is to start with as light a theoretical load as is possible. Then:
Beyond the decisions concerning initial collection of data, further collection cannot be planned in advance of the emerging theory (as is done carefully in research designed for verification and description). The emerging theory points to the next steps - the sociologist does not know them until he is guided by emerging gaps in his theory and by research questions suggested by previous answers.
The basic question in theoretical sampling (...) is: what groups or subgroups does one turn to next in data collection? And for what theoretical purpose? In short, how does the sociologist select multiple comparison groups? The possibilities of multiple comparisons are infinite, and so groups must be chosen according to theoretical criteria.
Multiple comparison groups are to be chosen for their theoretical relevance. Their purpose is to draw out to the fullest extent possible the features or qualities of the categories discovered from the data. It is argued that this theory is robust and valid since it is generated, not from speculation, but through a process of continual, systematic testing against experience throughout the period of the research (cf. Strauss 1987).
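The sampling cycle that Glaser and Strauss describe can be sketched as a loop. This is a toy sketch under invented names (collect, code, find_gaps and choose_next_group stand in for fieldwork activities, not any actual research toolkit); it shows only the control flow, in which the emerging theory decides where to sample next.

```python
# Illustrative sketch of theoretical sampling: data collection is steered
# by the emerging theory rather than fixed in advance. All function names
# are invented placeholders for fieldwork activities.

def theoretical_sampling(initial_site, collect, code, find_gaps, choose_next_group):
    """Collect -> code -> compare, until no gaps remain (saturation)."""
    categories = {}              # the emerging theory: category -> incidents
    site = initial_site
    while site is not None:
        for incident, category in code(collect(site)):
            categories.setdefault(category, []).append(incident)
        gaps = find_gaps(categories)          # under-developed categories
        # the emerging theory, not a prior design, controls what comes next
        site = choose_next_group(gaps) if gaps else None
    return categories

# Toy demonstration: 'sites' are numbers, 'coding' sorts them by parity,
# and a category counts as saturated once it holds two incidents.
queue = [1, 2, 3]
theory = theoretical_sampling(
    initial_site=0,
    collect=lambda site: [site],
    code=lambda data: [(d, "even" if d % 2 == 0 else "odd") for d in data],
    find_gaps=lambda cats: [c for c, v in cats.items() if len(v) < 2],
    choose_next_group=lambda gaps: queue.pop(0) if queue else None,
)
```

The point of the sketch is that there is no fixed sampling plan: each pass over the coded data decides where to look next, and collection stops only when the emerging categories show no further gaps.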
The following is a summary of the key issues to be addressed if the methodology is to be appropriate to concerns discussed above. These are:
A. Issues related to social phenomena (foci of study)
- knowledge: the context of its production and application: the professional as an agent of change, transformation; the professional as subject to institutional, local and national changes
- agency: decision making, judgement, prioritisation and valuation are all aspects of human agency (in terms of causative powers) to make change
- learning: reflection upon experience and the reconceptualisation of experience lead to changes in the social phenomena being studied
- multi-dimensionality: all social phenomena relate to a multiplicity of other social phenomena in a dynamic context of changing intensities of relationship
- structure: organisation can be abstractly described in terms of structural relations. The pervading structure constrains roles, events, behaviours, outcomes. In terms of possibilities for action and outcomes a change of structure opens some and closes others
- uncertainty: due to human agency, the multi-dimensionality of social phenomena and continuous system change neither certainty nor completeness in the analysis and evaluation of social processes and structures can be attained
B. Issues related to the processes of research
- analysis: categorisation, structure, process, action, event
Method, unfortunately, has often been reduced to a set of rituals, or incantations to be applied to data. It is central to the approach adopted in this project that method itself is reflected upon at a conceptual level, not simply applied in a mechanistic way. In general terms, following Sayer and others, a broad distinction is made between 'thought objects' and 'real objects'. Thought objects refer to the ways in which objects in the world are conceptualised and referred to for communicational and practical purposes. The conceptual map that is woven organises sense or meaning, which in turn organises the ways in which individuals respond to the world and order the objects of the world. Sayer (1993) sets out the relation diagrammatically in simple form as follows:
The thought objects are comprised of complexly interrelating categories which have elsewhere been called 'semantic maps' or 'conceptual maps' and employed in the development of learning strategies (Novak and Gowin 1984). A major ACE project research endeavour has been to identify and describe the alternative conceptual maps which underpin professional discourses on competence and its assessment. These maps then are related to the working practices of professionals and students which lead to material effects and products (the real objects 'o' of the above diagram).
There is an interconnection between conceptual maps, how behaviour is organised into work practices, and the material structures which underpin institutions. The interconnection is articulated in the discourses through which practices are accounted for and interpreted and material structures constructed and maintained. A particular conceptual map is tested, modified and persists through action in the world. Case studies focus on the ranges of variation and qualitative differences to be found within a given case and between cases in apparently similar situations and circumstances.
The central purpose of this methodology is to map out the interconnections between cases in ways which:
Institutions can be analysed in terms of the structures they exhibit. Examples of such structures include hierarchical, democratic, or vertical and lateral orderings of roles, and the kinds of formal and informal power relations that occur between role holders. Institutions are also embedded in a multiplicity of structures of quite different kinds: legal structures, the structures of values (ethics) of professional bodies, market relations, cultural structures. So complex and various are the possible patterns that result that it could be argued that no institution is comparable with another, and thus that activities that take place within one context are not strictly comparable with activities that take place in another. This would place an impossible burden upon any research project charged with making evaluations. It also ignores the communicative functions of language and discourse.

Conceptual maps are intersubjective in nature; that is, they are developed in interaction with others. Not only that, language structures pre-exist their contemporary usage. Individuals learn them, are socialised into them and are thus, in an important sense, the product of language and discourse communities. In this way, individual events, social practices and institutional forms make no sense without reference to the structures (whether material, social, cultural, legal, economic or political) in which they are embedded.

It is possible, therefore, to develop comparisons by reference to a) relevant features of the embedding structures; b) problem and issue structures that particular institutions have in common in achieving common goals or aims; and c) the range of experience and practice that results in differing solutions to particular demands, problems and aims. Such an approach does not aspire to a conformity of practice; rather, it aspires to inform decision making across a range of situations.
Thus a 'point of reference' can be formulated by identifying the kinds of embedding structures and the problems institutions face, together with the associated ranges of experience and practice, to create a 'reference class'. While no one institution will exhibit all the features indicated in the reference class, all institutions will share at least some features in common in attempting to resolve or meet common issues. Sayer's diagram reproduced above can be modified as follows to illustrate the formation of categories in common across institutions:
Taking a simple illustration, data are collected to produce case study material (case 1 etc.) referring to the real circumstances of institutions (i). These data are then analysed in terms of commonalities with other cases to produce categories in common (c 1 etc.). One such category may, for example, be a particular approach to developing assessment documentation - say, the employment of the 'accredited witness' in signing pre-structured assessment categories. This category may be labelled 'c 1'. It can be seen that 'c 2', while having a feature in common with 'c 1', also includes a feature that is found in case 3. This feature may be that the 'accredited witness' approach also includes forms of 'triangulation' not found in 'c 1', such as negotiation with other colleagues and the student, or the collection of evidence which can be inspected by a third party, allowing an independent re-assessment of the judgement made by the accredited witness. The analysis of complex relations between categories can be continued so that comparisons are made which allow the development of debates concerning how improvements could be made in particular situations. Analysis of institutionally derived data thus enables the development of a reference class of categories (c 1, c 2, ..... c n). This taxonomy of categories and their interrelations is not sufficient by itself. In addition, the discourses that employ these categories to make sense of experience and account for everyday practices constitute a reference class of discourses and associated conceptual structures, through which institutional structures and associated mechanisms, procedures and events are reproduced and developed:
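As a toy illustration of how categories in common might be derived from case material, the sketch below treats each case study as a set of observed features. The feature names and case contents are invented for this example, loosely following the 'accredited witness' and 'triangulation' discussion above; they are not the project's actual categories.

```python
from itertools import combinations

def reference_class(cases):
    """Return the features shared by at least two cases: the categories
    in common (c 1, c 2, ... c n) from which a reference class is built."""
    shared = set()
    for (_, a), (_, b) in combinations(cases.items(), 2):
        shared |= a & b          # features this pair of cases has in common
    return shared

# Invented case material for illustration only
cases = {
    "case1": {"accredited_witness", "prestructured_documents"},
    "case2": {"accredited_witness", "triangulation"},
    "case3": {"triangulation", "student_negotiation"},
}
# 'accredited_witness' links cases 1 and 2; 'triangulation' links cases 2 and 3
```

In a real analysis the 'features' would themselves be products of interpretation rather than ready-made labels; the sketch shows only the set-theoretic skeleton of the comparison.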
In general terms, fieldwork carried out in each institution produces contextualised data on key issues of structure and process. Some key features relating to the experiences of each institution can be explored for comparative analysis to the extent that institutions address the same kinds of problems in the development and delivery of assessment, although their solutions may vary, as may the extent to which they recognise and address the problems. Nevertheless, if a certain range of problems is integral to the delivery of assessment and the institution does not create the appropriate structures (formally or informally) to address those problems, then the assessment will not fulfil its function as intended. Thus, in more general terms, for any institution, when an appropriate structure together with the rules, mechanisms and procedures of operation does not exist, then nothing can be expected to happen. Similarly, if the system is 'faulty', or if a key structure, rule or mechanism is missing, then things will not happen in the ways that are intended or expected. The relationship between structures, mechanisms, procedures and events can be set out diagrammatically as follows:
The division 'abstract'-'concrete' corresponds to the earlier distinction between thought objects and real objects. There is a sense of subtle continuity implicit in this formulation, in that abstract conceptions have effects at the level of the concrete and vice versa. However, the relation is not causal in the mechanical sense. The reason for this becomes clear when describing the move from level to level in the diagram. It is relatively easy, for example, to describe the structure of an educational institution, its hierarchies of management meshing uneasily with its hierarchies of academic 'authority', and to identify roles and relationships that are relatively stable over time. A particular role, however, is no more than a location in a formal structure, having no bearing upon action in the world unless it is associated with mechanisms to make things happen. A role is defined by certain ways of acting: a teacher has certain ways of acting, of making things happen. At their most general, these 'ways of acting to make things happen' can be thought of as mechanisms. The individual, having internalised the modes of behaving associated with the role of teacher, acts as expected. A mechanism can be thought of as a process set in motion to manipulate or transform the social and material environment. These processes of manipulation and transformation can be executed in relatively fixed sequences. Such sequences may be of vital importance for a health professional to perform in order to save a life, or for a police officer to undertake if certain evidence is to be deemed legally admissible in court. Such sequences are procedures, directed at well defined outcomes. A mechanism may thus be comprised of a set of strategies, or a finite set of rules, to get from A to B. These in turn can be articulated as fairly precise sets of procedures. The particular procedure chosen may depend upon its appropriateness to a given situation.
Other kinds of procedures may be less well defined, less well ordered into sequences, but nevertheless recognisable as falling into typical patterns. The outcomes of a mechanism and its associated procedures are the events in the real world of objects, as indicated in the diagram. Although the purpose of mechanisms and procedures is to make a given event likely, certainty is not guaranteed. Although a mechanism and its associated procedures are causative (or reduce the array of alternative options for variation and deviation), the relation between mechanism and event is not a simple case of cause and effect.
The pathway from structure to event is not straight and unambiguous. The same mechanism in different conditions can produce a different effect, and in social life conditions are continually changing. An institution is not governed by a single commonly agreed set of values, nor directed by a single common vision, much as public relations officers might wish it. There are always competing agendas, multiple visions jostling for attention and acceptance, and multiple personal agendas formed through vested interests, ambitions and anxieties. In a simple model of cause and effect an unambiguous path can be traced from the structure 'S 1' to a particular event, 'E1'. Unlike the striking of a billiard cue onto a billiard ball, which sends it to a particular pocket of the table, or an atom being made to strike another atom for a particular effect, roles, mechanisms and procedures are all open to interpretation, negotiation and re-definition. It is as if the billiard ball crossed a slick of oil, causing it to skid, swerve and slide. The slick of oil in this case is language, or discourse, where meanings jostle, de-construct and form unpredictable associations. Individuals in institutions can generate countervailing procedures to subvert overt procedures, and manage impressions of cooperation while engaging in hostilities.
Given, then, the ever present possibility of multiple interpretation, a given mechanism, M 1, may give rise to three equally plausible interpretations for translation into procedures, P1, P2 and P3. Each of these may give rise to different events, E1, E2 and E5. It cannot be predicted, therefore, at the level of structure or mechanism which particular event will arise as an outcome. Specific procedures may increase the likelihood of generating particular desired events. Even here, however, a procedure applied in one circumstance may not produce the same effect in a different circumstance. While mechanisms and procedures increase control in social situations, they do not guarantee it. In very complex environments, where outcomes are not easily defined and controlled for in mechanistic terms - such as those demanding expert judgement - inputs cannot be directly related or meaningfully correlated to outputs. There can be no substitute for a detailed analysis of issue and problem structures in their concrete contexts.
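The indeterminacy described here can be made concrete in a small sketch. The labels M 1, P1-P3 and E1, E2, E5 follow the text; the mappings and the 'conditions' values are invented for illustration only.

```python
# One mechanism admits several plausible interpretations as procedures,
# and the event a procedure produces depends on the conditions under
# which it is enacted. All mappings here are illustrative inventions.

MECHANISMS = {"M1": ["P1", "P2", "P3"]}   # one mechanism, three readings

def enact(procedure, conditions):
    """The same procedure can yield different events in different conditions."""
    outcomes = {
        ("P1", "stable"):    "E1",
        ("P1", "contested"): "E2",   # a change of conditions shifts the outcome
        ("P2", "stable"):    "E2",
        ("P3", "stable"):    "E5",
    }
    return outcomes.get((procedure, conditions), "unintended event")

# Knowing only the mechanism, the outcome cannot be predicted:
possible_events = {enact(p, "stable") for p in MECHANISMS["M1"]}
```

Even in this toy form the model shows why inputs cannot be correlated directly with outputs: prediction would require knowing not only the mechanism, but the interpretation chosen and the conditions of enactment.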
Drawing on the above discussions, the approach adopted here provides four dimensions for the analysis of issue and problem structures in concrete contexts, which may be summarised as:

- structure (conceptual and material)
- mechanisms
- procedures
- events
The four dimensions listed above interact and there is no implication of priority in the order of the list. In analysing the experience of institutions in planning and implementing their assessment structures and processes the four dimensions indicate both the range of data that need to be collected and the kind of structure that is required for planning and designing devolved continuous assessment. It provides a framework therefore for the development of a reference class to facilitate comparative analysis.
General research questions are employed to indicate the kinds of data/evidence required for a given study. In relation to the ACE project, general questions can be formed relating to each of the methodological dimensions described above: structure (conceptual and material), mechanisms, procedures, events. There is an inevitable interplay of dimensions in constructing appropriate questions, with certain questions stressing or exploring a particular dimension more than others. Examples are:
There is of course an intimate relationship between the questions that govern or orientate research and the kinds of data that arise. This means, in a sense, that data are already categorised, albeit at very high levels of methodological generality. This needs to be borne in mind when making more particular categorisations of the substantive data. It has already been stated that the data in question are multi-dimensional. The methodology has been constructed to enable multi-dimensional analysis, not to presuppose the directions that this should take. This means that the procedures for classification employed should not presuppose uni-dimensionality. Typically, methodology in both quantitative and qualitative research uncritically adopts one-dimensional forms of classification in its search for typicality or generalisation. While this provides a powerful approach in certain contexts and with particular purposes in mind, as Needham (1983) has pointed out it can also be very misleading in the analysis of real events in concrete contexts.
A major task of the field work is to identify and describe the general structures within which the processes and practices associated with assessment take place. To do this a method of grouping data has to be applied. More particularly, systematic classification has to take place otherwise the research will be drowned in detailed 'bits' of data.
All research must find ways in which to classify its data systematically. Qualitative research is no different from quantitative research in this respect. It does, however, handle the formation of 'classes' or 'categories' differently. A particularly useful discussion of classificatory procedures as they apply in the anthropological study of social forms is to be found in Needham (1983). He draws a distinction between 'monothetic' and 'polythetic' forms of classification which will be helpful in the present study of complex institutions and their social practices. Broadly, monothetic classification is easily recognisable as the form typically required for statistical analysis. Here each member of a given class has the identical distinguishing features of every other member of that class. Schematically this could be represented as:

class 1: (a, b, c) (a, b, c) (a, b, c)
class 2: (x, y, z) (x, y, z) (x, y, z)
Each member of class 1 has features a, b, c in common. Likewise, each member of class 2 has features x, y, z in common. Membership of a class means that any one member is substitutable for any other member. In short, classes are strictly homogeneous. Thus the implication, as Cohen (1944:134-5) pointed out, is that:
In the end, the truth of a generalisation from a sample depends on the homogeneity of the group with respect to which we wish to generalise. A single experiment on a new substance, to test whether it is acid or alkaline, is much more convincing than the result of a questionnaire addressed to millions of army men to measure their intelligence. For the latter is not a simple quality of a uniformly repeatable pattern. In this respect the methods of social statistics are gross compared with refined analysis, so that when our analysis is thoroughgoing, as it generally is in physics, one or two samples are as good as a million. If what we are measuring is really homogeneous, one is sufficient. In the social field, therefore, statistics cannot take the place of analysis ....
In this relatively early statement of the problem, from the point of view of a philosopher of logic, the issue is already clearly posed that the elements or 'units' of social analysis are not as readily amenable to the logic appropriate to statistical analysis as are the data of physics. Many reasons can be adduced for this. As Schutz (1976) pointed out, atoms do not make decisions or judgements about how or whether to act. Additional factors are due to the great complexity of human institutions, action and culture, which results in no one individual or social unit being identical to another. Social facts are not stable in the sense of being capable of being rendered a 'pure' or refined substance, like sulphuric acid. Rather than relations of identity (involving a one to one matching of features) there are relations of similarity (where a judgement call is made as to whether one member is sufficiently like another to be placed in the same class). The concept of polythetic classification, for which Needham drew upon the results of studies not only in his own field of anthropology but also in botany and zoology, seems to approximate the requirements for the study of social forms.
Polythetic classification recognises that not all members of a class have identical features in common with all other members of that class. Rather than a relation of identity, what is called upon here is a relation of 'similarity', where some but not all features or structural properties are shared. It therefore seems particularly useful in the analysis of dynamic systems.
A polythetic classification or taxonomy may be described schematically as:

class 1: (a, b, c) (b, c, d) (c, d, e) (d, e, f)
Although not meant to be a formal description, the schema does reveal certain typical features. In class 1, for example, the member with features a, b, c has nothing in common with the member with features d, e, f. However, all members of the class have at least some features in common with some other members of the class. The pattern of relationship is complex: the first member has features in common with the second, the second has features in common with the third, and the third with the fourth. If one is to make a judgement as to the boundaries of inclusion and exclusion for such a set of related members, where should it fall? Should the first or the fourth member in that class be excluded or included? If included, they may register the accepted poles of extreme variation within the class of complexly related members. Judgement is necessary, and a rationale for inclusion and exclusion would always be necessary.
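Needham's distinction can be stated precisely in code. The sketch below uses the feature sets from the schemas discussed above; it is a minimal formalisation of the distinction, not an analysis tool.

```python
# A class is monothetic if every member shares an identical core of
# features; polythetic if no feature runs through the whole class, yet
# every member shares at least one feature with some other member.

def common_core(members):
    """Features shared by every member of the class."""
    return set.intersection(*members)

def is_polythetic(members):
    """No universal core, but each member overlaps with at least one other."""
    if common_core(members):
        return False
    return all(any(m & n for n in members if n is not m) for m in members)

monothetic_class = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "c"}]
polythetic_class = [{"a", "b", "c"}, {"b", "c", "d"},
                    {"c", "d", "e"}, {"d", "e", "f"}]
# In the polythetic class the first and last members share nothing,
# yet the chain of overlaps holds the class together.
```

Note that the code only checks membership once the features are given; the judgement calls emphasised above, deciding which features count and where the boundary of the class falls, remain outside any such formalisation.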
Data classification is not an end in itself. Nor does it constitute 'findings'. Methodology does not lead mechanically to a conclusion about anything. What it does is to refine judgement and the process of interpretation necessary to generate a conceptual framework, a model, a set of recommendations, a better question - and so on.
The purpose of the ACE project's methodology was, firstly, to draw the best map possible, within its time and resource limitations, of current conceptions of the assessment of competence and the structures through which current practices are organised. Secondly, it aimed to make judgements about the fit between assessment methods and what they were meant to assess: that is, competence. Finally, it aimed to draw up recommendations that could contribute to the processes of decision making concerning the development and implementation of assessment strategies.
In this way it has taken seriously the methodological approach of identifying the structures, mechanisms and procedures that are required to realise the conceptual frameworks of the educational and assessment processes for the development of professionality.