The Cognitive Abilities Profile

Ruth M. Deutsch and Michelle Mohammed

This chapter introduces the cognitive abilities profile (CAP) in terms of its theoretical underpinnings, practical applications and research foundation, including pilot studies of inter-rater reliability, user-friendliness and the training needs of CAP users. The CAP aims to be a versatile tool for gathering information, summarizing and analyzing data, monitoring progress, identifying the next steps of learning, and helping to generate hypotheses about a pupil's difficulties. Examples and case studies illustrating a variety of applications are given, together with suggestions for further development.

RATIONALE FOR THE DEVELOPMENT OF THE COGNITIVE ABILITIES PROFILE

Initial work on the cognitive abilities profile (CAP) started in 2002, in response to feedback from experienced U.K. educational psychologists studying dynamic assessment (DA) in their continuing professional development (CPD) training, and to follow-up studies of DA practice in the United Kingdom (Deutsch & Reynolds, 2000). Educational psychologists consistently reported high interest in dynamic assessment and its potential benefits for identifying appropriate intervention for the learner. However, several consistently identified challenges were perceived as barriers to dynamic assessment becoming a mainstream tool for applied psychology.

  • Time factors. Increasing demand on psychologists' time with growing caseloads resulted in limited use of DA batteries such as the learning propensity assessment device (LPAD) of Feuerstein and his colleagues (1995; 2002), Tzuriel's (2001) DA tests for younger children, and Lidz's (2003) application of cognitive functions scale (ACFS), which are the main DA batteries available to psychologists in CPD training. Psychologists reported insufficient time allocated to assessing individual learners with a comprehensive DA to screen for areas of cognitive strength and difficulty, let alone to profile modifiability over several sessions, as is recommended in the LPAD model. This was regarded by many psychologists as a barrier to greater use.
  • Training factors. To have a sufficiently thorough working knowledge of the cognitive functions assessed within this type of dynamic assessment, a great deal of training and experience is required. This means that only those who had attended longer courses felt they could confidently use dynamic assessment tools in their practice in the field. Even with such training, individual practitioners often felt isolated and reported being unclear how to interpret their findings once they had carried out the DA. The issue of interpretation appears to be of much greater concern in DA than when using psychometric tests. When administering a static test, accountability for the design of the test lies with the test developer, and since no changes to procedure are permissible, the psychologist is “protected” from challenges to their method of test administration. In DA, as assessors change the test, they take responsibility for any adaptations they make and need to account for the rationale for their interventions.
  • Interpretation and application for classroom teaching. One of the greatest identified challenges was being able to take the findings of a DA and then apply them to the curriculum and the classroom. Many psychologists found it challenging to transfer their understanding of cognitive functions to subject domains. This challenge carried over to bridging the gap between the psychologist's findings and the teacher's use of that knowledge to mediate for improvement in cognitive functioning in the classroom. Emphasis on achievement in National Curriculum subjects has increased the pressure in schools to focus on outcomes and scores, seen by many as being at the expense of learning how to learn. More recent initiatives have begun to address this issue and encourage formative assessment of learning processes (Black & Wiliam, 1998; Assessment Reform Group, 1999). Despite various governmental reports on the importance of teaching thinking skills (McGuinness, 1999) and the inclusion of teaching of thinking skills and critical judgment as part of National Curriculum Guidelines (2001), these stated goals are rarely supported in initial teacher training.

To address these challenges, current methods of assessment, typical working practices, and future trends need to be considered. Educational psychology working practice in the United Kingdom increasingly includes the following methods, as opposed to individual or group testing:

  • Observation of the learner within their learning environment in order to gain more understanding of the context of teaching and learning.
  • The use of a consultation model (Wagner, 2000), where the adults working with the learner are consulted and together areas of strength and difficulty are identified, and a problem-solving model (Monsen, Graham, Frederickson, & Cameron, 1998) is used to identify the next steps in intervention, usually in the form of an individual education plan (IEP).
  • There has also been a rise in the use of solution-focused psychology (Ajmal & Rees, 2001).

The cognitive abilities profile was therefore designed as a way of introducing the concepts and methods of DA into mainstream psychology practice, in order to gain its benefits and overcome some of the perceived barriers to more frequent use. The CAP is applicable in a variety of contexts:

  • Observing children and young people within their typical learning context.
  • Interviewing and consulting with teachers, teaching assistants, parents and other such adults working with the learner.
  • Bringing DA methodology into formative assessment in mainstream classrooms (Assessment Reform Group, 1999).
  • Assisting teachers by means of a cognitive approach to differentiation, thus meeting the needs of inclusive classrooms, reflecting the shift in the United Kingdom away from special education placements toward more diverse mainstream practice, catering for a wide range of learning styles and needs.
  • Catering for learners with increasingly complex learning and/or emotional behavioral difficulties in specialist settings.
  • Providing a distinctive tool that brings together observations of the learner, the teaching methods and the task as a whole (as in the tripartite model described in the next section).
  • Profiling the results of an individual assessment carried out by a psychologist or specialist teacher (either as a one-to-one or small group assessment).

THE STRUCTURE OF THE CAP

The theories and concepts that influenced the development of the CAP both directly and indirectly include:

  • The theoretical concepts of Vygotsky (1978; 1986) and Luria (1973; 1980).
  • Feuerstein's theory and test batteries, in particular the learning propensity assessment device (Feuerstein, 1979; Feuerstein, Feuerstein, Falik, & Rand, 2002; Feuerstein, Klein, & Tannenbaum, 1991).
  • The work of Lidz (1991; 2003), Haywood and Lidz (2007), and Tzuriel and Haywood (1992).1

Feuerstein's LPAD and other dynamic assessment tests derived from his model (for example, Tzuriel, 2001) are based upon the three elements of the Tripartite Learning Partnership (Figure 9.1): the student, the mediator and the task. It is the interrelationship and transactional quality of these three constructs, and their deliberate manipulation by the assessor, that gives this model of assessment its dynamic properties. When the task, teacher and learner are all of equal significance and are equally subject to intervention and analysis, the risk of making judgments about the abilities of the learner based on partial information is avoided.

The CAP is based on the tripartite learning model and each section is designed to assess one of its components. While the conceptual framework of the CAP is based on this existing model, the distinctive role of the CAP is in bringing these components together into one profile that is not limited to comprehensive formal testing but can also be used for context-based observation and consultation.

THE CAP AS A TOOL FOR COGNITIVE CHANGE

The aim of the CAP is to measure and inform cognitive change in the learner. It shares the fundamental goal of dynamic assessment based on Feuerstein's theory of mediated learning experience (MLE). That is, the CAP not only identifies the cognitive needs of the learner, but also places the mediating adult in the central role as the key agent for bringing about cognitive change in the learner (Vygotsky, 1978; 1986; Feuerstein, Feuerstein, Falik, & Rand, 2002).

According to the tripartite learning partnership model, in order to bring about cognitive changes in the learner, changes in all three elements are necessary. The profile not only identifies areas of cognitive strength and difficulty, but also, using a solution-focused approach, negotiates methods of cognitive development and remediation through teaching and task-setting. Change is measured over time using the CAP's rating scales for all three elements, following baseline profiling of the learner's performance. In contrast to other tools, the CAP does not simply measure the learner in order to benchmark progress; the process of profiling with a teacher or key adult is itself an active part of the intervention. Such solution-focused profiling can increase insight and metacognition for teaching, and in turn lead to the use of more metacognitive strategies with the learner. Here assessment and intervention meet as an agent of change.

THE CONTENTS OF THE CAP

The CAP has three main sections (A, B, and C), each with a rating scale that allows observation and scoring of one of the three elements of the tripartite model.

  • Section A—The cognitive abilities of the learner. Adapted from the Deficient Cognitive Functions identified by Feuerstein, and Luria's model of mental processes, this section allows rating of cognitive processes within learning activities as developmental abilities.
  • Section B—Response to teaching and mediation. Based on the Mediated Learning Rating Scales of Lidz (1991; 2003; 2007), this section focuses on teacher behaviors which may or may not elicit certain responses from the learner.
  • Section C—Analysis of the task. Based on Feuerstein's Cognitive Map for task analysis (LPAD Manual, 1995), this section is composed of situation specific variables some of which are descriptive, others of which can be rated.

Section A

The cognitive abilities described in Section A are thinking skills required for effective learning to take place. Each ability is rated according to a four-point rating scale; Table 9.1 shows the meaning carried by each score.

TABLE 9.1 Four-point scale and corresponding levels of ability.
Score | Level of ability
N | Not observed/Not applicable
1 | Unable even with support
2 | Able only with support
3 | Sometimes able independently
4 | Consistently and independently able

Section A considers the question “What are the learner's cognitive strengths and difficulties?” The results of the scoring can then help to prioritize areas of strength and difficulty, by considering the items with the highest and lowest scores. Ratings are made using the rater's professional judgment and knowledge of typical child development, and in relation to class and peer-group expectations. The issue of the need for prior knowledge in completing the profile is considered in the research studies described in later sections. The ratings are not scores as would be given in a normative or psychometric assessment. Section A cognitive abilities are grouped under functional domains described by Luria and not in the three phases (Input, Elaboration, and Output) used by Feuerstein. The decision not to rate cognitive abilities in these three phases was taken after several research studies showed that inter-rater reliability and clarity were low when the three-phase model was applied for observation without intervention. Phase analysis is therefore optional in the CAP, but the trained assessor would be encouraged to use it wherever possible when reporting the results of a full dynamic assessment. Each cognitive ability is framed as a question, to direct the observation and solution-focused consultation (Table 9.2).
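To make the prioritization step concrete, here is a minimal sketch in Python of how Section A ratings might be recorded and the highest- and lowest-scored abilities extracted. It is purely illustrative and not part of the published CAP materials; the names (SECTION_A_SCALE, prioritize) and the sample ratings are invented for the example.

    # Hypothetical sketch (not part of the published CAP). Ratings use the
    # four-point scale of Table 9.1; "N" carries no numerical value.

    SECTION_A_SCALE = {
        "N": "Not observed/Not applicable",
        1: "Unable even with support",
        2: "Able only with support",
        3: "Sometimes able independently",
        4: "Consistently and independently able",
    }

    def prioritize(ratings, n_items=2):
        """Return the highest- and lowest-rated abilities, ignoring "N"."""
        rated = {code: score for code, score in ratings.items() if score != "N"}
        ordered = sorted(rated, key=rated.get)
        return {
            "strengths": ordered[-n_items:],    # highest scoring abilities
            "difficulties": ordered[:n_items],  # lowest scoring abilities
        }

    # Invented example ratings for six Section A items.
    ratings = {"PA1": 2, "PA2": 4, "PA3": 3, "PA7": 1, "LR1": "N", "LR3": 2}
    print(prioritize(ratings))
    # {'strengths': ['PA3', 'PA2'], 'difficulties': ['PA7', 'PA1']}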

Section A cognitive abilities are grouped under five subsections:

  1. Perception and attention
  2. Logical reasoning and metacognition
  3. Memory (processing information)
  4. Language and communication
  5. Learning habits and behaviors

The structure of Section A reflects our understanding of cognitive abilities as a holistic concept incorporating the interdependent intellective and affective variables.

Section B

Section B is based on Lidz's Guidelines for Observing Teacher Interactions (2003). However, the CAP's rating scale measures the learner's response to teaching strategies, rather than rating the teacher's performance. Therefore, the assessment question shifts from “How mediational is the approach of the teacher?” to “Which teaching strategies most enable the learner?” This combines the models of MLE-based dynamic assessment with the solution-focused approach, sampling for positive interactions to inform further intervention. This shift also avoids the risk of the teacher feeling judged or evaluated by an outside agency.

The items of rating scale B have been grouped under subheadings which include Formative Assessment targets:

  • Sharing the learning objective and purpose
  • Using different teaching styles
  • Developing selective attention
  • Differentiation of the task
  • Supporting memory and retrieval
  • Developing logical reasoning
  • Feedback for developing insight (metacognition), including self-assessment

TABLE 9.2 Extract from section A: The cognitive abilities of the learner.
Code | Cognitive Ability | Assessment Question | Score (Circle) | Evidence/Source
Perception and attention
PA1 | Regulation of attention | How well can the learner regulate their attention and focus and filter out distractions? | N 1 2 3 4
PA2 | Clearly perceiving visual information | How well can the learner effectively gather visual information? | N 1 2 3 4
PA3 | Clearly perceiving auditory information | How well can the learner effectively gather auditory information? | N 1 2 3 4
PA4 | Clearly perceiving kinesthetic information | How well can the learner effectively gather kinesthetic information? | N 1 2 3 4
PA5 | Perceiving and attending to spatial relationships | How well does the learner understand and use spatial relationships? | N 1 2 3 4
PA6 | Perceiving and attending to temporal relationships (sequencing) | How well does the learner understand and use temporal relationships? | N 1 2 3 4
PA7 | Noting more than one source of information at once | How easily can the learner consider more than one source of information at a time? | N 1 2 3 4
Logical reasoning and metacognition
LR1 | Understanding what to do | Does the learner understand what they have to do when presented with a problem or task? | N 1 2 3 4
LR2 | Selecting what is relevant to the task | Is the learner able to distinguish what is relevant and irrelevant to the task? | N 1 2 3 4
LR3 | Comparing items and concepts | How well can the learner compare two or more things in a systematic way? | N 1 2 3 4
LR4 | Classifying and grouping | How well can the learner put things into classes, sets, or groups? | N 1 2 3 4

TABLE 9.3 Section B rating scale.
Score | Level of response
N | Not observed/Not applicable
1 | The learner does not respond to this strategy
2 | The learner responds a little to this strategy
3 | The learner sometimes responds well to this strategy
4 | The learner responds very positively to this strategy

While the N rating and the four-point scale are used in all three sections, Section A measures abilities whereas Section B measures responses to teaching strategies (Table 9.3).

During the course of a classroom observation, a number of possible teaching strategies may not be observable. However, consulting with the teacher or other adult enables joint identification of effective teaching strategies and encourages insight and metacognitive reflection, one of the CAP's major aims.

One of the reasons that the CAP is designed to be completed by a psychologist or specialist teacher acting as consultant, rather than by the classroom teacher or assistant alone, is that the teacher or assistant cannot be both observer and observed at the same time. The consultant is slightly removed and can facilitate reflective space for the practitioner.

Another advantage of using consultation with the teacher in CAP profiling is that it allows for more flexibility in secondary school assessment, where the learner is taught by different teachers in subject-specific domains. Joint consultations can be arranged, or a teaching assistant who supports the learner across a number of subject areas can be involved. In consultation with a primary school teacher, it is often possible to discuss the use of strategies across many areas of the curriculum (Table 9.4), building on the primary school teacher's knowledge of the child across more varied learning experiences.

TABLE 9.4 Extract from section B: Response to teaching and mediation.
Code | Teaching Strategy | Response Level | Evidence/Source
Using different teaching styles
B5 | The adult uses visual props for the lesson/task | N 1 2 3 4
B6 | The adult uses auditory props such as use of voice, volume, rhythm (clapping or tapping) to engage the learner | N 1 2 3 4
B7 | The adult uses kinesthetic props such as gesture and movement to liven the impact of the lesson or task | N 1 2 3 4
Selective attention
B8 | The adult deliberately points out the important/relevant aspects of the task or lesson | N 1 2 3 4
B9 | The adult labels the elements of the lesson/task | N 1 2 3 4
B10 | The adult gives the reason for selection and prioritization (why is this particular feature important?) | N 1 2 3 4

Section C

Section C is based on the Cognitive Map and assesses the task elements shown in Table 9.5. Section C differs from A and B in that some items are descriptive and are not rated.

An additional table (Table 9.6) is provided, enabling comparison of several tasks, either within one classroom session or across different contexts.

TABLE 9.5 Extract from section C: Task analysis.
Code | Context/task analysis | Assessment Question | Description/Score (see scoring guide) | Evidence/Source
C1 | Content/subject area (describe) | What subject or topic was the task about? | (describe)
C2 | Familiarity with content and vocabulary | How familiar or novel was the content and vocabulary of the task? | N 1 2 3 4
C3 | Mode of presentation (describe) | In which mode(s) was the task presented? | (describe)
C4 | Mode of response (describe) | In which mode(s) was the learner expected to respond? | (describe)
C5 | Complexity | How complex was the task? How much information needed to be processed? | N 1 2 3 4
C6 | Abstraction | How abstract was the task? | N 1 2 3 4

TABLE 9.6 Comparison across different tasks.
Analysis | TASK 2 | TASK 3 | TASK 4
C1 Content/subject area (describe)
C2 Familiarity with content and vocabulary (score)
C3 Mode of presentation (describe)
C4 Mode of response (describe)
C5 Complexity (score)
C6 Abstraction (score)
C7 Speed required (score)
C8 Accuracy required (score)

Section D: Summary Profile

At the end of each of Sections A to C, agreement can be reached between the consultant profiler and the key adult(s) as to which rated items will be given priority in the learner's Intervention Plan. For each section, emphasis is placed on identification of strength and difficulty, as illustrated in Table 9.7a.

TABLE 9.7a Summary profile (priorities).
Section | Area of Strength | Area of Difficulty
A Cognitive abilities | Highest scoring cognitive abilities: i) ii) | Lowest scoring cognitive abilities: i) ii)
B Mediation techniques | Strategies to which the learner is most responsive: i) ii) | Strategies to which the learner is least responsive: i) ii)
C Task analysis | Aspects of the task which lead to success: i) ii) | Aspects of the task which lead to difficulty: i) ii)

Moving from Assessment to Intervention

Once Section D has been completed, the highlighted information (the priority strengths and difficulties) is entered in the intervention plan for the learner (Table 9.7b) and becomes the set of targets for cognitive change. The criterion of change is the gain of an agreed additional point or half-point, depending on the expected rate of progress for the learner, to be measured using the CAP rating scale when the plan is reviewed. The plan can be reviewed either on its own or as part of a follow-up profile at a later stage.
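As a rough illustration of this review criterion, the sketch below checks whether a follow-up rating has gained the agreed increment over the baseline. It is a hypothetical rendering for clarity; the names (target_met, increment) are invented rather than taken from the CAP materials.

    # Hypothetical sketch of the CAP review criterion: a target counts as
    # achieved when the follow-up rating gains at least the agreed
    # increment (a point or half-point) over the baseline rating.

    def target_met(baseline, review, increment=0.5):
        return (review - baseline) >= increment

    # A difficulty rated 2 ("able only with support") at baseline,
    # reviewed against an agreed half-point criterion:
    print(target_met(baseline=2.0, review=2.5))                 # True
    # The same gain measured against a full-point criterion:
    print(target_met(baseline=2.0, review=2.5, increment=1.0))  # False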

The intervention plan/IEP is not intended to record only the CAP results. At this stage the results of the consultation should lead to the specific cognitive targets identified in the CAP being integrated into the learner's IEP. Here the CAP emphasizes a shift from the traditional content-focused IEP to a more process-oriented model, reflecting that successful learning outcomes result from a combination of both content and process teaching. While teaching learning processes is vital in this approach, transfer and generalization are most effective when combined and elaborated within specific content (Brooks & Haywood, 2003; Cèbe & Paour, 2000).

TABLE 9.7b Intervention plan/individual education plan.
NAME | DATE OF PLAN | REVIEW DATE
Area of difficulty to be targeted | Target set | Strategies for intervention | Outcome/evaluation
Taken from Section A, with the current CAP score | Set the criteria for an increased score | Taken from Section B (strategies to which the learner is most responsive) and Section C (aspects of the task which lead to success) | Date target was achieved; new score and observations

THE SCORING GUIDE OF THE CAP

To explain the CAP system and to aid CAP users in making judgments when scoring cognitive abilities, responses to teaching, and task components, the Scoring Guide is an instruction manual providing:

  1. A step-by-step guide to completing Section A, with precise definitions of each cognitive ability, examples of how to score the different levels of ability, and classroom scenarios to illustrate how the abilities may appear in context. Lack of agreement about exactly what each cognitive function or ability means and how it would appear in the classroom was a frequent issue raised by pilot users, and led to the need to develop the guide to identifying and rating each item (see the studies on inter-rater reliability).
  2. A guide to Section B with examples of teacher (mediator) behaviors and strategies, serving both for observation and rating and as a reference for teachers to reflect on their own use of mediational strategies. Practical and simple explanations of mediational teaching techniques are of prime importance in guiding the teacher toward process intervention, in order to adopt a mediational teaching style (Haywood, 1993; Deutsch, 2003), which for some teachers may present a novel and challenging way of refocusing their practices.
  3. A guide to Section C with explanations and definitions of the task elements together with some classroom examples.
  4. Instructions for Section D, in bringing the information from Sections A, B, and C together to develop the learner's intervention plan.
  5. Instructions for completing follow-up profiles when reviewing progress over time. Separate record forms are provided for the initial and follow-up profiles.
  6. Worked examples of completed profiles are provided with brief case studies illustrating the use of the CAP with learners of different ages and abilities, and for different purposes.
  7. A guide to interpretation of the profile, presenting principles, guidelines, and methods for interpreting observations and formulating hypotheses about the nature of the cognitive functioning of the learner. Each cognitive ability is analyzed to reflect a number of hypotheses that may be relevant when there are low scores for that ability. Within each identified ability, possible related difficulties and abilities are suggested. Analysis of scores across sections is also discussed, where patterns of abilities, difficulties, responses to teaching techniques and task elements are related to one another. Common patterns of abilities and difficulties that may be associated with certain conditions are provided, for example, typical patterns for learners on the autistic spectrum, or for those with literacy difficulties (dyslexia). The user is cautioned that this does not provide standardized data or cluster scores, and that the CAP is not designed for stand-alone diagnostic use. Instead it forms part of a whole range of assessment information that may contribute to diagnosis of certain clinical conditions, if appropriate.

RESEARCH STUDIES OF THE CAP

The CAP has been subjected to ongoing trials and research and is being continuously developed in response to user feedback.

  • Focus groups of teachers trained in and using Instrumental Enrichment (Feuerstein, Rand, Hoffman, & Miller, 1980) in South Lanarkshire, Scotland, trialed the CAP to review the progress of their students.
  • Focus groups and interviews of educational psychologists (EPs) piloted the use of the CAP following dynamic assessment training.
  • Questionnaires and interviews with EPs in the London Borough of Haringey piloted the use of the CAP.
  • Research on the use of the CAP with teaching assistants by an educational psychologist in training at the University of East London and Hackney Educational Psychology Service.
  • Research studies carried out at the Institute of Education, London University with EPs from a range of services around the United Kingdom.

INTER-RATER RELIABILITY OF THE CAP

One of the criticisms of DA concerns the issue of reliability. Reliability of the LPAD “clinical” type of DA is considered of critical importance because it is a procedure which requires inferences based on complex information (Vaught & Haywood, 1990). Feuerstein (2002) argued that reliability measures are inappropriate for DA because the main goal is to change and modify the individual's functioning rather than to measure constant levels of performance; on this view, the goal of DA is not to look for the stability and consistency which characterize reliability, but rather for change and inconsistency. Tzuriel and Samuels (2000) point out that this argument relates to within-subject reliability, which is contradictory to the goal of change in the individual. This is a different concern, however, from the need for inter-rater reliability between two or more raters assessing the same individual in the same learning situation. Frisby and Braden (1992) and Büchel and Scharnhorst (1993) raised doubts as to whether the interpretation of the individual's performance is indicative of the actual level of the tested individual or the subjective interpretation of the tester. Tzuriel and Samuels (2000) carried out a study of the LPAD, examining inter-rater reliability of the identification of deficient cognitive functions, the level of difficulty, the types of mediation and non-intellective factors.

There have been surprisingly few studies attempting to establish reliability, despite the importance of this issue. Vaught and Haywood (1990) investigated interjudge reliability using two tests of the LPAD. The main rationale for this investigation was that without demonstrating agreement, the validity and the utility of DA are questionable. In both the Vaught and Haywood and Tzuriel and Samuels studies, there was poor agreement on the type and intensity of mediation. In all these studies the definitions of the deficient cognitive functions as described by Feuerstein were not standardized across raters. Tzuriel (1992), in his reply to Frisby and Braden, commented that without the examiner's direct contact with the learner through active attempts at modification, it is difficult to rate cognitive functions, even for experts. This would imply that rating different cognitive functions through observation only would be particularly challenging. This concern has therefore been a major focus of the research on the CAP.

The studies carried out in South Lanarkshire were conducted with the earliest experimental version of the CAP, which used a seven-point rating scale. The participants were fully trained and experienced teachers of Feuerstein's Instrumental Enrichment and were therefore familiar with the deficient cognitive functions of Feuerstein's model.

While more familiar with cognitive abilities than a “regular” classroom teacher, the FIE teachers expressed concerns with the rating of the cognitive abilities due to their lack of confidence in the objectivity of their ratings. Teachers working without trained peers to consult raised consistent concerns regarding the risk of subjectivity. Additionally, they found some difficulty in the use of a seven-point scale, as there were too many scoring options. Feedback included comments such as “I was not sure what the difference was between a score of 5 or 6.” Where similar items were rated there was sometimes a lack of internal reliability, demonstrating the confusion. This feedback led to quantitative changes in the redevelopment of the scale as a four-point scale, along with more qualitative changes in the provision of level descriptors for each score, to guide raters toward the most appropriate level judgment.

The revised version of the CAP was used for studies at the Institute of Education in London (2004–2005). Two experimental groups of psychologists (n = 40) were given an identical introduction to the CAP, followed by a video presentation, and asked to rate the cognitive abilities of a 5-year-old child seen working with a teacher and speech and language therapist. Participants were not allowed to consult with each other when rating the video scenario. Inter-rater reliability was calculated using the same method as in Tzuriel and Samuels's (2000) study, by dividing the number of agreements by the total number of agreements and disagreements. Two levels of agreement were calculated: one was an exact rating (perfect agreement) and the second was agreement within half a point of the result (±0.5 of the score). One of the considerations in the rating scale was the N category, designed to be used when the task did not permit observation of a specific cognitive ability or the rater was unsure how to score. Where the modal value was N (i.e., the N category was the most popular and consistent response), this would seem to indicate efficient use of N to indicate inability to rate the cognitive ability in that particular context. Where N was the modal value, it was impossible to carry out the second analysis (i.e., ±0.5), since N has no numerical value.
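The agreement statistic described above is straightforward to express in code. The following sketch is our reconstruction for illustration, not the original analysis script: it computes agreements divided by agreements plus disagreements at the exact level and, where the modal rating is not N, at the ±0.5 level. The function name and the sample ratings are invented.

    # Illustrative reconstruction of the agreement calculation:
    # agreements / (agreements + disagreements), measured against the
    # modal rating, at an exact level and within +/-0.5 of the mode.
    from collections import Counter

    def agreement(ratings, tolerance=0.0):
        """Percent agreement with the modal rating for one CAP item.

        Returns None for the +/-0.5 analysis when the mode is "N",
        since "N" has no numerical value to measure distance from.
        """
        mode, _ = Counter(ratings).most_common(1)[0]
        if tolerance > 0 and mode == "N":
            return None
        if tolerance == 0:
            agree = sum(r == mode for r in ratings)
        else:
            agree = sum(r != "N" and abs(r - mode) <= tolerance for r in ratings)
        return 100 * agree / len(ratings)

    # Invented ratings from eight raters for a single item.
    item = [3, 3, 2.5, 3, 4, 3, "N", 3]
    print(agreement(item))                 # exact agreement: 62.5
    print(agreement(item, tolerance=0.5))  # within half a point: 75.0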

When rating the video scenario there was large variance across items for inter-rater reliability, from 36 percent to 100 percent agreement.

While the results were certainly influenced by the case scenario, they provided useful information where levels of inter-rater agreement were low (Figure 9.3). These results could indicate that more clarity is required.

The experimental version of the CAP used for these studies divided the cognitive abilities according to the Input, Elaboration, and Output phases of the LPAD (Feuerstein et al., 1995). The cognitive abilities rated with the least agreement appeared to fall into two categories:

  • Abilities that are included in the elaboration phase of the mental act according to Feuerstein. These abilities are by their very nature internal processes, manifested only in the behavioral response stage of the act. There was therefore more room for differences of opinion, since these processes had to be inferred from observed behavior rather than directly witnessed.
  • Abilities that involve processes that can be found at more than one phase of the mental act, for example, “Using a plan.” Since planning, or its opposite, impulsive behavior, is difficult to locate at a specific phase of the task by observation only, this may account for lower levels of agreement. A very impulsive learner may rush throughout the task, and it may be difficult for the observer to locate this tendency in just one phase. Similarly, the cognitive act of “comparison” resulted in low agreement of less than 50 percent, while the act of “using the language of comparison” was rated with over 80 percent agreement. This showed that the output, i.e., the use of comparative language, was observable and resulted in higher levels of inter-rater reliability.

The combined results of the South Lanarkshire and Institute of Education studies suggested much lower agreement on items where the phase of the act of thinking (input, elaboration, output) is not readily identified in the observation and consultation approach, as opposed to within a dynamic assessment where the assessor may intervene in the task. Therefore, in the subsequent version of the CAP, the cognitive abilities were no longer organized according to the phase model but were regrouped under the areas of mental processing activity described by Luria. If, however, difficulties are clearly identifiable at a specific phase of thinking, intervention can be targeted at that phase when developing an intervention plan for the learner.

THE USER-FRIENDLINESS OF THE CAP

The following questions were addressed:

  • How user-friendly is the CAP in its present form?
  • What can be done to improve its clarity and accessibility?

Participants were asked to give a “best fit” rating to Sections A and B2 for their user-friendliness, and also to give an overall user-friendliness score (Table 9.8), in a questionnaire administered after rating the learner seen on video.

It was hypothesized that there would be a difference in levels of confidence (as expressed by higher scores on user-friendliness) between experienced DA users and less experienced users. The results of the user-friendliness ratings were therefore analyzed according to length of prior training in DA. The participating psychologists were asked to indicate their level of experience and prior training on their questionnaires (Table 9.9).

TABLE 9.8 User-friendliness ratings.
0 | No, I found it almost impossible.
1 | No, it was quite difficult to follow.
2 | The CAP was not too difficult, but I need some help to complete it and understand the manual.
3 | Yes, I found the manual/scoring guide helped me to complete the scoring and it was quite easy.
4 | I found the CAP scoring sheet and scoring guide extremely user-friendly.

TABLE 9.9 Groups according to prior training.
O | No/Zero training in DA.
T | Taster group. This consisted of a short in-service talk or one or two days of awareness training.
S | Short training. This consisted of formal training in DA but a short course, for example, four days of DA, or a combination of more than one short course.
F | Full training. This consisted of lengthy training in DA or LPAD (eight days minimum).

When the user-friendliness ratings for different groups of cognitive abilities were compared (the more intellective ones in contrast to the affective and behavioral variables), greater ease was found in rating the latter, which could be due to the more observable nature of behavioral factors.

When compared according to “training group” (level of prior training in DA), psychologists who had received the most training reported the highest ratings for user-friendliness, particularly when giving an overall rating for the whole profile (Figure 9.4). Conversely, the “No training” group reported the least ease in completing either section or the overall profile. This confirmed that the extent of prior training was an important variable in the understanding and ease with which the profile could be completed, and has implications for the training needs of new users.

As a result of the pilot and experimental studies, changes to Section A abilities were made to make items more clearly differentiated and no longer linked to specific phases. Changes were made to the content and layout of the Scoring Guide of the CAP manual, giving more explanation, definition and typical classroom examples.

DYNAMIC ASSESSMENT TRAINING AS BACKGROUND KNOWLEDGE FOR THE COMPLETION OF CAP

The combined feedback from experimental focus groups and pilot users also addressed practical issues of the quantity of training required to complete the profile.

This was investigated by examining possible differences between training groups in:

  • The average level of agreement (inter-rater reliability) of ratings of cognitive abilities.
  • The number of items rated with high (80+ percent) and moderate (65–79 percent) levels of agreement.
  • Possible differences in the use of the N Score.

TABLE 9.10 Average level of agreement between training groups (percentages).
Training group | Average percentage agreement over all Section A abilities | Average percentage agreement over all Section B abilities
Zero | 58 | 69
Taster | 65 | 71
Short | 67 | 66
Full | 66 | 78

Few differences were found when the level of agreement was averaged for each group (Table 9.10). However, the “zero training” group had the lowest average inter-rater reliability level of the four groups. When Section B percentage agreements were averaged, the full training group showed the highest level of agreement but overall differences were small.

These small differences may be explained in different ways. First, the artificial nature of the experimental situation, that is, a video presentation without the benefit of consultation or background information, may account for the small size of the effect. Second, the amount of training already received may not be as influential a factor as the amount of experience of the use of dynamic assessment following training. Both possibilities will be explored in further studies and may have implications for current training models for DA.

Analysis of the N scores for Sections A and B was carried out. A possible relationship between the number of N scores awarded by the group and the group's level of training was explored. The reasons for awarding N scores include:

  • The particular cognitive ability is not observable in that context.
  • The scorer is unsure of what they are seeing.
  • The definition of a cognitive ability provided in the CAP may be unclear.
  • The introductory training provided on the CAP experimental days was insufficient.

TABLE 9.11 Number of N scores among training groups.
Group | Lowest number of N scores for group (L) | Highest number of N scores for group (H) | Difference between H and L (range)
Zero | 6 | 21 | 15
Taster | 4 | 18 | 14
Short | 9 | 24 | 15
Full | 9 | 18 | 9

In the first case it would be expected that an N score would be awarded at a high level of reliability. This was evident for a number of cognitive abilities, for example, “Responsiveness to Peers.” In the second possibility, the cognitive abilities that showed more inconsistent ratings were examined. It was hypothesized that the more inconsistent N ratings may indicate uncertainty by less experienced DA users, that is, the higher the level of training, the lower the number of N scores (Table 9.11).

Differences were found between the training groups when looking at the range of scores. The range was measured by the difference between the highest number of N scores and the lowest number of N scores for that group (Figure 9.5). The smallest range of scores was found for the most experienced group (full training), perhaps indicating more certainty in scoring, whereas the other groups show larger differences in assigning N scores.
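A minimal sketch of this range analysis follows, assuming each rater's profile has been reduced to a count of N scores and raters are grouped by training level. The individual counts below are invented, except that each group's minimum and maximum match Table 9.11.

    # Hypothetical sketch of the N-score range analysis: for each training
    # group, range = highest minus lowest count of N scores awarded by any
    # rater in the group. Middle values are invented; the minima and maxima
    # follow Table 9.11.

    n_counts_by_group = {
        "Zero":   [6, 12, 21, 9],
        "Taster": [4, 18, 7],
        "Short":  [9, 24, 15, 11],
        "Full":   [9, 18, 10],
    }

    for group, counts in n_counts_by_group.items():
        print(group, "range:", max(counts) - min(counts))
    # Zero range: 15, Taster range: 14, Short range: 15, Full range: 9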

Small or insignificant differences between the average number of N scores for the groups might indicate that the training day itself was insufficient to make a real difference between psychologists with different levels of prior training in DA. This conclusion, based on a one-day presentation of the CAP, is supported by comments from users in all categories. A one-off exposure to an isolated piece of video, itself an unnatural situation, would for most users, even those with prior experience in DA, be insufficient to lead to confident use of the profile. There was insufficient time to introduce and practice using every section of the CAP. Comments by many of the participants indicated that there was a need for more initial training in the use of the CAP, and this was true for more experienced DA users as well as those who were less familiar with the model. A typical comment was “I felt that scoring the CAP would get easier with more practice at what aspects to look for when assessing pupils' cognitive functions and learning styles.” This comment came from a member of the short training group, who scored the CAP's user-friendliness as 4 on each section and 3 overall, but nevertheless expressed the need for more exposure and practice.

Taken together, the results showed that some areas of the CAP were difficult to rate irrespective of training, and the specific video situation was more responsible for the results than individual differences between the participants. Therefore, more clarity was provided in the Scoring Guide (as also discussed in response to earlier research findings). Potential users may benefit from a variety of introductory training options to reflect the range of previous experience and to provide access to different professional groups of new CAP users.

SUMMARY OF RESULTS

The results obtained can be summarized as follows:

  • User-friendliness was moderate, with the majority of ease-of-use ratings at 2 or 3.
  • Levels of inter-rater reliability were negatively affected by the use of the phase model, particularly when the cognitive ability could appear at more than one phase.
  • The more behaviorally observable the ability, the higher the level of inter-rater reliability.
  • Inter-rater reliability was higher for the group that had received full DA training.
  • It was consistently felt that one day of training was too little.

These preliminary results should be viewed with caution, for these reasons:

  • Data was obtained using one video sequence which did not show the child in a classroom context.
  • The sample size of educational psychologists was relatively small. Division into subgroups according to levels of training, experience and exposure to DA reduced each sample size further.
  • Some variations in inter-rater reliability might be a product of lack of clarity about the definition of certain cognitive abilities. It was apparent from the data that some cognitive abilities were consistently difficult to rate.

APPLICATIONS OF THE USE OF THE CAP

Work on three main applications of the CAP was carried out by an experienced DA practitioner, Dr. Jane Yeomans, an educational psychologist in Sandwell and Dudley in the Midlands, United Kingdom.

The Use of the CAP for Assessment Summaries

The first study was carried out in a mainstream primary school with additional resourcing to meet the needs of children with specific language impairment (SLI).

In this case, the principal use of the CAP was to summarize a number of individual assessments. The educational psychologist was asked to assess five SLI pupils. Individual DA was carried out with all five pupils, with the CAP used to summarize the assessment data. This process identified three common deficient cognitive functions and two areas of difficulty in relation to response to teaching and mediation.

A group intervention plan was drawn up to target these needs and school staff were trained in the use of mediated learning. Over the next six months, the class was observed and feedback given to staff about their use of mediated learning techniques.

At the end of the period the CAP's rating scale was used to assess progress over time. As the class teacher now had some familiarity with the assessment concepts, she could judge the current status of each pupil in relation to the targeted cognitive abilities. The progress of one pupil was highlighted as a significant cause for concern. Another pupil, on the other hand, had made significant gains and was unlikely to require further targeted intervention.

This small-scale study illustrated a number of uses of the CAP:

  • First, for managing large amounts of assessment data relating to individual pupils.
  • Second, for monitoring progress over time without the repeat of time-consuming individual assessments.
  • Third, for use with professionals who do not have a specific professional background in the practice of DA, but who have put MLE into practice in their classrooms.
  • Fourth, to identify pupils who need additional interventions, or for whom specific interventions can be discontinued.

The CAP as a Consultation Tool

In this example, the classroom teacher was unfamiliar with cognitive education and the subject of the consultation was a boy aged eight who was experiencing significant difficulties in learning the basic skills of literacy and numeracy, together with difficulties associated with a diagnosis of dyspraxia. The referral information given to the psychologist, which related to unfinished work and low self-esteem, informed a decision to focus on completing the first parts of Sections A and B, and to look at the learner's cognitive abilities and his response to teaching and mediation.

The psychologist stated that “The process of completing the CAP was relatively straightforward. The wording of the questions was accessible and the additional information given in the scoring guide helped to give enriched examples of what was meant by each statement. A great deal of information was gathered about the pupil.”

The ratings for each section were used for a solution-focused approach, in which the client is asked to “describe life at a higher point on the scale” (Ajmal & Rees, 2001). This technique was used in the consultation in order to elicit from the teacher the differences she might see with a small improvement in the cognitive ability being discussed. At this point in the consultation, the teacher's lack of knowledge of cognitive education proved to be a sticking point. The teacher was unable to think about changes in relation to process skills, as she was very tied to the notion of outcomes. Another factor affecting further progress of intervention planning was that the teacher did not seem motivated to seek solutions jointly; she clearly expected the educational psychologist to supply her with answers, “tips for teachers,” and was disappointed when these did not materialize. Despite these difficulties, the CAP was useful in providing a structured information-gathering tool without the initial need for a lengthy individual assessment.

The CAP as an Observation Tool

The third example is of a classroom observation. The CAP was used to observe a 7-year-old pupil during one of his usual maths classes. The observation was followed up by some individual assessment. The referral information about this pupil indicated difficulties with motor skills, attention and concentration, and little progress in literacy and numeracy. Sections A and C of the CAP were completed during the classroom observation. One of the outcomes of the subsequent assessment suggested that the pupil was able to use logical and inferential thought, provided that mediation was given to focus attention and reduce impulsivity. The psychologist stated that the use of the CAP in conjunction with the individual assessment outcomes led to insights into his difficulties that might not have been apparent had the CAP not been used to structure the observation.

FUTURE DIRECTIONS

Work on the CAP to date has sampled a few of a potentially broader range of research and application issues. Next steps in the development of the CAP may include:

  • Further reliability trials using a wider range of video sequences showing individuals in a range of contexts.
  • Identification and widening of professional groups as direct CAP users for whom the CAP is accessible and user-friendly (psychologists, teachers, teaching assistants, therapists).
  • Identification of effective training techniques for the CAP, comprising initial training and post-training mentoring, including the use of distance and e-learning.
  • Identification of effective methods of dissemination of classroom methodology to other professionals (indirect CAP users) who have little or no exposure to DA or to cognitive education generally.
  • Identification of further information that can be provided by the CAP in order to support and guide interventions.
  • Comparative studies of outcomes for learners who have been profiled by the CAP as opposed to matched controls.
  • Validity studies, comparing the outcomes of CAP profiling with other forms of assessment.
  • Validity studies of the rationale for the inclusion of the various components of the CAP. For example, longitudinal comparative studies of which components have the most predictive validity.
  • Construct validity studies cross referencing the ratings on the CAP and other means of measuring specific cognitive abilities such as working memory. Many referrals to educational psychologists involve difficulties with working memory (Alloway et al., 2005).

In an era of rapid technological change where much of what is learned is obsolete in a relatively short space of time, the emphasis in education must move from a focus on content and product to a focus on the processes of thinking and problem solving. These processes can empower the learner to become independent, flexible and adaptable in order to meet the challenges of change. The CAP's focus on identifying and addressing process strengths and deficiencies can serve to orientate professionals towards explicit teaching of problem solving and thinking skills. These processes not only impact on curriculum skills but also on lifelong learning related to social, work and community environments.

References

Alloway, T. P., Gathercole, S. E., Adams, A., & Willis, C. (2005). Working memory abilities in children with special educational needs. Educational and Child Psychology, 22(4), 56–67.

Ajmal, Y., & Rees, I. (2001). Solutions in Schools. London: BT Press.

Assessment Reform Group. (1999). Assessment for Learning: Beyond the Black Box. Cambridge: University of Cambridge School of Education.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.

Brooks, P. H., & Haywood, H. C. (2003). A pre-school mediational context: The bright start curriculum. In A. S. H. Seng, L. K. H. Pou, & O. S. Tan (Eds.), Mediated Learning Experience with Children: Applications Across Contexts (pp. 98–132). Singapore: McGraw-Hill Education.

Büchel, F. P., & Scharnhorst, U. (1993). The learning potential assessment device (LPAD): Discussion of theoretical and methodological problems. In J. H. M. Hamers, K. Sijtsma, & A. J. Ruijssenaars (Eds.), Learning Potential Testing (pp. 83–111). Amsterdam: Swets and Zeitlinger.

Cèbe, S., & Paour, J. L. (2000). Effects of cognitive education in kindergarten on learning to read in the primary grades. Journal of Cognitive Education and Psychology, 1(2), 177–200, www.iacep.coged.org.

Deutsch, R., & Reynolds, Y. (2000). The use of dynamic assessment by educational psychologists in the UK. Educational Psychology in Practice, 16, 311–331.

Deutsch, R. (2003). The meaning of mediation: Varying perspectives. International Journal of Cognitive Education and Psychology, 3(1), 29–46, www.iacep.coged.org/journal.

Feuerstein, R. (1979). The Dynamic Assessment of Retarded Performers. Glenview, IL: Scott, Foresman and Company/University Park Press.

Feuerstein, R., Feuerstein R. S., Falik, L. H., & Rand, Y. (2002). The Dynamic Assessment of Cognitive Modifiability. Jerusalem: ICELP Press.

Feuerstein, R., Klein, P. S., & Tannenbaum, A. J. (Eds.). (1991). Mediated Learning Experience (MLE): Theoretical, Psychosocial and Learning Implications. London: ICELP/Freund Publishing.

Feuerstein, R., Rand, Y., Haywood, H. C., Kyram, L., & Hoffman, M. (1995). LPAD Examiner's Manual: New Experimental Version. Jerusalem: ICELP.

Feuerstein, R., Rand, Y., Hoffman, M. B., & Miller, R. (1980). Instrumental Enrichment. Baltimore, MD: University Park Press.

Frisby, C. L., & Braden, J. P. (1992). Feuerstein's dynamic assessment approach: A semantic, logical and empirical critique. Journal of Special Education, 26(3), 281–301.

Haywood, H. C. (1993). A mediational teaching style. International Journal of Cognitive Education and Mediated Learning, 3(1), 27–38.

Haywood, H. C., & Lidz, C. S. (2007). Dynamic Assessment in Practice: Clinical and Educational Applications. Cambridge: Cambridge University Press.

Lidz, C. S. (1991). Practitioner's Guide to Dynamic Assessment. New York: The Guilford Press.

Lidz, C. S. (2003). Early Childhood Assessment. New Jersey: John Wiley and Sons, Inc.

Lidz, C. S., & Elliott, J. G. (Eds.). (2000). Dynamic Assessment: Prevailing Models and Applications. New York: Elsevier Science Inc.

Luria, A. R. (1973). The Working Brain: An Introduction to Neuropsychology. New York: Basic Books.

Luria, A. R. (1980). Higher Cortical Functions in Man. New York: Basic Books.

McGuinness, C. (1999). From Thinking Skills to Thinking Classrooms. London: HMSO.

Monsen, J., Graham, B., Frederickson, N., & Cameron, R. J. (1998). Problem analysis and professional training in educational psychology. Educational Psychology in Practice, 13(4), 234–249.

National Curriculum Guidelines (2001). DfES: www.standards.dfes.gov.uk/thinkingskills/guidance.

Tzuriel, D. (1992). The dynamic assessment approach: A reply to Frisby and Braden. Journal of Special Education, 26(3), 302–324.

Tzuriel, D. (2001). Dynamic Assessment of Young Children. New York: Kluwer Academic/Plenum Publishers.

Tzuriel, D., & Haywood, H. C. (1992). The development of interactive-dynamic approaches to assessment of learning potential. In H. C. Haywood & D. Tzuriel (Eds.), Interactive Assessment (pp. 3–30). New York: Springer-Verlag.

Tzuriel, D., & Samuels, M. (2000). Dynamic assessment of learning potential: Inter-rater reliability of deficient cognitive functions, types of mediation, and non-intellective factors. Journal of Cognitive Education and Psychology, 1(1), 41–64.

Vaught, S. R., & Haywood, H. C. (1990). Interjudge agreement in dynamic assessment: Two instruments from the learning potential assessment device. The Thinking Teacher, 5(2), 1–13.

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Vygotsky, L. S. (1986). Thought and Language. Cambridge, MA: The MIT Press.

Wagner, P. (2000). Consultation: Developing a comprehensive approach to service delivery. Educational Psychology in Practice, 16(1), 9–18.

1 For a comprehensive overview of the many dynamic assessment models in use, see Lidz and Elliott (2000).

2 It should be noted that the Section B cited in the research studies is now part of Section A. The current Section B was not evaluated in these studies.
