National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) was mandated by Congress in 1969 to provide accurate measurement and reporting of levels and trends in the academic achievement of U.S. elementary and secondary students. NAEP has met this goal through regular testing of students in selected subjects, most often at the fourth, eighth, and twelfth grades. For most NAEP reporting, student achievement in the most recent assessment is compared with achievement in previous years in the same subject and grade to identify changes and trends.

NAEP provides a continuing assessment of the core subjects of reading, mathematics, writing, and science. National and state assessments are conducted every other year in reading and mathematics and every four years in science and writing. NAEP also tests achievement in other subjects that are widely taught in the schools, such as U.S. history, civics, geography, economics, and the arts, but these subjects are assessed less often and sometimes at only one grade. In addition, NAEP conducts special studies on topics such as long-term achievement trends, the influence of course-taking on outcomes, assessment results in urban schools, and the effects of educational technology.

NAEP policy setting, administration, and implementation are conducted by several organizations. NAEP policy is set by the National Assessment Governing Board (NAGB), whose members are appointed by the secretary of education and include governors, educators, school administrators, state legislators, and members of the general public. NAGB selects the subject areas for assessment, develops the general objectives and specifications for each assessment, prepares guidelines for reporting, and is responsible for all policy decisions. NAEP administration is the responsibility of the National Center for Education Statistics (NCES), which oversees assessment development and scoring. NAEP implementation is carried out through contracts, grants, and cooperative agreements with qualified organizations and individuals. These contractors define the material to be assessed within each subject and at each grade level, prepare the assessment instruments, select the school and student samples, score student responses, analyze the data, and write the NAEP reports.

The most important part of the assessment process is the development of a content framework, which defines what students should know and be able to do at each assessed grade, how assessment items should be written to address that content, and what should be reported. The framework is prepared by content experts, teachers, curriculum specialists, policymakers, and public representatives and is then given final review and approval by NAGB.

The assessments include multiple-choice items and constructed-response items that require students to give short or extended written responses. NAEP also collects background data through questionnaires completed by students, their teachers, and school administrators. Some data, such as student demographics (gender, race and ethnicity, and region), are standard across all NAEP assessments. Background information is also gathered on factors that may influence academic performance, such as time spent on homework, instructional practices, and teacher background. Additional questions may relate directly to the subject being assessed; in economics, for example, students might be asked whether they have taken an economics course during high school.

NAEP uses probability sampling to select a representative group of students to participate in the assessment. The sample for each state averages one hundred schools and three thousand students per grade level in each subject. The sample is large enough to produce reliable and valid national results for each grade and for subgroups of students defined by characteristics such as gender, race and ethnicity, eligibility for a lunch subsidy, and region. State-level results are also reported for reading, mathematics, writing, and science.
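A two-stage probability sample of this kind can be sketched in a few lines of code. The example below is only an illustration, not NAEP's operational design: the sampling frame, enrollments, and sample sizes are invented, schools are drawn with probability proportional to enrollment (with replacement, for brevity), and students are then drawn at random within each selected school.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: 500 schools with invented grade-8 enrollments.
schools = [{"id": s, "enrollment": random.randint(30, 400)} for s in range(500)]

def sample_schools_pps(frame, n_schools):
    """First stage: draw schools with probability proportional to enrollment.
    Drawing with replacement keeps the sketch short; operational designs
    typically use systematic PPS sampling without replacement."""
    weights = [school["enrollment"] for school in frame]
    return random.choices(frame, weights=weights, k=n_schools)

def sample_students(school, n_per_school=30):
    """Second stage: simple random sample of students within a school."""
    roster = [f"{school['id']}-{i}" for i in range(school["enrollment"])]
    return random.sample(roster, min(n_per_school, len(roster)))

selected = sample_schools_pps(schools, n_schools=100)
sample = [student for school in selected for student in sample_students(school)]
print(f"{len(selected)} schools selected, {len(sample)} students sampled")
```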

NAEP assessments also use item sampling of subject-matter content. In item sampling, each student is given only a sample of the full set of test items. This design allows NAEP to administer a large number of items, and because the assessment blocks are linked, valid and reliable achievement results can be constructed for representative samples of students in each subject area and grade level. By law, only the overall results for a representative sample of students are reported; given this design, individual student results cannot be reported.
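The booklet-spiraling logic behind item sampling can likewise be illustrated with a short sketch. Everything here is hypothetical: the number of blocks, the items per block, and the two-block booklet design are assumptions chosen for simplicity, and the code shows only the general idea of giving each student a linked subset of the item pool.

```python
from itertools import combinations, cycle

# Hypothetical item pool: 6 blocks of 10 items each (60 items in all).
blocks = {chr(ord("A") + b): [f"item_{b}_{i}" for i in range(10)] for b in range(6)}

# Each booklet pairs two blocks, and every pair of blocks appears in some
# booklet; the shared blocks are what link the booklets so that results can
# be placed on a common scale.
booklets = list(combinations(blocks, 2))  # 15 distinct two-block booklets

def assign_booklets(student_ids):
    """Spiral booklets through the student sample so each student
    answers only a fraction of the full item pool."""
    rotation = cycle(booklets)
    return {sid: next(rotation) for sid in student_ids}

students = [f"student_{i}" for i in range(3000)]
assignment = assign_booklets(students)

items_per_student = 2 * len(blocks["A"])
total_items = sum(len(items) for items in blocks.values())
print(f"Each student answers {items_per_student} of {total_items} items.")
print("First assignment:", assignment["student_0"])
```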

Subject-matter results are reported by averages and percentiles and by achievement levels. The primary means of reporting are three achievement levels defined by NAGB. Students at the basic level demonstrate partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade. Students at the proficient level demonstrate solid academic performance for each grade assessed. These students demonstrate competency over challenging subject matter, including subject-matter knowledge, application of such knowledge to real-world situations, and analytical skills appropriate to the subject matter. Students at the advanced level demonstrate superior performance.

The scores and achievement levels are developed independently for each subject; thus the results cannot be compared across subjects. In general, however, the results for subjects show that most students score either below or at the basic level, far fewer score at the proficient level, and very few score at the advanced level. For example, the 2005 twelfth-grade science assessment reported the following results for students: below basic, 32 percent; basic, 39 percent; proficient, 26 percent; and advanced, 3 percent.
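The mechanics of this kind of reporting can be illustrated with a short sketch that classifies simulated scale scores against hypothetical cut scores and tabulates the share of students at each achievement level. The scale, the cut scores, and the score distribution are all invented for the example; NAGB sets the actual cut scores separately for each subject and grade.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical cut scores on an arbitrary 0-300 scale; NAEP's actual cut
# scores are set by NAGB and differ by subject and grade.
CUTS = {"Basic": 140, "Proficient": 180, "Advanced": 220}

def achievement_level(score):
    """Map a scale score to one of the four reporting categories."""
    if score >= CUTS["Advanced"]:
        return "Advanced"
    if score >= CUTS["Proficient"]:
        return "Proficient"
    if score >= CUTS["Basic"]:
        return "Basic"
    return "Below basic"

# Simulated scale scores for an illustrative sample of 3,000 students.
scores = [random.gauss(150, 35) for _ in range(3000)]
counts = Counter(achievement_level(s) for s in scores)

for level in ("Below basic", "Basic", "Proficient", "Advanced"):
    print(f"{level:<12} {100 * counts[level] / len(scores):5.1f}%")
```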

NAEP results, including scale scores and achievement levels for subgroups of students as well as background data related to student achievement, are used in a wide variety of research studies. The NCES provides training on their use for primary or secondary analyses in education research and makes the NAEP data available to researchers, alone or in combination with other student and school data sets. Studies have examined how student achievement is affected by students' prior academic work, their experiences outside of school, class size, and teacher characteristics. In addition, student, teacher, and school factors have been investigated to identify their effects on the achievement of ethnic, gender, and regional groups.

Despite the money and effort expended to develop accurate measures of achievement, NAEP is not without critics. Measured levels of achievement, though not trends, may be understated because of the voluntary nature of the assessment and students' limited motivation to score well, especially when the assessment is administered to graduating seniors. The measures are aggregates at the national or state level; no individual or school-specific results are reported. NAEP may also create incentives for establishing a national curriculum in the schools.

SEE ALSO National Family Health Surveys; National Longitudinal Study of Adolescent Health; National Longitudinal Survey of Youth; Panel Study of Income Dynamics; Surveys, Sample

BIBLIOGRAPHY

National Center for Education Statistics. 2005. The Nation's Report Card: An Introduction to the National Assessment of Educational Progress (NAEP). Washington, DC: U.S. Government Printing Office.

Stephen Buckles

William B. Walstad
