Chapter 4 Test Design

4.1 Introduction

The intent of this chapter is to show how the assessment design supports the purposes of Smarter Balanced assessments. Test design entails developing a test philosophy (i.e., Theory of Action); identifying test purposes; and determining the targeted examinee populations, test specifications, item pool design, and other features (Schmeiser & Welch, 2006). The Smarter Balanced Theory of Action, test purposes, and the targeted examinee population were outlined in the Overview and Chapter 1 of this report.

4.2 Evidence-Centered Design in Constructing Smarter Balanced Assessments

Evidence-centered design (ECD) is an approach to the creation of educational assessments in terms of reasoning about evidence (arguments) concerning the intended constructs. ECD begins with identification of claims or inferences users want to make concerning student achievement. Evidence needed to support those claims is then specified, and finally, items/tasks capable of eliciting that information are designed (Mislevy et al., 2003). Explicit attention is paid to the potential influence of unintended constructs. ECD accomplishes this in two ways. The first is by incorporating an overarching concept of assessment as an argument from imperfect evidence. This argument makes explicit the claims (the inferences that one intends to make based on scores) and the nature of the evidence that supports those claims (Hansen & Mislevy, 2008; Mislevy & Haertel, 2006). The second is by distinguishing the activities and structures involved in the assessment enterprise in order to exemplify an assessment argument in operational processes. By making the underlying evidentiary argument more explicit, the framework makes operational elements more amenable to examination, sharing, and refinement. Making the argument more explicit also helps designers meet diverse assessment needs caused by changing technological, social, and legal environments (Hansen & Mislevy, 2008; Zhang et al., 2009). The ECD process entails five types of activities, or layers, of assessment. The activities focus on 1) the identification of the substantive domain to be assessed; 2) the assessment argument; 3) the structure of assessment elements such as tasks, rubrics, and psychometric models; 4) the implementation of these elements; and 5) the way they function in an operational assessment, as described below.

  • Domain Analysis. In this first layer, domain analysis involves determining the specific content to be included in the assessment. Smarter Balanced uses the state standards as its content domain for ELA/literacy and mathematics. Domain analysis was conducted by the developers of the state standards, who first developed college and career readiness standards, to address what students are expected to know and be able to do by the time they graduate from high school. This was followed by development of the K–12 standards, which address expectations for students in elementary through high school.
  • Domain Modeling. In domain modeling, a high-level description of the overall components of the assessment is created and documented. For Smarter Balanced, the components include computer-adaptive summative and interim assessments in ELA/literacy and mathematics. The domain framework was developed by organizing the state standards into domain areas that form the structure of test blueprints and reporting categories. This overall structure was created in the course of Smarter Balanced content specification development.
  • The Conceptual Assessment Framework. Next, the conceptual assessment framework is developed. For Smarter Balanced, this step was accomplished in developing the Smarter Balanced content specifications, which identify major claim structure, targets within claims, and the relationship of those elements to underlying content of the state standards. In this step, the knowledge, skills, and abilities to be assessed (i.e., intended constructs, targets of assessment); the evidence that needs to be collected; and the features of the tasks that will elicit the evidence are specified in detail. Ancillary constructs that may be required to respond correctly to an assessment task but are not the intended target of the assessment are also specified (e.g., reading skills in a mathematics assessment). By identifying any ancillary knowledge, skills, and abilities (KSAs), construct-irrelevant variance can be identified a priori and minimized during item and task development. Potential barriers created by the ancillary KSAs can be removed or their effects minimized through the provision of appropriate access features. The item and task specifications describe the evidence required to support claims about the assessment targets and also identify any ancillary constructs.
  • Implementation. This layer involves the development of the assessment items or tasks using the specifications created in the conceptual assessment framework just described. In addition, scoring rubrics are created, and the scoring process is specified. Smarter Balanced items, performance tasks, and associated scoring rubrics were developed starting in the spring of 2012.
  • Delivery. In this final layer, the processes for administration and reporting are created. The delivery system describes the adaptive algorithm, collection of student evidence, task assembly, and presentation models required for the assessment and how they function together. The ECD elements chosen lead to the best evaluation of the construct for the intended test purposes.

4.3 Content Structure

In developing and maintaining a system of assessments, the goal of Smarter Balanced is to ensure that the assessment’s measurement properties reflect industry standards for content, rigor, and performance. A key step in this direction is to ensure that the Smarter Balanced assessments are aligned with the state standards. Figure 4.1 briefly encapsulates the Smarter Balanced content structure.


Figure 4.1: Components of Smarter Balanced Test Design

The state standards are the content standards in ELA/literacy and mathematics that many states have adopted. Because the state standards were not specifically developed for assessment, they contain extensive rationale and information concerning instruction. Therefore, adopting previous practices used by many state programs, Smarter Balanced content experts produced content specifications in ELA/literacy and mathematics that distill assessment-focused elements from the state standards (Smarter Balanced, 2017b, 2017d). Item development specifications (https://contentexplorer.smarterbalanced.org/test-development) are then based on the content specifications. Each item is aligned to a specific claim and target and to a state standard.

Within each of the two subject areas in grades 3-8 and high school, there are four broad claims. Within each claim, there are a number of assessment targets. The claims in ELA/literacy and mathematics are given in Table 4.1.

Table 4.1: CLAIMS FOR ELA/LITERACY AND MATHEMATICS
Claim ELA/Literacy Mathematics
1 Reading Concepts and Procedures
2 Writing Problem-Solving
3 Speaking/Listening Communicating Reasoning
4 Research Modeling and Data Analysis

Currently, only the listening part of ELA/literacy claim 3 is assessed. In mathematics, claims 2 and 4 are reported together as a single subscore, so there are only three reporting categories for mathematics, but four claims.

Because of the breadth in coverage of the individual claims, targets within each claim were needed to define more specific performance expectations. The relationship between targets and state standards elements is made explicit in the Smarter Balanced content specifications (Smarter Balanced, 2017b, 2017d).

The Smarter Balanced item and task specifications (Smarter Balanced, 2015b) comprise many documents, all of which are based on the Smarter Balanced content specifications. These documents provide guidance for translating the Smarter Balanced content specifications into actual assessment items. In addition, guidelines for bias and sensitivity (Smarter Balanced, 2022a), accessibility and accommodations (Smarter Balanced, 2016b, 2023c), and style (Smarter Balanced, 2015c) help item developers and reviewers ensure consistency and fairness across the item bank. The specifications and guidelines were reviewed by member states, school districts, higher education representatives, and other stakeholders. The item specifications describe the evidence to be elicited and provide sample task models to guide the development of items that measure student performance relative to the target.

4.4 Summative Assessment Blueprints

Test specifications and blueprints define the knowledge, skills, and abilities intended to be measured on each student’s test event, and explain how skills are sampled from a set of content standards (i.e., the state standards). Specifically, a test blueprint is a formal document that guides the development and assembly of an assessment by explicating the following types of essential information:

  • Content (claims and assessment targets) that is included for each assessed subject and grade
  • Relative emphasis of content standards generally indicated as the number of items or percentage of points per claim and assessment target
  • Depth of knowledge (DOK) required by test items, indicating the complexity of item types for each claim and assessment target
  • Additional rules or specifications needed to administer the test

The Smarter Balanced summative blueprints were developed with broad input from member states, partners, and stakeholders, and reflect the depth and breadth of the performance expectations of the state standards. Some innovative features of the Smarter Balanced blueprints are: a) the inclusion of both computer adaptive (CAT) and performance task (PT) components, and b) the provision of a variety of both machine-scored and human-scored items and response types.

The use of CAT methodologies helps ensure that students across the range of proficiency have an assessment experience with items well targeted to their skill level. CAT tests are also more efficient because they provide a higher level of score precision than fixed-form tests with the same number of items. The PT is administered on a computer but is not computer adaptive. PTs are intended to measure multiple standards in a coherent task that requires the use of integrated skill sets. They measure capacities such as essay writing, research skills, and complex analysis, which are not as easy to assess with individual, discrete items.
Responses from both CAT and PT components are combined to cover the test blueprint in a grade and content area and are used to produce the overall and claim scale scores. Figure 4.2 is a conceptual diagram of how claims are distributed across the adaptive and performance task parts of the tests.


Figure 4.2: Claim Distribution in Test Blueprints

Links to the Smarter Balanced ELA/literacy and mathematics Summative Assessment Full Blueprints for grades 3-8 and high school for 2021-22 are provided:

ELA/L Full

Math Full

4.4.1 Adjusted (Shortened) Blueprint

Beginning in the 2021-22 administration year, Smarter Balanced maintains an adjusted (shortened) blueprint for ELA/literacy and mathematics in addition to the full blueprint. Members select one blueprint or the other for use in their state or territory. The adjusted (shortened) blueprint covers the same skills and knowledge as the full blueprint, but the number of questions has been reduced by about half for the CAT component, with the PT component unchanged. The adjusted (shortened) blueprint is used with the same pool of items for mathematics and almost exactly the same¹ pool of items for ELA/literacy as used for the full blueprint.

Links to the Smarter Balanced ELA/literacy and mathematics Summative Assessment Adjusted (Shortened) Blueprints for grades 3-8 and high school for 2021-22 are provided:

ELA/L Adjusted

Math Adjusted

4.5 Performance Task Design

As shown in the test blueprints, performance tasks are an integral part of the Smarter Balanced test design, and they fulfill a specific role in the test blueprint for a grade and content area. Performance tasks are intended to measure the ability to integrate knowledge and skills across multiple content standards, a key component of college and career readiness. Performance assessments give students opportunities to demonstrate their ability to find, organize, or use information to solve problems; undertake research; frame and conduct investigations; analyze and synthesize data; and/or apply learning to novel situations.

Smarter Balanced performance tasks were constructed so they can be delivered effectively in the school/classroom environment (Dana & Tippins, 1993). Requirements for task specifications included, but were not limited to, compatibility with classroom activities, materials and technology needs, and allotted time for assessment. Performance tasks adhere to specifications used by item writers to develop new tasks that focus on different content but are comparable in contribution to the blueprint.

All Smarter Balanced performance tasks consist of three basic components: stimulus presentation, information processing, and scorable product(s) or performance(s). Stimuli for Smarter Balanced performance tasks are provided in various forms (e.g., readings, video clips, data). “Information processing” means student interactions with the stimulus materials and their content. It could include note-taking, data generation, and any other activities that increase students’ understanding of the stimulus content or the assignment. All activities within a task must have a rationale for inclusion (e.g., to increase understanding, for scaffolding, as early steps in product creation, or for product creation).

In ELA/literacy, each performance task comprises a targeted research effort in which students read sources and respond to one research item, followed by an essay. During the research component, students may take notes to which they may later refer. Students then write a full essay drawing from source material and research notes. Claim-level results in writing and research are based on both CAT and performance task item responses.

In mathematics, each performance task comprises a set of stimulus materials and a follow-up item set consisting of up to six items in claims 2, 3, and 4. These are combined with CAT items in claims 2, 3, and 4 to satisfy the blueprint and create a claim 3 score and a combined claim 2 and 4 score. Performance tasks address an integrated scenario in middle and high school and a common theme in grades 3-5. Claim 1 is assessed only with CAT items; the mathematical practices (claims 2-4) are assessed through the PT and a small number of CAT items.

4.6 Item and Task Specifications

The item and task specifications bridge the distance from the content specifications and achievement levels to the assessment itself. While the content specifications establish the Consortium’s claims and the types of evidence that are needed to support these claims, more specificity is needed to develop items and tasks that measure the claims.

The first iteration of the item and task specifications was developed in 2011. In early 2012, the Consortium held a series of showcases where the contractors introduced the item and task specifications and collected feedback from member states. The item and task specifications were revised during the first quarter of 2012 using this feedback.

A small set of items were developed and administered in fall 2012 during a small-scale trial using the revised item and task specifications. This provided the Consortium with the first opportunity to administer and score the new item types. During the small-scale trial, the Consortium also conducted cognitive laboratories to better understand how students respond to various types of items (American Institutes for Research, 2013). The cognitive laboratories used a think-aloud methodology in which students speak their thoughts while working on a test item. The item and task specifications were again revised based on the findings of the cognitive laboratories and the small-scale trial. These revised specifications were used to develop items for the 2013 pilot test, and they were again revised based on 2013 pilot test results and subsequent reviews by content experts.

The Smarter Balanced Item and Task Specifications (Smarter Balanced, 2015b) are designed to ensure that assessment items measure the assessment’s claims. Indeed, the purpose of item and task specifications is to define the characteristics of items and tasks that will provide evidence to support one or more claims. To do this, the item and task specifications delineate types of evidence that should be elicited for each claim within a grade level. Then, the specifications provide explicit guidance on how to write items in order to elicit the desired evidence.

Item and task specifications provide guidelines on how to create items specific to each claim and assessment target through the use of task models. In mathematics, a task model provides a description of an item/task’s key features. These task models describe the knowledge, skills, and processes being measured by each of the item types aligned to particular targets. In addition, task models sometimes provide examples of plausible distractors. Exemplar items are provided within every task model. In ELA/literacy, these functions are carried out through item specifications.

Task models were developed for each grade level and target to delineate the expectations of knowledge and skills to be represented through test items at each grade. In addition, both ELA/literacy and mathematics item and stimulus specifications provide guidance about grade appropriateness of task and stimulus materials (the materials a student must refer to when working on a test item). The task and stimulus models also provide information on characteristics of stimuli or activities to avoid because they are not germane to the knowledge, skill, or process being measured.

Guidelines concerning what to avoid in item writing are important because they underscore the Consortium’s efforts to use universal design principles to develop items accessible to the widest possible range of students. As the name suggests, the concept of universal design aims to create items that accurately measure the assessment target for all students. At the same time, universal design recognizes that one solution rarely works for all students. Instead, this framework acknowledges “the need for alternatives to suit many different people” (Rose & Meyer, 2000, p. 4).

To facilitate the application of universal design principles, item writers are trained to consider the full range of students who may answer a test item. A simple example of this is the use of vocabulary that is expected to be known by all third-grade students versus only those third-grade students who play basketball. Almost all third-grade students are familiar with activities (e.g., recess) that happen during their school day, while only a subset of these students will be familiar with basketball terms like “double dribble,” “layup,” “zone defense,” or “full-court press.”

Item specifications discuss accessibility issues unique to the creation of items for a particular claim and/or assessment target. Accessibility issues concern supports that various groups of students may need to access item content. By considering the supports that may be needed for each item, item writers are able to create items that can be adapted to a variety of needs.

The use of universal design principles allows the Consortium to collect evidence on the widest possible range of students. By writing items that adhere to item and task specifications, the Consortium is assured that assessments measure the claims and assessment targets established in the content specifications, as well as the knowledge, skills, and processes found in the state standards for all students for whom the assessment is appropriate.

4.7 Item and Task Development

The Consortium’s test development cycle is iterative, involving experts from various education-related fields, and is based on assessment-related research and best practices. Each item that is used operationally on the Smarter Balanced summative assessment has been reviewed and/or written by educators. The active involvement of educators is critical to the success of the item-writing activities. Educators engage with students on a daily basis, and they understand the ways in which students can demonstrate their knowledge. Their involvement in item writing helps ensure that the items included in the assessment system are appropriate for the grade level and provide valid evidence of student learning. Section 4.7.1 describes vendor-managed item development that Smarter Balanced oversees. Section 4.7.2 describes member-managed item writing led by states. Section 4.7.3 explains the item review process that applies to all items.

4.7.1 Item Writing

The Consortium works with educators throughout the test development cycle to develop items. All K–12 participants:

  • are certified/licensed to teach ELA/literacy and/or mathematics in a K–12 public school;
  • are currently teaching in a public school within a Smarter Balanced governing state;
  • have taught ELA/literacy and/or mathematics in grades 3-8 and/or high school within the past three years (second-grade teachers are also recruited to participate in the development of grade 3 items and/or tasks);
  • have previously reviewed part or all of the state standards for the content area for which they are writing items and/or performance tasks;
  • have submitted a statement of interest that describes their interest in developing Smarter Balanced items and/or performance tasks, along with their qualifications for doing so; and
  • have completed training and achieved qualifications through a certification process.

Qualifications for higher education faculty include:

  • current employment with, or recent retirement from, a college or university located within a Smarter Balanced member state;
  • having taught developmental and/or entry-level courses in English, composition, mathematics, statistics, or a related discipline within the last three years;
  • having previously reviewed part or all of the state standards for the content area in which they are interested in writing items and/or performance tasks; and
  • having completed training and achieved qualifications through the certification process.

The Consortium’s staff trains contractors and educators on the item specifications, ELA/literacy stimulus specifications, and the guidelines for accessibility, bias, and sensitivity, as described in the next section.

Prior to the spring 2013 pilot test, the Consortium engaged 136 educators in K–12 and higher education from 19 member states to write items. Prior to the spring 2014 field test, 184 educators in K–12 and higher education from 16 member states participated in item writing. The items developed in this process were used in the 2014 field test and in the 2015 embedded field test. These items account for a large portion of the items used in the 2021-22 summative assessment.

4.7.1.1 Training for Item Writers

For the development of all operational items in the 2021-22 summative assessment, educators participated in a series of facilitated online webinars in order to qualify as item writers. To facilitate participation, the Consortium scheduled multiple sessions in different time zones, including evening sessions. In addition to the facilitated sessions, the Consortium provided training modules that covered background on the Consortium, assessment design principles, and detailed information about item and performance task development. All modules were available in three formats: a PowerPoint presentation with notes, a streaming presentation with narration that could be viewed online, and a downloadable audio/video presentation.

For all item writing, including more recent processes, item writers are specifically trained on the Consortium’s content and item specifications, stimulus specifications, sensitivity and bias guidelines, and general accessibility guidelines. Training on these specifications and guidelines helps ensure that item writers write items that allow the widest possible range of students to demonstrate their knowledge, skills, and cognitive processes with regard to the content. This means that item writers need to understand the content for which they are writing items, as well as accessibility and sensitivity issues that might hinder students’ ability to answer an item. Item writers are also trained to be aware of issues that might unintentionally bias an item for or against a particular group.

4.7.2 Member-Managed Item Development

The Consortium invites member states to participate in a separate effort to write items. This voluntary effort, known as State-Managed Item Development, is conducted to build the capacity of states to write items and to support the overall sustainability of the Consortium. To this end, three states (Hawaii, Oregon, and Washington) participated in the member-managed field test item development opportunity. During this opportunity, educators within the three states developed approximately 450 items in ELA/literacy and mathematics across grades 3-8 and high school.

4.7.3 Item Reviews

Once items are written, groups of educators review items and item stimuli prior to field testing. Item stimuli refer to the reading passages used on the ELA/literacy assessments or to the stimulus materials provided in the performance tasks in both ELA/literacy and mathematics. The reviews take into consideration accessibility, bias/sensitivity, and content.

Prior to the spring 2013 pilot test, 122 ELA/literacy educators and 106 mathematics educators reviewed items and performance tasks for accessibility, bias/sensitivity, or content, and 60 educators reviewed the ELA/literacy stimuli. Prior to the spring 2014 field test, 107 ELA/literacy educators and 157 mathematics educators from 14 states reviewed items and performance tasks, and 95 educators from 13 states reviewed the ELA/literacy stimuli.

The educator qualifications for the accessibility, bias/sensitivity, and content reviews are the same as the educator qualifications for item writing, except that participants are not required to submit a statement of interest. In addition, it is preferred (but not required) that educators have previous experience reviewing items, tasks, and/or stimuli.

During the committee reviews, educators specifically compare the items against the quality criteria for accessibility and for bias and sensitivity. The reviewers identify and resolve or reject any item, stimulus, or performance task that does not pass the criteria. This review removes any aspect that may negatively impact a student’s ability to access stimuli, items, or performance tasks, or to elicit valid evidence about an assessment target. Items flagged for accessibility, bias/sensitivity, and/or content concerns are either revised to address the issues identified by the panelists or removed from the item pool.

The committee also compares each stimulus, item, and performance task against the ELA/literacy or mathematics quality criteria. This review focuses on developmental appropriateness and alignment of stimuli, items, and performance tasks to the content specifications and appropriate depths of knowledge. Panelists in the content review also check the accuracy of the content, answer keys, and scoring materials. Items flagged for content concerns are either revised or removed from the item pool.

Details about the item development process in ELA/literacy and mathematics are found in Appendix A. These are the steps each item goes through before it can be presented to students.

4.8 Field Testing

After items pass the content, accessibility, bias, and sensitivity reviews, they become eligible for field testing. The first field test for developing the Smarter Balanced assessments was a stand-alone field test in 2014 prior to the first operational administration. Details of the 2014 field test can be found in Chapters 7, 8, and 9 of the 2014-15 Smarter Balanced Summative Technical Report (Smarter Balanced, 2016a).

Both CAT and PT items are field tested. For field testing in years subsequent to 2014, a small number of CAT items are embedded within each student’s operational CAT. These are called embedded field test (EFT) items. The number of EFT items administered per student is given in Table 4.2. CAT EFT items are administered randomly across a range of allowable positions within test segments as follows:

  • ELA/literacy: positions 5-34 (ELA/literacy has only one segment)
  • Mathematics Calculator: positions 5-15 within the calculator segment
  • Mathematics Non-Calculator: positions 5-10 within the non-calculator segment

Two EFT items are embedded in the mathematics CAT. For grades 6 and higher, one item is embedded in the calculator segment and one in the non-calculator segment. In the ELA/literacy CAT, three to four EFT items are administered to each examinee. The number of EFT items administered in ELA/literacy is a range instead of a constant because much of the ELA/literacy content is organized into passage sets and the number of items in a set varies.
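
To make the embedding mechanics concrete, the following sketch randomly draws EFT positions for a single test instance from the allowable ranges listed above. The data structures and function are illustrative assumptions for exposition only; they are not the operational delivery logic.

```python
import random

# Allowable EFT positions per segment, per the ranges described above.
# The keys and structure are illustrative; the operational delivery system
# manages segments and item positions internally.
EFT_POSITIONS = {
    "ELA": {"ela": range(5, 35)},             # positions 5-34; ELA/literacy has one segment
    "MATH_G6_UP": {"calc": range(5, 16),      # positions 5-15, calculator segment
                   "no_calc": range(5, 11)},  # positions 5-10, non-calculator segment
    "MATH_G3_5": {"no_calc": range(5, 11)},   # grades 3-5 have no calculator segment
}

def draw_eft_positions(subject_key, items_per_segment):
    """Randomly choose distinct EFT positions within each segment's allowed range."""
    positions = {}
    for segment, allowed in EFT_POSITIONS[subject_key].items():
        k = items_per_segment.get(segment, 0)
        positions[segment] = sorted(random.sample(list(allowed), k))
    return positions

# Grade 7 mathematics embeds one EFT item per segment; grade 4 mathematics
# embeds its two EFT items in the single (non-calculator) segment.
print(draw_eft_positions("MATH_G6_UP", {"calc": 1, "no_calc": 1}))
print(draw_eft_positions("MATH_G3_5", {"no_calc": 2}))
```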

Table 4.2: NUMBER OF FIELD-TEST ITEMS TO BE ADMINISTERED PER STUDENT
Grade ELA/Literacy Math: Calc Math: No Calc
3 3 to 4 N/A 2
4 3 to 4 N/A 2
5 3 to 4 N/A 2
6 3 to 4 1 1
7 3 to 4 1 1
8 3 to 4 1 1
11 3 to 4 1 1

4.8.1 Field Testing of Performance Tasks

Performance tasks (PTs) are field tested as stand-alone fixed forms consisting of three to six items per task. Each PT is randomly administered to approximately 2,000 students in total across all participating states. Thus, only a small number of randomly selected students receive a field test PT. Students who take a field test PT do not take an operational PT. These students take a CAT that has more operational items than the regular CAT to compensate for the lack of operational PT items. A link is provided to the blueprint for the CAT taken by students who take a field test PT, called the enhanced CAT blueprint. Field testing was temporarily suspended in the 2020-21 administration year due to the pandemic. The most recent field testing was conducted in the 2021-22 administration year.

4.9 Item Scoring

For those items that cannot be machine scored, the Consortium engages content experts in range-finding activities. Range finding improves the consistency and validity of scoring for the assessment. During range finding, educators focus on the performance tasks for ELA/literacy and mathematics. The participants review student responses against item rubrics, validate the rubrics’ accuracy, and select the anchor papers that would be used by scorers during operational scoring of test items. In mathematics, educators also review constructed response items for grades 7, 8, and high school. Following the 2013 pilot test, 102 participants from 20 states were engaged in range finding. After the spring 2014 field test, 104 educators participated in range finding. After the 2014–15 embedded field test, 34 educators participated in range finding.

The educator qualifications for range finding are the same as the educator qualifications for item writing. It is preferred (but not required) that educators have previous range-finding experience.

A rubric validation activity is conducted to verify correct scoring for machine-scored items. For multiple-choice items, this is a simple key check. For other item types, such as grid interaction items (drag-and-drop), matching tables, or equation entry, the procedure involves looking at a sample of raw student responses (screen coordinates or keystrokes) and assuring that the raw response was scored correctly. In the course of this process, reviewers may find unexpected responses that require adjustment of the scoring procedure to account for a wider response range. Item-scoring software is then changed accordingly.
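
A minimal sketch of this kind of rubric-validation pass is shown below, under assumed data structures: a simple scoring rule is re-applied to a sample of raw responses and compared with the score of record, and any disagreements are flagged for reviewer attention. The scoring rule, response format, and sample data are hypothetical.

```python
def rescore(raw_response, key):
    """Illustrative machine-scoring rule: full credit only if the set of
    selected options (or entered values) matches the key exactly."""
    return 1 if set(raw_response) == set(key) else 0

def validate_scoring(sampled_responses, key):
    """Compare each score of record against a re-derived score and return
    the mismatches, which a reviewer would inspect (and which may prompt
    an adjustment to the scoring procedure)."""
    flagged = []
    for student_id, raw, assigned_score in sampled_responses:
        expected = rescore(raw, key)
        if expected != assigned_score:
            flagged.append((student_id, raw, assigned_score, expected))
    return flagged

# Hypothetical sample: (student id, raw response, score assigned by the engine).
sample = [
    ("S001", ["A", "C"], 1),
    ("S002", ["A"], 1),        # unexpected response pattern; will be flagged
    ("S003", ["C", "A"], 1),
]
print(validate_scoring(sample, key=["A", "C"]))  # -> [('S002', ['A'], 1, 0)]
```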

4.10 Item Quality Control and Data Review

After items are field tested, the Consortium carries out statistical analyses of field test data to determine the statistical quality of the items. On the basis of these results, some field-tested items are put into operational use, some are rejected from operational use, and others go through a process called data review. In a data review, items flagged based on statistical criteria are reviewed by educators in collaboration with Smarter Balanced staff, for possible content flaws, bias, and other features that might explain the statistical qualities. Items that go through data review may be subsequently revised and field-tested again in a future year, rejected, or accepted for operational use.

4.11 CAT Algorithm

For the Smarter Balanced operational test, an item-level, fully adaptive CAT component is administered in ELA/literacy and mathematics. The adaptive component covers the blueprint while efficiently minimizing measurement error and maximizing the information gathered about each student. Smarter Balanced members work with their service provider to adopt an algorithm that delivers the published blueprint.
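
As a simplified illustration of information-based adaptive selection (not the published Smarter Balanced algorithm, which also enforces blueprint coverage, exposure control, and other constraints; see Cohen & Albright, 2014), the sketch below evaluates the Fisher information of each remaining item at the current ability estimate under a two-parameter logistic (2PL) model and selects the most informative item. The item pool and parameter values are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P(theta) * (1 - P(theta))."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, remaining_items):
    """Pick the unadministered item with maximum information at theta_hat.
    remaining_items: list of (item_id, a, b) tuples."""
    return max(remaining_items,
               key=lambda item: item_information(theta_hat, item[1], item[2]))

# Hypothetical mini-pool of (item_id, a, b) and a provisional ability estimate.
pool = [("itm1", 0.6, -0.5), ("itm2", 0.8, 0.2), ("itm3", 0.5, 1.4)]
print(select_next_item(theta_hat=0.1, remaining_items=pool))  # -> ('itm2', 0.8, 0.2)
```

An operational engine would repeat this kind of selection after each response, re-estimating ability and applying blueprint and exposure rules before choosing the next item.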

4.12 Content Alignment

Content alignment addresses how well individual test items, test blueprints, and the tests themselves represent the intended construct and support appropriate inferences. With a computer adaptive test, a student’s test form is a sampling of items drawn from a much larger universe of possible items and tasks. The sampling is guided by a blueprint. Alignment studies investigate how well individual tests cover the intended breadth and depth of the underlying content standards. For inferences from test results to be justifiable, the sample of items in each student’s test has to be an adequate representation of the broad domain, providing strong evidence to support claims being made from the test results.

Four alignment studies have been conducted to examine the alignment between Smarter Balanced tests and the state standards. The Human Resources Research Organization (HumRRO, 2016) conducted the first alignment study. HumRRO’s comprehensive study centered on the assumptions of evidence-centered design (ECD), examining the connections in the evidentiary chain underlying the development of the Smarter Balanced foundational documents (test blueprints, content specifications, and item/task specifications) and the resulting summative assessments. Among those connections were the alignment between the Smarter Balanced evidence statements and content specifications, and the alignment between the Smarter Balanced blueprint and the content specifications. Results from this study were favorable in terms of the intended breadth and depth of the alignment for each connection in the evidentiary chain.

In 2016, the Fordham Institute and HumRRO investigated the quality of the Smarter Balanced assessments relative to Council of Chief State School Officers (CCSSO) criteria for evaluating high-quality assessments. In particular, the Smarter Balanced assessments were investigated to see if they placed strong emphasis on the most important content for college and career readiness and if they required that students demonstrate the range of thinking skills called for by those standards. Fordham Institute reviewed grades 5 and 8 ELA/literacy and mathematics, and HumRRO reviewed high school ELA/literacy and mathematics.

  • Doorey & Polikoff (2016) rated Smarter Balanced grades 5 and 8 ELA/literacy assessments an excellent match to the CCSSO criteria for content in ELA/literacy, and a good match for depth in ELA/literacy.
  • Fordham Institute rated Smarter Balanced grades 5 and 8 mathematics assessments as a good match to the CCSSO criteria for content in mathematics, and a good match to the CCSSO criteria for depth in mathematics.
  • HumRRO (2016) rated the Smarter Balanced high school ELA/literacy assessments an excellent match to the CCSSO criteria for content in ELA/literacy, and a good to excellent match for depth in ELA/literacy.
  • HumRRO (2016) rated the Smarter Balanced high school mathematics assessments a good to excellent match to the CCSSO criteria for content in mathematics, and a good to excellent match for depth in mathematics.

An additional external alignment study, completed by WestEd Standards, Assessment, and Accountability Services Program (2017), employed a modified Webb alignment methodology to examine the summative assessments for grades 3, 4, 6, and 7 using sample test events built using 2015–16 operational data. This study provided evidence that the items within ELA/literacy and mathematics test events for grades 3, 4, 6, and 7 were well aligned to the state standards in terms of both content and cognitive complexity.

4.13 2021-22 Summative Item Pool

This section describes the 2021-22 summative item pool.

Each grade’s item pool is large enough to support the summative blueprint. Unlike a traditional paper/pencil test where all students take the same items, students taking the CAT take items and tasks targeted to their ability level. This means that the Consortium needs to develop a large number of items to deliver tests that simultaneously meet the blueprint and are at a level of difficulty that is tailored to each student’s performance.

In addition to the items for the CAT, the Consortium also developed performance tasks. All students take performance tasks designed to measure a student’s ability to integrate knowledge and skills across multiple claims and assessment targets. Prior to 2018–19, each ELA/literacy performance task had a set of related stimuli presented with two or three research items and an essay. Beginning with the 2018–19 assessment, the performance task includes only one research item, and the reduction is compensated by including more research items in the CAT component. Each mathematics performance task continues to have four to six items relating to a central problem or stimulus. The PT items are organized into distinct sets that are delivered intact to students. The number of PT item sets per grade and subject in the 2021-22 summative assessment is shown in Table 4.3. The sets are delivered in randomized fashion to students rather than adaptively.

Table 4.3: NUMBER OF PERFORMANCE TASKS BY GRADE AND SUBJECT
Grade ELA/literacy Mathematics
3 25 17
4 29 21
5 29 23
6 22 16
7 30 17
8 33 16
11 29 14

The distribution of item parameters by grade and claim is shown in Table 4.4 (ELA/literacy) and Table 4.5 (mathematics). Note that there is a wide range of difficulty in each category. This enables the CAT algorithm (described previously in this chapter) to find the best items for each student. As such, adaptive tests provide more precise measurement across levels of student performance than would be provided by a fixed-form test of the same length. This is accomplished by drawing from a bank of previously calibrated items during the adaptive portion of the test. In addition, fixed, randomly assigned performance tasks add information about student performance.
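
For reference, the role these parameters play in adaptive measurement can be seen in the two-parameter logistic (2PL) item information function, shown here purely as an illustration of the general principle (the operational pool is calibrated with related IRT models):

```latex
% Illustrative 2PL response and information functions (assumed model form)
P_i(\theta) = \frac{1}{1 + \exp\!\left[-a_i(\theta - b_i)\right]},
\qquad
I_i(\theta) = a_i^{2}\, P_i(\theta)\left[1 - P_i(\theta)\right]
```

Information peaks when a student's ability θ is near the item difficulty b_i and grows with the square of the discrimination a_i, which is why a pool whose difficulties span the ability range supports precise measurement across that range.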

Table 4.4: ITEM DIFFICULTY (B-PARAMETER) AND DISCRIMINATION (A-PARAMETER), ELA/LITERACY
Grade Claim # of Items b Mean b Min b Max a Mean
3 1 349 -0.574 -2.725 4.69 0.69
3 2 247 -0.833 -2.896 4.12 0.68
3 3 190 -0.200 -2.920 3.81 0.54
3 4 153 -0.267 -2.226 1.86 0.67
3 Total 939 -0.516 -2.920 4.69 0.65
4 1 287 0.170 -2.529 6.23 0.62
4 2 250 -0.426 -3.252 2.94 0.59
4 3 206 0.045 -2.822 4.25 0.55
4 4 163 0.439 -1.939 5.01 0.55
4 Total 906 0.026 -3.252 6.23 0.58
5 1 304 0.638 -1.784 7.52 0.60
5 2 255 -0.046 -2.278 3.29 0.60
5 3 167 0.393 -2.403 3.48 0.53
5 4 154 0.489 -1.494 3.83 0.64
5 Total 880 0.367 -2.403 7.52 0.59
6 1 273 1.021 -1.778 4.78 0.60
6 2 256 0.792 -2.719 5.54 0.56
6 3 164 0.826 -1.497 7.38 0.50
6 4 154 0.942 -1.305 3.61 0.55
6 Total 847 0.900 -2.719 7.38 0.56
7 1 270 1.312 -1.836 6.63 0.57
7 2 234 1.012 -2.019 5.30 0.55
7 3 161 0.869 -1.706 5.88 0.50
7 4 134 1.596 -0.815 5.61 0.55
7 Total 799 1.182 -2.019 6.63 0.55
8 1 221 1.542 -0.989 6.42 0.59
8 2 251 1.087 -3.013 4.56 0.53
8 3 211 0.893 -2.119 3.87 0.48
8 4 141 1.588 -1.788 5.19 0.56
8 Total 824 1.245 -3.013 6.42 0.54
11 1 914 1.935 -2.087 9.10 0.54
11 2 740 1.758 -1.880 9.14 0.47
11 3 589 1.351 -1.648 6.62 0.45
11 4 388 2.041 -1.197 8.94 0.47
11 Total 2,631 1.770 -2.087 9.14 0.49


Table 4.5: ITEM DIFFICULTY (B-PARAMETER) AND DISCRIMINATION (A-PARAMETER), MATHEMATICS
Grade Claim # of Items b Mean b Min b Max a Mean
3 1 764 -1.137 -4.338 4.16 0.83
3 2 131 -0.536 -2.537 1.38 0.99
3 3 276 -0.191 -2.424 5.12 0.72
3 4 160 -0.202 -2.677 3.20 0.81
3 Total 1,331 -0.769 -4.338 5.12 0.82
4 1 781 -0.266 -3.260 4.48 0.85
4 2 159 -0.039 -2.248 2.57 0.89
4 3 275 0.221 -2.083 5.18 0.75
4 4 177 0.223 -2.148 3.28 0.71
4 Total 1,392 -0.082 -3.260 5.18 0.81
5 1 781 0.333 -2.791 6.20 0.77
5 2 139 0.644 -2.208 3.94 0.89
5 3 271 0.811 -2.120 5.98 0.67
5 4 187 1.121 -1.232 4.63 0.71
5 Total 1,378 0.565 -2.791 6.20 0.75
6 1 765 0.816 -3.934 9.16 0.69
6 2 126 1.073 -2.978 5.50 0.77
6 3 247 1.659 -2.161 8.75 0.59
6 4 134 1.558 -1.851 6.44 0.78
6 Total 1,272 1.083 -3.934 9.16 0.69
7 1 742 1.635 -1.792 7.80 0.73
7 2 135 1.621 -1.417 5.07 0.82
7 3 216 1.919 -1.654 6.59 0.61
7 4 136 2.004 -0.881 4.78 0.75
7 Total 1,229 1.724 -1.792 7.80 0.72
8 1 661 1.960 -1.868 7.75 0.58
8 2 92 2.318 -1.570 5.75 0.74
8 3 183 2.572 -1.520 9.02 0.50
8 4 116 2.124 -2.844 6.48 0.68
8 Total 1,052 2.116 -2.844 9.02 0.59
11 1 1,889 2.321 -4.432 8.72 0.61
11 2 195 2.926 -1.101 6.68 0.63
11 3 452 2.957 -1.049 9.25 0.47
11 4 211 3.173 0.161 6.38 0.55
11 Total 2,747 2.534 -4.432 9.25 0.58

The Consortium develops many different types of items beyond the traditional multiple-choice item. This is done to measure claims and assessment targets with varying degrees of complexity by allowing students to respond in a variety of ways rather than simply recognizing a correct response. These different item types and their abbreviations are listed in Table 4.6. The frequencies of item types by claim within grade and subject are shown in Table 4.7 and Table 4.8. Note that each essay written is associated with two items. Essays are scored on three traits, two of which are combined, resulting in two scores for each essay.

Table 4.6: ITEM TYPES FOUND IN THE SUMMATIVE ITEM POOLS
Item Types ELA/literacy Mathematics
Multiple Choice (MC) X X
Multiple Select (MS) X X
Evidence-Based Selected Response (EBSR) X
Match Interaction (MI) X X
Hot Text (HTQ) X
Short Answer Text Response (SA) X X
Essay/Writing Extended Response (WER) X
Equation Response (EQ) X
Grid Item Response (GI) X
Table Interaction (TI) X


Table 4.7: DISTRIBUTION OF ELA/LITERACY ITEM TYPES BY GRADE AND CLAIM
Grade Claim EBSR HTQ MC MI MS SA WER Total
3 1 49 52 199 0 49 0 0 349
3 2 0 48 120 0 54 0 25 247
3 3 48 0 82 20 40 0 0 190
3 4 0 21 69 9 41 13 0 153
3 Total 97 121 470 29 184 13 25 939
4 1 50 55 133 0 49 0 0 287
4 2 0 46 130 0 45 0 29 250
4 3 50 0 96 21 39 0 0 206
4 4 0 17 69 8 52 17 0 163
4 Total 100 118 428 29 185 17 29 906
5 1 59 53 128 0 64 0 0 304
5 2 0 43 121 0 62 0 29 255
5 3 42 0 78 19 28 0 0 167
5 4 0 21 63 6 49 15 0 154
5 Total 101 117 390 25 203 15 29 880
6 1 41 60 95 0 49 28 0 273
6 2 0 49 94 0 66 25 22 256
6 3 43 0 78 19 24 0 0 164
6 4 0 12 70 5 52 15 0 154
6 Total 84 121 337 24 191 68 22 847
7 1 37 53 105 0 51 24 0 270
7 2 0 44 83 0 58 19 30 234
7 3 45 0 71 13 32 0 0 161
7 4 0 32 43 7 30 22 0 134
7 Total 82 129 302 20 171 65 30 799
8 1 37 41 70 0 47 26 0 221
8 2 0 36 89 0 69 24 33 251
8 3 26 0 140 5 40 0 0 211
8 4 0 30 50 10 31 20 0 141
8 Total 63 107 349 15 187 70 33 824
11 1 158 176 257 0 220 103 0 914
11 2 0 181 233 0 249 48 29 740
11 3 110 0 326 18 135 0 0 589
11 4 0 83 166 15 101 23 0 388
11 Total 268 440 982 33 705 174 29 2,631
All Total 795 1,153 3,258 175 1,826 422 197 7,826


Table 4.8: DISTRIBUTION OF MATHEMATICS ITEM TYPES BY GRADE AND CLAIM
Grade Claim EQ GI MC MI MS SA TI Total
3 1 471 70 120 69 4 0 30 764
3 2 85 17 16 6 4 2 1 131
3 3 15 70 100 28 37 26 0 276
3 4 59 19 32 10 13 10 17 160
3 Total 630 176 268 113 58 38 48 1,331
4 1 418 86 97 167 0 0 13 781
4 2 99 15 31 7 5 0 2 159
4 3 23 84 77 21 37 31 2 275
4 4 51 20 61 7 11 16 11 177
4 Total 591 205 266 202 53 47 28 1,392
5 1 412 48 222 99 0 0 0 781
5 2 97 13 17 2 4 1 5 139
5 3 17 68 93 26 26 38 3 271
5 4 68 36 30 6 6 22 19 187
5 Total 594 165 362 133 36 61 27 1,378
6 1 354 76 66 105 147 0 17 765
6 2 81 14 8 3 11 2 7 126
6 3 23 52 61 38 46 26 1 247
6 4 69 11 10 3 15 13 13 134
6 Total 527 153 145 149 219 41 38 1,272
7 1 412 51 67 81 131 0 0 742
7 2 94 8 10 7 13 0 3 135
7 3 31 47 50 24 42 22 0 216
7 4 71 27 17 5 11 1 4 136
7 Total 608 133 144 117 197 23 7 1,229
8 1 257 48 162 91 83 0 20 661
8 2 58 12 6 6 2 0 8 92
8 3 18 53 35 20 34 23 0 183
8 4 50 23 15 9 8 6 5 116
8 Total 383 136 218 126 127 29 33 1,052
11 1 706 297 444 320 116 0 6 1,889
11 2 101 39 24 12 13 0 6 195
11 3 56 142 128 54 46 25 1 452
11 4 100 30 42 13 13 7 6 211
11 Total 963 508 638 399 188 32 19 2,747
All Total 4,296 1,476 2,041 1,239 878 271 200 10,401

Although there is a wide distribution of item difficulty, pools tend to be difficult in relation to the population and to the cut score that is typically associated with proficiency (the level 3 cut score). Figure 4.3 shows mean item difficulty, level 3 cut score, and mean student achievement scores (all in theta units) by grade and subject. The mean item difficulty and student achievement plotted in this figure are based on the 2021-22 assessment.


Figure 4.3: Comparison of Item Difficulty, Mean Student Scores, and Cut Scores for ELA/Literacy and Mathematics

4.14 Blueprint Fidelity

Whether the tests students receive in Smarter Balanced assessments satisfy the blueprint requirements described earlier in this chapter depends on two basic elements of test design: 1) the computer adaptive test (CAT) algorithm and 2) the item pool. The CAT algorithm endorsed by Smarter Balanced is publicly available (Cohen & Albright, 2014) and is used by Cambium Assessment to deliver Smarter Balanced assessments in the majority of member states. Key features of the item pool are described in the preceding section and include the number of items in specific areas of the blueprint (such as claims) and their distribution in difficulty relative to the distribution of student achievement. This section presents results from blueprint fidelity analyses carried out with 2021-22 operational assessment data to examine how well Smarter Balanced assessments satisfied the full or adjusted (shortened) blueprints.

Analyses were performed for both ELA/literacy and mathematics and in all the tested grade levels (3–8 and high school). For ELA/literacy, blueprint fulfillment was evaluated separately for three populations at each grade level: the general student population, the braille student population, and the American Sign Language (ASL) population. For mathematics, blueprint fulfillment was evaluated separately for five populations at each grade level: the general student population, the braille student population, the ASL student population, the Spanish student population, and the translated glossaries population. Only operational items from the computerized adaptive test (CAT) component were considered in this study; field test items and performance task (PT) component items were not included.

For each population of students within grade and content area, fulfillment of both the Smarter Balanced full and adjusted (shortened) blueprints was evaluated at the following levels of detail (a simplified sketch of the per-test check follows the list):

  1. Claims. The blueprint specifies the number of items per claim and, for ELA/literacy, the number of items associated with informational vs. literary texts in claim 1. In mathematics, the blueprint fidelity analysis combines claims 2 and 4 since these claims are combined for purposes of subscore reporting.
  2. Targets or target groups within claims. The number of items per target group is specified in the blueprint.
  3. Depth of Knowledge (DOK) requirements. In both ELA/literacy and mathematics, the blueprint specifies the number of items that must represent a given DOK level or higher within certain categories of the blueprint, such as claims and target groups.
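
The sketch below illustrates the per-test logic behind this evaluation under simplified, assumed data structures: each delivered test is represented as a list of items tagged with claim, target group, and DOK, and each blueprint requirement is a minimum (and optional maximum) item count for some category. The percentages reported in Table 4.9 are, in essence, the share of delivered tests that meet each such requirement.

```python
def meets_requirement(test_items, requirement):
    """Count delivered items in the requirement's category and check the count
    against the blueprint minimum (and optional maximum).
    test_items: list of dicts with 'claim', 'target_group', and 'dok' keys.
    requirement: dict with a 'match' predicate plus 'min' and optional 'max'."""
    n = sum(1 for item in test_items if requirement["match"](item))
    if n < requirement["min"]:
        return False
    return requirement.get("max") is None or n <= requirement["max"]

def percent_met(delivered_tests, requirement):
    """Percentage of delivered tests satisfying one blueprint requirement."""
    met = sum(meets_requirement(t, requirement) for t in delivered_tests)
    return 100.0 * met / len(delivered_tests)

# Hypothetical requirement: at least 2 claim 1 items at DOK 3 or higher.
req = {"match": lambda it: it["claim"] == 1 and it["dok"] >= 3, "min": 2, "max": None}
tests = [
    [{"claim": 1, "target_group": "A", "dok": 3}, {"claim": 1, "target_group": "B", "dok": 4}],
    [{"claim": 1, "target_group": "A", "dok": 2}, {"claim": 2, "target_group": "C", "dok": 3}],
]
print(percent_met(tests, req))  # -> 50.0
```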

The analyses showed that the operational tests delivered in the 2021-22 administration fulfilled the blueprint requirements very well. The average percentages of blueprint met across all students, grades, and blueprint evaluations were usually over 90%. The exception was the mathematics adjusted (shortened) blueprint for braille students (Table 4.9). Populations with no data are not shown in the table.

Table 4.9: MEAN PERCENTAGE OF BLUEPRINT MET IN GENERAL AND SPECIAL POPULATIONS
Blueprint Content Area Group Category Evaluations Mean Blueprint Met
Full ELA/L General Claim 35 99.0%
Full ELA/L General Target 87 99.3%
Full ELA/L General DOK 39 99.5%
Full ELA/L American Sign Language Claim 30 100.0%
Full ELA/L American Sign Language Target 74 100.0%
Full ELA/L American Sign Language DOK 34 100.0%
Full Math General Claim 32 93.1%
Full Math General Target 114 99.8%
Full Math General DOK 33 99.9%
Full Math American Sign Language Claim 32 93.8%
Full Math American Sign Language Target 114 99.7%
Full Math American Sign Language DOK 33 99.0%
Full Math Spanish Claim 32 93.0%
Full Math Spanish Target 114 99.1%
Full Math Spanish DOK 33 98.9%
Full Math Translated Glossaries Claim 27 95.5%
Full Math Translated Glossaries Target 89 98.9%
Full Math Translated Glossaries DOK 27 98.8%
Adjusted ELA/L General Claim 35 99.1%
Adjusted ELA/L General Target 97 99.6%
Adjusted ELA/L General DOK 39 99.6%
Adjusted ELA/L American Sign Language Claim 35 99.4%
Adjusted ELA/L American Sign Language Target 97 99.7%
Adjusted ELA/L American Sign Language DOK 39 99.7%
Adjusted ELA/L Braille Claim 35 99.5%
Adjusted ELA/L Braille Target 97 98.8%
Adjusted ELA/L Braille DOK 39 98.4%
Adjusted Math General Claim 32 92.7%
Adjusted Math General Target 114 100.0%
Adjusted Math General DOK 33 99.9%
Adjusted Math American Sign Language Claim 32 92.7%
Adjusted Math American Sign Language Target 114 99.9%
Adjusted Math American Sign Language DOK 33 99.8%
Adjusted Math Braille Claim 32 86.5%
Adjusted Math Braille Target 114 96.4%
Adjusted Math Braille DOK 33 96.9%
Adjusted Math Spanish Claim 32 92.7%
Adjusted Math Spanish Target 114 99.8%
Adjusted Math Spanish DOK 33 99.7%
Adjusted Math Translated Glossaries Claim 27 96.1%
Adjusted Math Translated Glossaries Target 89 99.5%
Adjusted Math Translated Glossaries DOK 27 99.8%

Deviations from blueprint requirements, though rare, are investigated by Smarter Balanced. For purposes of future item development, Smarter Balanced notes the few combinations of requirements that were met by fewer than 90% of tests. These cases are more likely to occur for combinations of claims, targets, and DOK requirements and within certain grades and accommodations pools. They indicate the possibility of systematic shortages or surpluses of items in some areas of the blueprint that should be addressed through item development. The possibilities that the CAT algorithm should be adjusted or that the blueprint is more restrictive than necessary and should be modified are also considered. Also considered are the sample sizes for some groups, such as students who take the braille test, which can be quite small and cause blueprint fidelity percentages to fall below a certain threshold, such as 90%, by chance.

Practical, logistical constraints in test delivery that are not accommodated in the blueprint can also lead to minor deviations between the blueprints and the tests actually delivered to students. The mathematics CAT session in grades 6 and higher is a mixture of calculator and non-calculator items. For logistical reasons, the CAT session is therefore partitioned into sequential calculator and non-calculator segments. The blueprint does not specify the number of items or any other details pertaining to each segment. This lack of specificity can occasionally lead to a distribution of items in the first segment that cannot be balanced in complementary fashion in the second segment, such that both segments combined meet the blueprint. This and similar issues will have to be addressed in the future by one or more of the following: 1) increasing the specificity of goals in item development, 2) modifying the CAT algorithm, or 3) modifying the test blueprints.

4.15 Item Exposure

Item exposure, like test blueprint fidelity, is a function of the item pool and CAT algorithm, which are basic features of test design. Hence, information about item exposure is included in this chapter on test design. Item exposure rates were obtained using online and adaptive test instances with valid scale scores for which item data were available from the 2021-22 summative administration. The exposure rate for a given item is the proportion of test instances in the grade and content area on which the item appeared.

Table 4.10 and Table 4.11 present a summary of the item exposure results for ELA/literacy and mathematics, respectively. Within each grade and component (CAT and PT), both tables present the number of items in the operational pool (N), along with various descriptive statistics, including the mean, standard deviation (SD), range (min, max), and median of the observed exposure rates. For example, Table 4.10 shows that, on average, each CAT item eligible for administration at grade 3 was seen by 3% of grade 3 examinees. As a rule of thumb, Smarter Balanced attempts to maintain a maximum exposure rate of 25% (i.e., no more than 25% of examinees will see the same item). Table 4.10 shows that the mean and median exposure rates for ELA/literacy items are well below 25%. Table 4.11 shows that the mean and median exposure rates for mathematics items are also well below 25%. Patterns of item exposure for PT items will differ from those for CAT items due to the fact that PT item sets are randomly selected and administered within grade, whereas CAT items are administered adaptively.
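
A minimal sketch of how exposure rates and the 25% rule of thumb might be checked, assuming a simple administration log with one record per test instance listing the item IDs delivered, is shown below. The binning mirrors the 0.1-wide ranges used in Table 4.12 and Table 4.13; the data structures are illustrative.

```python
from collections import Counter

def exposure_rates(administered_item_lists, pool_item_ids):
    """Exposure rate = proportion of test instances on which an item appeared.
    Pool items that never appear get a rate of 0.0 (unused)."""
    counts = Counter()
    for items in administered_item_lists:
        counts.update(set(items))          # count an item at most once per test instance
    n_tests = len(administered_item_lists)
    return {item_id: counts[item_id] / n_tests for item_id in pool_item_ids}

def summarize(rates, max_rate=0.25, bin_width=0.1):
    """Flag items above the exposure cap and bin nonzero rates into half-open
    intervals (k*bin_width, (k+1)*bin_width], as in Tables 4.12 and 4.13."""
    overexposed = [i for i, r in rates.items() if r > max_rate]
    bins = Counter("unused" if r == 0 else int((r - 1e-9) // bin_width)
                   for r in rates.values())
    return overexposed, bins

# Hypothetical log of three test instances drawn from a four-item pool.
log = [["i1", "i2"], ["i1", "i3"], ["i1", "i2"]]
rates = exposure_rates(log, pool_item_ids=["i1", "i2", "i3", "i4"])
print(rates)        # i1: 1.0, i2: ~0.67, i3: ~0.33, i4: 0.0
print(summarize(rates))
```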

Table 4.12 and Table 4.13 provide further information about item exposure by showing the number and proportion of items in the operational pool (N) with exposure rates falling into certain ranges (bins with a width of 0.1), including those that were completely unexposed (unused). Due to rounding and the large number of items per grade within subject, values of 0.00 in these tables do not necessarily mean that no items had exposure rates falling into the ranges indicated by the column headings. Table 4.12 and Table 4.13 show that exposure for the vast majority of items was between 0 and 10% (i.e., in the (0.0, 0.1] bin).

Table 4.10: SUMMARY OF ELA/LITERACY ITEM EXPOSURE RATES BY GRADE AND COMPONENT
Grade Type N Mean SD Min Max Median
3 CAT 889 0.03 0.02 0.00 0.25 0.02
4 CAT 848 0.03 0.03 0.00 0.20 0.02
5 CAT 822 0.03 0.03 0.00 0.26 0.02
6 CAT 803 0.03 0.04 0.00 0.27 0.02
7 CAT 739 0.03 0.04 0.00 0.28 0.02
8 CAT 758 0.03 0.04 0.00 0.24 0.02
HS CAT 2573 0.01 0.01 0.00 0.10 0.01
3 PT 50 0.04 0.01 0.03 0.04 0.04
4 PT 58 0.03 0.01 0.02 0.04 0.04
5 PT 58 0.03 0.01 0.02 0.04 0.04
6 PT 44 0.04 0.01 0.03 0.05 0.04
7 PT 60 0.03 0.00 0.02 0.03 0.03
8 PT 66 0.03 0.00 0.02 0.03 0.03
HS PT 58 0.03 0.00 0.03 0.03 0.03


Table 4.11: SUMMARY OF MATHEMATICS ITEM EXPOSURE RATES BY GRADE AND COMPONENT
Grade Type N Mean SD Min Max Median
3 CAT 1236 0.02 0.02 0.00 0.10 0.01
4 CAT 1286 0.02 0.02 0.00 0.18 0.01
5 CAT 1255 0.02 0.02 0.00 0.14 0.01
6 CAT 1180 0.02 0.02 0.00 0.18 0.01
7 CAT 1143 0.02 0.03 0.00 0.14 0.01
8 CAT 973 0.02 0.03 0.00 0.28 0.01
HS CAT 2676 0.01 0.02 0.00 0.24 0.00
3 PT 95 0.05 0.00 0.05 0.06 0.05
4 PT 106 0.04 0.00 0.04 0.05 0.04
5 PT 123 0.04 0.00 0.04 0.04 0.04
6 PT 92 0.06 0.00 0.05 0.06 0.06
7 PT 86 0.05 0.00 0.05 0.06 0.05
8 PT 79 0.06 0.00 0.06 0.06 0.06
HS PT 71 0.07 0.00 0.05 0.07 0.07


Table 4.12: PROPORTION OF ELA/LITERACY ITEMS BY EXPOSURE RATES
Grade Type N Unused (0.0, 0.1] (0.1, 0.2] (0.2, 0.3] (0.3, 0.4] (0.4, 0.5] (0.5, 0.6] (0.6, 0.7] (0.7, 0.8] (0.8, 0.9] (0.9, 1.0]
3 CAT 889 0.012 0.97 0.0157 0.002 0 0 0 0 0 0 0
4 CAT 848 0.005 0.98 0.0153 0.000 0 0 0 0 0 0 0
5 CAT 822 0.006 0.98 0.0073 0.002 0 0 0 0 0 0 0
6 CAT 803 0.031 0.92 0.0423 0.006 0 0 0 0 0 0 0
7 CAT 739 0.022 0.91 0.0663 0.007 0 0 0 0 0 0 0
8 CAT 758 0.020 0.91 0.0633 0.003 0 0 0 0 0 0 0
HS CAT 2573 0.029 0.97 0.0008 0.000 0 0 0 0 0 0 0
3 PT 50 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
4 PT 58 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
5 PT 58 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
6 PT 44 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
7 PT 60 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
8 PT 66 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0
HS PT 58 0.000 1.00 0.0000 0.000 0 0 0 0 0 0 0


Table 4.13: PROPORTION OF MATHEMATICS ITEMS BY EXPOSURE RATES
Grade Type N Unused (0.0, 0.1] (0.1, 0.2] (0.2, 0.3] (0.3, 0.4] (0.4, 0.5] (0.5, 0.6] (0.6, 0.7] (0.7, 0.8] (0.8, 0.9] (0.9, 1.0]
3 CAT 1236 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
4 CAT 1286 0.000 1.00 0.004 0.0000 0 0 0 0 0 0 0
5 CAT 1255 0.000 1.00 0.005 0.0000 0 0 0 0 0 0 0
6 CAT 1180 0.005 0.98 0.013 0.0000 0 0 0 0 0 0 0
7 CAT 1143 0.003 0.97 0.026 0.0000 0 0 0 0 0 0 0
8 CAT 973 0.004 0.96 0.029 0.0031 0 0 0 0 0 0 0
HS CAT 2676 0.001 0.99 0.007 0.0007 0 0 0 0 0 0 0
3 PT 95 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
4 PT 106 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
5 PT 123 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
6 PT 92 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
7 PT 86 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
8 PT 79 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0
HS PT 71 0.000 1.00 0.000 0.0000 0 0 0 0 0 0 0

4.16 Summary of Test Design

The intent of this chapter is to show how the assessment design supports the purposes of the Smarter Balanced summative assessments. Content specifications were derived directly from the state standards, expressing the standards as measurable elements that are made explicit in the Smarter Balanced structure of claims and assessment targets. Building on the content specifications, test blueprints provide appropriate proportions of state standards content coverage. Using the blueprints, item writers wrote items and tasks in quantities that support CAT and performance task delivery. Expansion of item and task types promoted student responses that provide more insight into proficiency than multiple-choice items alone. The use of performance tasks addresses the need to assess the application and integration of skills. The method of delivery and test scoring, combining adaptive and non-adaptive elements, provides precise information and an enhanced student testing experience. The 27 major types of assessment design specifications are summarized in Appendix B.

The measurement properties summarized in Chapter 2 and in the sections of this chapter on item exposure and blueprint fidelity are very much functions of the item pool and CAT algorithm. The CAT algorithm has not substantially changed since it was first used in the 2014–15 summative assessment. Details of this algorithm are available in a separate report (Cohen & Albright, 2014). Details concerning the item pool are provided in this chapter. The item pool used in the 2021-22 summative assessment was large and, although relatively difficult compared to the students assessed, supported the delivery of reliable CAT tests, met blueprint requirements, and did not overexpose items. These outcomes support the conclusion that the 2021-22 summative assessment was well designed.

References

American Institutes for Research. (2013). Cognitive laboratories technical report.
Cohen, J., & Albright, L. (2014). Smarter balanced adaptive item selection algorithm design report.
Dana, T. M., & Tippins, D. J. (1993). Considering alternative assessments for middle level learners. Middle School Journal, 25(2), 3–5.
Doorey, N., & Polikoff, M. (2016). Evaluating the content and quality of next generation assessments. Thomas B. Fordham Institute. Retrieved from https://eric.ed.gov/?id=ED565742.
Hansen, E. G., & Mislevy, R. J. (2008). Design patterns for improving accessibility for test takers with disabilities (pp. i–32) [ETS Research Report]. https://doi.org/10.1002/j.2333-8504.2008.tb02135.x
HumRRO. (2016). Smarter Balanced Assessment Consortium: Alignment Study Report. Retrieved from https://portal.smarterbalanced.org/library/smarter-balanced-assessment-consortium-alignment-study-report/.
Mislevy, R. J., & Haertel, G. D. (2006). Implications of evidence-centered design for educational testing. Educational Measurement: Issues and Practice, 25(4), 6–20.
Mislevy, R. J., Steinberg, L. S., & Almond, R. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–67.
Rose, D., & Meyer, A. (2000). Universal design for learning. Journal of Special Education Technology, 15, 67–70.
Schmeiser, C. B., & Welch, C. J. (2006). Test development. In R. L. Brennan (Ed.), Educational measurement, 4th ed. American Council on Education/Praeger.
Smarter Balanced. (2015b). Item and Task Specifications. Retrieved from http://www.smarterbalanced.org/assessments/development/.
Smarter Balanced. (2015c). Style Guide. Retrieved from https://portal.smarterbalanced.org/library/style-guide-for-smarter-balanced-assessments/.
Smarter Balanced. (2016a). 2013-2014 Technical Report. Retrieved from https://portal.smarterbalanced.org/library/2013-14-technical-report/.
Smarter Balanced. (2016b). Accessibility and Accommodations Framework. Retrieved from https://portal.smarterbalanced.org/library/accessibility-and-accommodations-framework/.
Smarter Balanced. (2017b). English Language Arts/Literacy Content Specifications. Retrieved from https://portal.smarterbalanced.org/library/english-language-artsliteracy-content-specifications/.
Smarter Balanced. (2017d). Mathematics Content Specifications. Retrieved from https://portal.smarterbalanced.org/library/mathematics-content-specifications/.
Smarter Balanced. (2022a). Bias and Sensitivity Guidelines. Retrieved from https://portal.smarterbalanced.org/library/bias-and-sensitivity-guidelines/.
Smarter Balanced. (2023c). Usability, Accessibility, and Accommodations Guidelines. Retrieved from https://portal.smarterbalanced.org/library/usability-accessibility-and-accommodations-guidelines/.
WestEd Standards, Assessment, and Accountability Services Program. (2017). Evaluation of the alignment between the common core state standards and the smarter balanced assessment consortium summative assessments for grades 3, 4, 6, and 7 in English language arts/literacy and mathematics. Retrieved from https://portal.smarterbalanced.org/library/wested-alignment-evaluation/.
Zhang, T., Haertel, G., Javitz, H., Mislevy, R. J., & Wasson, J. (2009). A design pattern for a spelling assessment for students with disabilities. Paper presented at the annual conference of the American Psychological Association, Montreal, Canada.

  1. There are a small number of ELA/literacy passages that may possibly be administered with the full blueprint but not the adjusted (shortened) blueprint due to implementation details related to target requirements.↩︎