# Chapter 2 Reliability, Precision, and Errors of Measurement

## 2.1 Introduction

This chapter addresses the technical quality of the interim assessments available in the 2018-19 school year and the functioning of these assessments in terms of expected precision, accuracy, and reliability. Part of the test validity argument is that scores must be consistent and precise enough to be useful for intended purposes. If scores are to be meaningful, tests should deliver the same results for students of the same ability regardless of the specific items each student takes. Interim assessments are fixed forms, but each form samples the same content as the summative assessments, which allows interim results to be placed on the same scale as summative results. In addition, the range of uncertainty around the score should be small enough to support educational decisions.

Because states do not routinely collect or report item responses on the interim assessments, estimates of precision and reliability are based on statistical attributes of the test items and test forms under the assumption that the interim assessments are administered to groups of students similar to those taking the 2018-19 summative assessment.

It is also important to note that although a test’s reliability is sometimes characterized as yielding similar results over hypothetically repeated administrations, in practice, a student’s test performance may improve over repeated administrations due solely to the student becoming more familiar with the test, especially if the same items are used. For interim assessments, it is not possible to give the test more than once to the same student without the experience affecting their performance on the test. For this reason, only first-time test results from interim assessments are comparable among students. The results of repeated administration of the same interim assessment should be interpreted with caution with regard to measuring student growth. Also, interim results are comparable among only those students who are assessed at the same point in time relative to instruction on the knowledge and skill areas represented by the test—usually either just before or just after instruction.

## 2.2 Precision and Reliability

This section presents the methodology used to compute the precision and reliability of student scale scores on the interim assessments and summarizes results for overall scale scores on the ICAs. The methodology is also relevant to computing scale scores on IABs and on reporting categories (claims) of the ICAs, which may be represented in student reports. But student performance on the IABs and ICA claims is represented primarily by classifications into performance categories. Section 2.3 explains classifications into performance categories, presents the methodology for computing the accuracy and consistency of such classifications, and presents associated summaries for IABs and ICA claim scores.

A test’s precision is represented by its measurement error, which is called the standard error of measurement (SEM) for an individual student. The SEM for a given student depends on the student’s achievement score. This dependence gives rise to the notation SEM($$\theta_{i}$$), which means “the SEM for a student whose achievement is represented by the quantity $$\theta_{i}$$,” where $$i$$ is a number representing the student. The $$\theta$$–scale is an Item Response Theory (IRT) scale and generally ranges from -4 (extremely low achievement) to +4 (extremely high achievement) with a mean of zero. Ultimately, measures of achievement and SEMs on the $$\theta$$ scale are transformed to the reporting scale as described in the Smarter Balanced Scoring Specifications (Smarter Balanced, 2016b).

The standard error of measurement for student $$i$$, whose achievement estimate is $$\hat{\theta}_{i}$$, is:

$$$SEM(\hat{\theta}_i) = \frac{1}{\sqrt{I(\hat{\theta}_i)}}, \tag{2.1}$$$

where $$I(\hat{\theta}_{i})$$ is the test information for student $$i$$, which is based on the items taken by the student and calculated as:

$$$\begin{split} I(\hat{\theta}_{i}) = \sum_{j=1}^{n}D^2a_{j}^2 (\frac{\sum_{l=1}^{m_{j}}l^2\exp(\sum_{k=1}^{l}Da_{j}(\hat{\theta}_{i}-b_{jk}))} {1+\sum_{l=1}^{m_{j}}\exp(\sum_{k=1}^{l}Da_{j}(\hat{\theta}_{i}-b_{jk}))} - \\ (\frac{\sum_{l=1}^{m_{j}}l\exp(\sum_{k=1}^{l}Da_{j}(\hat{\theta}_{i}-b_{jk}))} {1+\sum_{l=1}^{m_{j}}\exp(\sum_{k=1}^{l}Da_{j}(\hat{\theta}_{i}-b_{jk}))})^2), \end{split} \tag{2.2}$$$

where $$n$$ is the number of items on the test, $$m_j$$ is the maximum possible score point (starting from 0) for the $$j$$th item, and $$D$$ is the scale factor, 1.7. Values of $$a_j$$ and $$b_{jk}$$ are item parameters for item $$j$$ and score level $$k$$.
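Equations 2.1 and 2.2 can be computed directly. The sketch below is a minimal Python illustration using hypothetical item parameters (the operational parameters are not reproduced here); the bracketed term in Equation 2.2 is the variance of the item score at $$\hat{\theta}_i$$.

```python
import math

D = 1.7  # scale factor from Equation 2.2

def item_information(theta, a, b):
    """Item information under the generalized partial credit model.

    a : discrimination parameter (a_j)
    b : list of step parameters [b_j1, ..., b_jm]; score levels run 0..m
    The bracketed term in Equation 2.2 equals Var(score | theta).
    """
    terms, s = [], 0.0
    for bk in b:                      # cumulative logits for scores 1..m
        s += D * a * (theta - bk)
        terms.append(math.exp(s))
    denom = 1.0 + sum(terms)          # score 0 contributes exp(0) = 1
    e1 = sum(l * t for l, t in enumerate(terms, 1)) / denom      # E[score]
    e2 = sum(l * l * t for l, t in enumerate(terms, 1)) / denom  # E[score^2]
    return (D * a) ** 2 * (e2 - e1 ** 2)

def sem(theta, items):
    """Equation 2.1: SEM(theta) = 1 / sqrt(total test information)."""
    info = sum(item_information(theta, a, b) for a, b in items)
    return 1.0 / math.sqrt(info)

# Hypothetical 2-item test: one dichotomous item, one 3-category item.
print(sem(0.0, [(1.0, [0.0]), (0.8, [-1.0, 1.0])]))
```

Adding items can only increase total information, so the SEM shrinks as the form grows; this is why the short IABs carry more uncertainty than the full ICAs.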

Figure 2.1 to Figure 2.6 show the $$I(\theta_{i})$$ and SEM($$\theta_{i}$$) for ICAs in each content area and grade, conditional on student proficiency over the range of -4 to +4. The shading in these plots shows the distribution of $$\theta$$ in the population of students for each content area and grade. The means and standard deviations of these population $$\theta$$ distributions are shown in Table 2.1.

Table 2.1: PROFICIENCY POPULATION PARAMETERS

| Grade | ELA/Literacy Mean | ELA/Literacy SD | Mathematics Mean | Mathematics SD |
|-------|-------------------|-----------------|------------------|----------------|
| 3     | -0.908            | 1.030           | -1.067           | 1.115          |
| 4     | -0.437            | 1.074           | -0.557           | 1.162          |
| 5     | -0.085            | 1.133           | -0.177           | 1.226          |
| 6     | 0.123             | 1.207           | 0.048            | 1.306          |
| 7     | 0.362             | 1.285           | 0.307            | 1.391          |
| 8     | 0.534             | 1.363           | 0.494            | 1.475          |
| HS    | 0.858             | 1.461           | 0.844            | 1.580          |

The measurement precision of the ICAs for students having the $$\theta$$ distributions represented in Table 2.1 (and illustrated in Figure 2.1 to Figure 2.6) is represented by the marginal reliability coefficient and the root mean squared error (RMSE). These indices are shown in Table 2.2 (ELA/literacy) and Table 2.3 (mathematics). The reliability coefficient is:

$$$\hat{\rho} = 1 - \frac{MSE}{var(\theta)}, \tag{2.3}$$$

where $$var(\theta)$$ is the population variance of true scores. The square of the SD in Table 2.1 was used for $$var(\theta)$$. The MSE is explained below. The reliability of a test is partly a function of its precision and partly a function of true differences in ability among students. A reliability coefficient of 0 indicates that measured differences among students are completely unreliable. A reliability coefficient of 1 indicates that the measured differences, or rank order, among students are completely reliable.

The reliability coefficient, $$\hat{\rho}$$, was calculated for the overall score and claim scores for the ICAs. Reliability and RMSE are not reported for the assessment blocks of the IABs because scale scores are not reported. IAB results are reported in terms of whether the student is below, near, or above standard, where the level 3 cut score is the standard. For computing the reliability of claim scores, student measures of true “claim” achievement were assumed to have the same $$\theta$$ distribution as overall student achievement.

The mean squared error (MSE) is the average of $$[SEM(\theta_{i})]^{2}$$ over a given $$\theta$$ distribution, and the RMSE is the square root of this average. Simulation was used to estimate the MSE. For each grade within subject, true $$\theta$$ values for 1,000 examinees were simulated from a normal distribution with mean and SD equal to the values shown in Table 2.1. Then, for each value of true $$\theta$$, scores for the items on the test under study were generated using the test items’ IRT parameters. An estimate of $$\theta$$ was then obtained from the generated score vector via maximum likelihood estimation. The data simulation and scoring were carried out with flexMIRT software (Cai, 2017). For a given test, the MSE was then estimated as:

$$$MSE = N^{-1}\sum_{i=1}^N (\hat{\theta}_i-\theta_i)^2, \tag{2.4}$$$

where $$N$$ is the number of simulated examinees (1,000).
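The simulation procedure described above can be sketched in a few lines of Python. This is a toy version, not the flexMIRT implementation: it assumes a hypothetical set of dichotomous items, substitutes a coarse grid search for flexMIRT's maximum likelihood routine, and uses a reduced examinee count, but it follows the same steps — draw true $$\theta$$, generate item scores, estimate $$\theta$$, then apply Equations 2.3 and 2.4.

```python
import math
import random

D = 1.7
random.seed(0)  # reproducible toy run

def gpcm_probs(theta, a, b):
    """Category probabilities for scores 0..m under the GPC model."""
    logits, s = [0.0], 0.0
    for bk in b:
        s += D * a * (theta - bk)
        logits.append(s)
    mx = max(logits)
    ex = [math.exp(v - mx) for v in logits]
    tot = sum(ex)
    return [v / tot for v in ex]

def draw_score(theta, a, b):
    """Sample an item score from the model-implied category probabilities."""
    u, c = random.random(), 0.0
    for score, p in enumerate(gpcm_probs(theta, a, b)):
        c += p
        if u < c:
            return score
    return len(b)

def mle_theta(scores, items, step=0.05, lo=-4.0, hi=4.0):
    """Grid-search maximum likelihood estimate (stand-in for flexMIRT)."""
    best, best_ll = lo, -math.inf
    for g in range(int(round((hi - lo) / step)) + 1):
        th = lo + g * step
        ll = sum(math.log(gpcm_probs(th, a, b)[x])
                 for x, (a, b) in zip(scores, items))
        if ll > best_ll:
            best, best_ll = th, ll
    return best

# Hypothetical 20-item dichotomous test; difficulties spread over [-2, 1.9].
items = [(1.0, [-2.0 + 0.2 * j]) for j in range(20)]
mean, sd, N = -0.908, 1.030, 100  # grade 3 ELA/literacy values, Table 2.1

sq_errs = []
for _ in range(N):
    theta = random.gauss(mean, sd)                        # true theta
    scores = [draw_score(theta, a, b) for a, b in items]  # simulated responses
    sq_errs.append((mle_theta(scores, items) - theta) ** 2)

mse = sum(sq_errs) / N    # Equation 2.4
rho = 1 - mse / sd ** 2   # Equation 2.3
```

With only 20 hypothetical items the resulting reliability is well below the values in Tables 2.2 and 2.3, but the estimator and formulas are the same.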

As expected, reliability coefficients for the ICAs are high, and the RMSEs are small and within the acceptable range for a large-scale test.

Reliability estimates are lower and RMSEs are higher for the ICA claim-level scores. Claims with fewer items and fewer score points exhibit the lowest reliability and the highest RMSE.

Table 2.2: RELIABILITY AND PRECISION FOR INTERIM COMPREHENSIVE ASSESSMENTS, ELA/LITERACY

| Grade | Full Test N Items | Full Test Reliability | Full Test RMSE | Claim 1 Reliability | Claim 1 RMSE | Claim 2 Reliability | Claim 2 RMSE | Claim 3 Reliability | Claim 3 RMSE | Claim 4 Reliability | Claim 4 RMSE |
|-------|-------------------|-----------------------|----------------|---------------------|--------------|---------------------|--------------|---------------------|--------------|---------------------|--------------|
| 3     | 48 | 0.93 | 0.28 | 0.84 | 0.46 | 0.81 | 0.50 | 0.65 | 0.76 | 0.62 | 0.81 |
| 4     | 49 | 0.93 | 0.30 | 0.84 | 0.48 | 0.79 | 0.56 | 0.58 | 0.91 | 0.70 | 0.70 |
| 5     | 48 | 0.93 | 0.30 | 0.84 | 0.50 | 0.80 | 0.57 | 0.66 | 0.82 | 0.73 | 0.69 |
| 6     | 49 | 0.94 | 0.32 | 0.82 | 0.56 | 0.84 | 0.54 | 0.64 | 0.90 | 0.69 | 0.81 |
| 7     | 49 | 0.94 | 0.34 | 0.85 | 0.54 | 0.83 | 0.58 | 0.61 | 1.02 | 0.62 | 1.00 |
| 8     | 50 | 0.95 | 0.33 | 0.88 | 0.51 | 0.83 | 0.61 | 0.58 | 1.16 | 0.76 | 0.77 |
| 11    | 46 | 0.93 | 0.41 | 0.83 | 0.67 | 0.78 | 0.78 | 0.68 | 1.01 | 0.69 | 0.99 |

Table 2.3: RELIABILITY AND PRECISION FOR INTERIM COMPREHENSIVE ASSESSMENTS, MATHEMATICS

| Grade | Full Test N Items | Full Test Reliability | Full Test RMSE | Claim 1 Reliability | Claim 1 RMSE | Claim 2 Reliability | Claim 2 RMSE | Claim 3 Reliability | Claim 3 RMSE |
|-------|-------------------|-----------------------|----------------|---------------------|--------------|---------------------|--------------|---------------------|--------------|
| 3     | 37 | 0.93 | 0.31 | 0.87 | 0.44 | 0.69 | 0.76 | 0.72 | 0.70 |
| 4     | 36 | 0.92 | 0.33 | 0.86 | 0.46 | 0.75 | 0.67 | 0.57 | 1.01 |
| 5     | 37 | 0.92 | 0.37 | 0.84 | 0.54 | 0.69 | 0.82 | 0.73 | 0.74 |
| 6     | 36 | 0.92 | 0.39 | 0.86 | 0.54 | 0.65 | 0.96 | 0.67 | 0.91 |
| 7     | 37 | 0.91 | 0.43 | 0.85 | 0.59 | 0.69 | 0.93 | 0.70 | 0.90 |
| 8     | 37 | 0.92 | 0.45 | 0.85 | 0.62 | 0.75 | 0.85 | 0.67 | 1.04 |
| 11    | 38 | 0.91 | 0.50 | 0.84 | 0.69 | 0.71 | 1.02 | 0.64 | 1.19 |

## 2.3 Classification Accuracy

Classification accuracy is defined as the degree of consistency between the observed achievement level (from the observed scores) and the true achievement level (from the population distribution). To calculate classification accuracy, a simulation study was conducted using item-level information and the population parameters (means and standard deviations); this makes it possible to estimate classification accuracy without student-level data at hand. First, true scores for 1,000 simulees were generated from the means and standard deviations shown in Table 2.1. Then, responses from the simulees to the items in the fixed forms (IABs and ICAs) were generated using the parameters and item response models used in scoring these items. From these simulated item responses, scale scores, standard errors, and achievement-level classifications were obtained according to the Smarter Balanced Scoring Specifications (Smarter Balanced, 2016b).

Correct classification by level was computed as the proportion of students, among those assigned to a particular level, whose true achievement level (based on the simulated true score) matched their assigned achievement level (based on the estimated score). The overall correct classification rate is the proportion of students, among those assigned to any level, who are correctly assigned. For the claim scores and interim assessment blocks (IABs), we assume that the true claim or IAB scores are equivalent to the true overall scores; therefore, the true overall score serves as the true claim or IAB score in calculating correct classification rates.
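As an illustration of the correct-classification computation, the following sketch classifies hypothetical (true, estimated) $$\theta$$ pairs using the grade 3 ELA/literacy cut scores from Table 2.4 and tallies the per-level and overall rates. The pairs are invented for illustration, not actual simulation output.

```python
from collections import defaultdict

def level(theta, cuts):
    """Achievement level 1-4: count how many cut scores theta reaches."""
    return 1 + sum(theta >= c for c in cuts)

cuts = (-1.646, -0.888, -0.212)  # grade 3 ELA/literacy cuts, Table 2.4

# Hypothetical (true theta, estimated theta) pairs standing in for simulees.
pairs = [(-2.0, -1.9), (-1.0, -1.2), (-0.5, -0.4), (0.3, 0.2), (-0.9, -0.85)]

matches = defaultdict(list)
for true_t, est_t in pairs:
    assigned = level(est_t, cuts)
    matches[assigned].append(level(true_t, cuts) == assigned)

# Proportion correctly classified among those *assigned* to each level.
rates = {lvl: sum(v) / len(v) for lvl, v in matches.items()}
overall = sum(sum(v) for v in matches.values()) / len(pairs)
```

Note that the denominator of each per-level rate is the count of simulees assigned to that level, matching the definition used in Tables 2.5 through 2.10.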

For overall scores, we used a weighted kappa to describe the accuracy of classifications into the four achievement levels. Claim and IAB scores were evaluated with respect to the cut score between levels 2 and 3, which represents the minimum standard for proficiency for the subject and grade level. For each claim, students are classified as “above” or “below” the standard when the estimated score is at least 1.5 standard errors above or below the cut score; when the estimated score is within 1.5 standard errors of the cut score, the student is classified as “near” the standard. Claim or IAB scores with larger average standard errors can therefore be expected to have a greater proportion of students classified as “near” the standard. Because a “near” classification cannot be treated as a misclassification (“near” is defined only in terms of the standard error of measurement), the proportions correctly classified focus on those students who were classified as “above” or “below.”
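The below/near/above rule reduces to a comparison against a 1.5-SE band around the cut score. A minimal sketch, with hypothetical scores and standard errors:

```python
def claim_category(score, se, cut):
    """Classify a claim or IAB score against the level 3 cut score.

    'above'/'below' require the estimate to be at least 1.5 standard
    errors from the cut; anything closer is 'near' the standard.
    """
    if score >= cut + 1.5 * se:
        return "above"
    if score <= cut - 1.5 * se:
        return "below"
    return "near"

# With a larger SE, the same score falls inside the 'near' band.
print(claim_category(0.5, 0.2, 0.0))  # band is [-0.3, 0.3]
print(claim_category(0.5, 0.5, 0.0))  # band widens to [-0.75, 0.75]
```

This is why blocks with larger average standard errors show larger “near” proportions in the tables that follow.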

Table 2.4 shows the cut scores used for classifying examinees into achievement levels based on their overall test performance. The level 2/3 cut score is also used to classify students by their performance on items specific to a claim—by their claim scores—and their performance on the IABs.

Table 2.4: CUT SCORES FOR ACHIEVEMENT LEVELS

| Grade | Subject      | Level 2 Cut | Level 3 Cut | Level 4 Cut |
|-------|--------------|-------------|-------------|-------------|
| 3     | ELA/literacy | -1.646      | -0.888      | -0.212      |
| 4     | ELA/literacy | -1.075      | -0.410      | 0.289       |
| 5     | ELA/literacy | -0.772      | -0.072      | 0.860       |
| 6     | ELA/literacy | -0.597      | 0.266       | 1.280       |
| 7     | ELA/literacy | -0.340      | 0.510       | 1.641       |
| 8     | ELA/literacy | -0.247      | 0.685       | 1.862       |
| 11    | ELA/literacy | -0.177      | 0.872       | 2.026       |
| 3     | Mathematics  | -1.689      | -0.995      | -0.175      |
| 4     | Mathematics  | -1.310      | -0.377      | 0.430       |
| 5     | Mathematics  | -0.755      | 0.165       | 0.808       |
| 6     | Mathematics  | -0.528      | 0.468       | 1.199       |
| 7     | Mathematics  | -0.390      | 0.657       | 1.515       |
| 8     | Mathematics  | -0.137      | 0.897       | 1.741       |
| 11    | Mathematics  | 0.354       | 1.426       | 2.561       |

Table 2.5 and Table 2.6 show the classification accuracy for the IAB scores in ELA/literacy and mathematics for all grades. Each table shows the proportion of simulees assigned to each category with respect to the level 3 cut score standard and the proportion among those assigned to each category and overall who were correctly classified.

Table 2.5: OVERALL LEVEL CLASSIFICATION ACCURACY FOR INTERIM ASSESSMENT BLOCKS, ELA/LITERACY

| Grade | Block Name | Prop Assigned Below Standard | Prop Assigned Near Standard | Prop Assigned Above Standard | Prop Correctly Classified Below Standard | Prop Correctly Classified Above Standard |
|-------|------------|------------------------------|-----------------------------|------------------------------|------------------------------------------|------------------------------------------|
| 3  | Brief Writes | 0.000 | 0.944 | 0.056 | NA | 1.000 |
|    | Editing | 0.257 | 0.510 | 0.233 | 0.965 | 1.000 |
|    | Performance Task | 0.000 | 0.963 | 0.037 | NA | 1.000 |
|    | Language and Vocabulary Use | 0.236 | 0.506 | 0.258 | 0.983 | 0.992 |
|    | Listen/Interpret | 0.241 | 0.506 | 0.253 | 0.975 | 0.988 |
|    | Read Literary Texts | 0.252 | 0.443 | 0.305 | 0.984 | 1.000 |
|    | Read Informational Texts | 0.231 | 0.509 | 0.260 | 0.978 | 0.992 |
|    | Research | 0.274 | 0.453 | 0.273 | 0.971 | 0.993 |
|    | Revision | 0.245 | 0.493 | 0.262 | 0.988 | 0.996 |
| 4  | Brief Writes | 0.165 | 0.815 | 0.020 | 0.703 | 1.000 |
|    | Editing | 0.234 | 0.552 | 0.214 | 0.970 | 0.981 |
|    | Performance Task | 0.000 | 0.932 | 0.068 | NA | 1.000 |
|    | Language and Vocabulary Use | 0.247 | 0.510 | 0.243 | 0.988 | 0.988 |
|    | Listen/Interpret | 0.221 | 0.575 | 0.204 | 0.964 | 0.980 |
|    | Read Informational Texts | 0.177 | 0.571 | 0.252 | 0.983 | 0.992 |
|    | Read Literary Texts | 0.266 | 0.489 | 0.245 | 0.970 | 1.000 |
|    | Research | 0.258 | 0.481 | 0.261 | 0.973 | 0.996 |
|    | Revision | 0.219 | 0.546 | 0.235 | 0.963 | 0.996 |
| 5  | Brief Writes | 0.238 | 0.762 | 0.000 | 0.710 | NA |
|    | Editing | 0.214 | 0.517 | 0.269 | 0.986 | 0.985 |
|    | Performance Task | 0.272 | 0.705 | 0.023 | 0.812 | 1.000 |
|    | Language and Vocabulary Use | 0.252 | 0.514 | 0.234 | 0.972 | 1.000 |
|    | Listen/Interpret | 0.228 | 0.550 | 0.222 | 0.982 | 0.995 |
|    | Read Literary Texts | 0.177 | 0.555 | 0.268 | 0.977 | 0.985 |
|    | Read Informational Texts | 0.153 | 0.620 | 0.227 | 0.974 | 0.991 |
|    | Research | 0.266 | 0.448 | 0.286 | 0.996 | 0.990 |
|    | Revision | 0.247 | 0.503 | 0.250 | 0.972 | 1.000 |
| 6  | Brief Writes | 0.051 | 0.928 | 0.021 | 0.824 | 1.000 |
|    | Editing | 0.228 | 0.554 | 0.218 | 0.969 | 0.977 |
|    | Performance Task | 0.247 | 0.734 | 0.019 | 0.757 | 1.000 |
|    | Language and Vocabulary Use | 0.269 | 0.521 | 0.210 | 0.989 | 0.986 |
|    | Listen/Interpret | 0.273 | 0.493 | 0.234 | 0.978 | 0.987 |
|    | Read Literary Texts | 0.255 | 0.548 | 0.197 | 0.976 | 0.985 |
|    | Read Informational Texts | 0.264 | 0.506 | 0.230 | 0.989 | 0.991 |
|    | Research | 0.300 | 0.425 | 0.275 | 0.983 | 0.993 |
|    | Revision | 0.246 | 0.580 | 0.174 | 0.967 | 0.977 |
| 7  | Brief Writes | 0.264 | 0.736 | 0.000 | 0.742 | NA |
|    | Editing | 0.160 | 0.673 | 0.167 | 0.969 | 0.976 |
|    | Performance Task | 0.232 | 0.734 | 0.034 | 0.707 | 1.000 |
|    | Language and Vocabulary Use | 0.257 | 0.499 | 0.244 | 0.961 | 0.988 |
|    | Listen/Interpret | 0.265 | 0.527 | 0.208 | 0.981 | 0.966 |
|    | Read Literary Texts | 0.294 | 0.446 | 0.260 | 0.990 | 0.992 |
|    | Read Informational Texts | 0.286 | 0.473 | 0.241 | 0.983 | 0.992 |
|    | Research | 0.207 | 0.549 | 0.244 | 0.986 | 0.984 |
|    | Revision | 0.266 | 0.505 | 0.229 | 0.966 | 0.983 |
| 8  | Brief Writes | 0.228 | 0.772 | 0.000 | 0.741 | NA |
|    | Edit/Revise | 0.295 | 0.510 | 0.195 | 0.969 | 0.974 |
|    | Performance Task | 0.378 | 0.615 | 0.007 | 0.680 | 1.000 |
|    | Listen/Interpret | 0.278 | 0.528 | 0.194 | 0.975 | 0.974 |
|    | Read Informational Texts | 0.246 | 0.494 | 0.260 | 0.980 | 0.992 |
|    | Read Literary Texts | 0.329 | 0.409 | 0.262 | 0.985 | 0.977 |
|    | Research | 0.286 | 0.467 | 0.247 | 0.986 | 0.976 |
| 11 | Brief Writes | 0.376 | 0.624 | 0.000 | 0.676 | NA |
|    | Editing | 0.220 | 0.524 | 0.256 | 0.959 | 0.992 |
|    | Performance Task | 0.155 | 0.845 | 0.000 | 0.755 | NA |
|    | Language and Vocabulary | 0.249 | 0.496 | 0.255 | 0.968 | 0.992 |
|    | Listen/Interpret | 0.276 | 0.491 | 0.233 | 0.975 | 1.000 |
|    | Read Informational Texts | 0.220 | 0.540 | 0.240 | 0.964 | 1.000 |
|    | Read Literary Texts | 0.240 | 0.501 | 0.259 | 0.988 | 0.996 |
|    | Research | 0.235 | 0.452 | 0.313 | 0.979 | 0.997 |
|    | Revision | 0.256 | 0.483 | 0.261 | 0.965 | 0.996 |

Table 2.6: OVERALL LEVEL CLASSIFICATION ACCURACY FOR INTERIM ASSESSMENT BLOCKS, MATHEMATICS

| Grade | Block Name | Prop Assigned Below Standard | Prop Assigned Near Standard | Prop Assigned Above Standard | Prop Correctly Classified Below Standard | Prop Correctly Classified Above Standard |
|-------|------------|------------------------------|-----------------------------|------------------------------|------------------------------------------|------------------------------------------|
| 3  | Geometry | 0.239 | 0.638 | 0.123 | 0.946 | 0.984 |
|    | Measurement and Data | 0.316 | 0.412 | 0.272 | 0.987 | 0.989 |
|    | Number and Operations – Fractions | 0.254 | 0.494 | 0.252 | 0.980 | 0.988 |
|    | Number and Operations in Base Ten | 0.266 | 0.456 | 0.278 | 0.977 | 0.986 |
|    | Operations and Algebraic Thinking | 0.328 | 0.465 | 0.207 | 0.988 | 0.990 |
|    | Performance Task | 0.000 | 0.827 | 0.173 | NA | 1.000 |
| 4  | Performance Task | 0.000 | 0.931 | 0.069 | NA | 1.000 |
|    | Geometry | 0.000 | 0.814 | 0.186 | NA | 0.973 |
|    | Measurement and Data | 0.208 | 0.504 | 0.288 | 0.971 | 0.979 |
|    | Number and Operations in Base Ten | 0.330 | 0.467 | 0.203 | 0.985 | 0.985 |
|    | Number and Operations – Fractions | 0.356 | 0.411 | 0.233 | 0.983 | 0.991 |
|    | Operations and Algebraic Thinking | 0.336 | 0.440 | 0.224 | 0.997 | 0.996 |
| 5  | Geometry | 0.269 | 0.596 | 0.135 | 0.985 | 0.941 |
|    | Measurement and Data | 0.303 | 0.468 | 0.229 | 0.987 | 0.978 |
|    | Number and Operations in Base Ten | 0.330 | 0.467 | 0.203 | 0.979 | 0.975 |
|    | Number and Operations – Fractions | 0.394 | 0.384 | 0.222 | 0.987 | 0.977 |
|    | Operations and Algebraic Thinking | 0.310 | 0.506 | 0.184 | 0.977 | 0.967 |
|    | Performance Task | 0.000 | 0.878 | 0.122 | NA | 0.984 |
| 6  | Performance Task | 0.000 | 0.881 | 0.119 | NA | 0.992 |
|    | Expressions and Equations | 0.404 | 0.400 | 0.196 | 0.988 | 0.990 |
|    | Geometry | 0.211 | 0.567 | 0.222 | 0.981 | 0.991 |
|    | The Number System | 0.377 | 0.419 | 0.204 | 0.989 | 0.990 |
|    | Ratios and Proportional Relationships | 0.371 | 0.463 | 0.166 | 0.987 | 0.982 |
|    | Statistics and Probability | 0.210 | 0.636 | 0.154 | 0.976 | 0.961 |
| 7  | Performance Task | 0.008 | 0.832 | 0.160 | 0.875 | 0.988 |
|    | Expressions and Equations | 0.362 | 0.437 | 0.201 | 0.978 | 0.995 |
|    | Geometry | 0.057 | 0.686 | 0.257 | 1.000 | 0.973 |
|    | The Number System | 0.320 | 0.480 | 0.200 | 0.981 | 0.975 |
|    | Ratios and Proportional Relationships | 0.282 | 0.471 | 0.247 | 0.986 | 0.980 |
|    | Statistics and Probability | 0.266 | 0.491 | 0.243 | 0.977 | 0.963 |
| 8  | Performance Task | 0.346 | 0.654 | 0.000 | 0.910 | NA |
|    | Expressions & Equations I | 0.371 | 0.466 | 0.163 | 0.973 | 0.975 |
|    | Expressions & Equations II | 0.395 | 0.498 | 0.107 | 0.947 | 1.000 |
|    | Functions | 0.338 | 0.434 | 0.228 | 0.994 | 0.965 |
|    | Geometry | 0.243 | 0.548 | 0.209 | 0.984 | 0.986 |
|    | The Number System | 0.234 | 0.573 | 0.193 | 0.987 | 0.969 |
| 11 | Algebra and Functions II – Quadratic Functions, Equations and Inequalities | 0.245 | 0.550 | 0.205 | 0.996 | 0.976 |
|    | Algebra and Functions I – Linear Functions, Equations, Inequalities | 0.430 | 0.475 | 0.095 | 0.984 | 1.000 |
|    | Geometry | 0.281 | 0.496 | 0.223 | 0.996 | 0.978 |
|    | Geometry Congruence | 0.165 | 0.691 | 0.144 | 0.976 | 0.958 |
|    | Geometry Measurement and Modeling | 0.000 | 0.865 | 0.135 | NA | 0.978 |
|    | Interpreting Functions | 0.314 | 0.474 | 0.212 | 0.984 | 0.958 |
|    | Number and Quantity | 0.327 | 0.493 | 0.180 | 0.982 | 0.978 |
|    | Seeing Structure in Expressions and Polynomial Expressions | 0.380 | 0.440 | 0.180 | 0.995 | 0.978 |
|    | Statistics and Probability | 0.293 | 0.617 | 0.090 | 0.969 | 0.933 |
|    | Performance Task | 0.000 | 0.930 | 0.070 | NA | 0.986 |

Table 2.7 and Table 2.8 show the accuracy of the interim comprehensive assessments (ICA) for classifying students into achievement levels (L1 to L4) based on students’ overall test performance. Each table shows the proportion of simulees assigned to each achievement level as well as the proportion who were correctly classified. For example, a proportion of 0.28, or 28%, of the simulated student cases for the grade 3 ELA/literacy ICA were assigned to achievement level 1 (L1). Of these, a proportion of 0.80, or 80%, were truly at achievement level 1 based on the values of the thetas used for them in the simulation.
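The kappa column in the tables below summarizes the full four-level classification in a single chance-corrected index. The report does not spell out the weighting scheme, so the sketch below assumes linear disagreement weights; this is an illustrative assumption, not the documented choice.

```python
from collections import Counter

def weighted_kappa(true_levels, est_levels, k=4):
    """Weighted kappa for levels 1..k (linear weights assumed).

    Weight 1 on the diagonal, decreasing by 1/(k-1) per level of
    disagreement; chance agreement uses the marginal distributions.
    """
    n = len(true_levels)
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = sum(w[t - 1][e - 1] for t, e in zip(true_levels, est_levels)) / n
    pt, pe = Counter(true_levels), Counter(est_levels)
    exp = sum(w[i][j] * pt[i + 1] * pe[j + 1]
              for i in range(k) for j in range(k)) / n ** 2
    return (obs - exp) / (1 - exp)
```

Under this weighting, a one-level misclassification (e.g., assigning L3 to a true L2 student) is penalized less than a two- or three-level miss, which is why kappa values in Tables 2.7 and 2.8 exceed the raw overall correct-classification rates.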

Table 2.7: OVERALL LEVEL CLASSIFICATION ACCURACY FOR INTERIM COMPREHENSIVE ASSESSMENTS, ELA/LITERACY

| Grade | Prop Assigned L1 | Prop Assigned L2 | Prop Assigned L3 | Prop Assigned L4 | Prop Correctly Classified L1 | Prop Correctly Classified L2 | Prop Correctly Classified L3 | Prop Correctly Classified L4 | Overall | Kappa |
|-------|------------------|------------------|------------------|------------------|------------------------------|------------------------------|------------------------------|------------------------------|---------|-------|
| 3  | 0.28 | 0.29 | 0.25 | 0.18 | 0.80 | 0.60 | 0.56 | 0.97 | 0.71 | 0.88 |
| 4  | 0.33 | 0.23 | 0.26 | 0.18 | 0.80 | 0.57 | 0.59 | 0.97 | 0.72 | 0.89 |
| 5  | 0.33 | 0.24 | 0.30 | 0.13 | 0.79 | 0.56 | 0.68 | 0.94 | 0.72 | 0.88 |
| 6  | 0.36 | 0.28 | 0.28 | 0.08 | 0.75 | 0.55 | 0.60 | 1.00 | 0.67 | 0.85 |
| 7  | 0.39 | 0.25 | 0.29 | 0.06 | 0.75 | 0.50 | 0.61 | 1.00 | 0.66 | 0.84 |
| 8  | 0.34 | 0.27 | 0.32 | 0.06 | 0.82 | 0.59 | 0.61 | 1.00 | 0.70 | 0.86 |
| 11 | 0.29 | 0.30 | 0.32 | 0.09 | 0.79 | 0.54 | 0.54 | 1.00 | 0.65 | 0.83 |

Table 2.8: OVERALL LEVEL CLASSIFICATION ACCURACY FOR INTERIM COMPREHENSIVE ASSESSMENTS, MATHEMATICS

| Grade | Prop Assigned L1 | Prop Assigned L2 | Prop Assigned L3 | Prop Assigned L4 | Prop Correctly Classified L1 | Prop Correctly Classified L2 | Prop Correctly Classified L3 | Prop Correctly Classified L4 | Overall | Kappa |
|-------|------------------|------------------|------------------|------------------|------------------------------|------------------------------|------------------------------|------------------------------|---------|-------|
| 3  | 0.33 | 0.24 | 0.30 | 0.13 | 0.87 | 0.63 | 0.62 | 0.96 | 0.75 | 0.90 |
| 4  | 0.29 | 0.27 | 0.26 | 0.17 | 0.82 | 0.76 | 0.66 | 0.88 | 0.77 | 0.90 |
| 5  | 0.33 | 0.31 | 0.18 | 0.17 | 0.88 | 0.67 | 0.58 | 0.91 | 0.76 | 0.89 |
| 6  | 0.34 | 0.29 | 0.20 | 0.16 | 0.88 | 0.76 | 0.62 | 0.93 | 0.80 | 0.92 |
| 7  | 0.34 | 0.26 | 0.22 | 0.19 | 0.88 | 0.72 | 0.64 | 0.87 | 0.79 | 0.91 |
| 8  | 0.40 | 0.26 | 0.20 | 0.14 | 0.81 | 0.63 | 0.53 | 0.95 | 0.73 | 0.89 |
| 11 | 0.41 | 0.29 | 0.21 | 0.09 | 0.85 | 0.63 | 0.63 | 0.94 | 0.75 | 0.88 |

Table 2.9 and Table 2.10 show the classification accuracy for ICA claim scores. These tables show the proportion of simulees assigned to each category of achievement with respect to the level 3 cut score (standard)—below, near, or above—and for each of the “above” and “below” categories, the proportion of those assigned to that category whose thetas were truly above or below the standard.

Table 2.9: CLAIM LEVEL CLASSIFICATION ACCURACY FOR INTERIM COMPREHENSIVE ASSESSMENTS, ELA/LITERACY

| Claim | Grade | Prop Assigned Below | Prop Assigned Near | Prop Assigned Above | Prop Correctly Classified Below | Prop Correctly Classified Above |
|-------|-------|---------------------|--------------------|---------------------|---------------------------------|---------------------------------|
| 1 | 3  | 0.409 | 0.542 | 0.049 | 0.910 | 1.000 |
|   | 4  | 0.181 | 0.648 | 0.171 | 0.961 | 0.965 |
|   | 5  | 0.423 | 0.519 | 0.058 | 0.903 | 1.000 |
|   | 6  | 0.216 | 0.598 | 0.186 | 0.991 | 0.962 |
|   | 7  | 0.523 | 0.435 | 0.042 | 0.893 | 1.000 |
|   | 8  | 0.494 | 0.457 | 0.049 | 0.891 | 1.000 |
|   | 11 | 0.261 | 0.667 | 0.072 | 0.862 | 0.986 |
| 2 | 3  | 0.169 | 0.649 | 0.182 | 0.976 | 0.984 |
|   | 4  | 0.372 | 0.531 | 0.097 | 0.903 | 1.000 |
|   | 5  | 0.296 | 0.411 | 0.293 | 0.990 | 0.997 |
|   | 6  | 0.530 | 0.447 | 0.023 | 0.892 | 1.000 |
|   | 7  | 0.086 | 0.784 | 0.130 | 0.977 | 0.977 |
|   | 8  | 0.345 | 0.536 | 0.119 | 0.974 | 0.992 |
|   | 11 | 0.367 | 0.591 | 0.042 | 0.924 | 1.000 |
| 3 | 3  | 0.045 | 0.807 | 0.148 | 0.933 | 0.993 |
|   | 4  | 0.317 | 0.393 | 0.290 | 0.994 | 0.979 |
|   | 5  | 0.214 | 0.604 | 0.182 | 0.953 | 0.967 |
|   | 6  | 0.123 | 0.695 | 0.182 | 0.959 | 0.984 |
|   | 7  | 0.206 | 0.624 | 0.170 | 0.971 | 0.953 |
|   | 8  | 0.225 | 0.775 | 0.000 | 0.960 | NA |
|   | 11 | 0.307 | 0.465 | 0.228 | 0.974 | 0.991 |
| 4 | 3  | 0.310 | 0.419 | 0.271 | 0.981 | 0.985 |
|   | 4  | 0.106 | 0.717 | 0.177 | 0.972 | 0.994 |
|   | 5  | 0.241 | 0.600 | 0.159 | 0.963 | 1.000 |
|   | 6  | 0.350 | 0.442 | 0.208 | 0.980 | 0.995 |
|   | 7  | 0.370 | 0.394 | 0.236 | 0.986 | 0.996 |
|   | 8  | 0.391 | 0.351 | 0.258 | 0.982 | 1.000 |
|   | 11 | 0.241 | 0.573 | 0.186 | 0.979 | 0.968 |

Table 2.10: CLAIM LEVEL CLASSIFICATION ACCURACY FOR INTERIM COMPREHENSIVE ASSESSMENTS, MATHEMATICS

| Claim | Grade | Prop Assigned Below | Prop Assigned Near | Prop Assigned Above | Prop Correctly Classified Below | Prop Correctly Classified Above |
|-------|-------|---------------------|--------------------|---------------------|---------------------------------|---------------------------------|
| 1 | 3  | 0.389 | 0.351 | 0.260 | 0.982 | 0.996 |
|   | 4  | 0.410 | 0.300 | 0.290 | 0.998 | 0.990 |
|   | 5  | 0.412 | 0.354 | 0.234 | 0.993 | 0.983 |
|   | 6  | 0.428 | 0.335 | 0.237 | 0.998 | 0.983 |
|   | 7  | 0.414 | 0.311 | 0.275 | 0.990 | 0.989 |
|   | 8  | 0.476 | 0.321 | 0.203 | 0.973 | 0.995 |
|   | 11 | 0.448 | 0.482 | 0.070 | 0.944 | 1.000 |
| 2 | 3  | 0.000 | 0.719 | 0.281 | NA | 1.000 |
|   | 4  | 0.334 | 0.529 | 0.137 | 0.973 | 0.993 |
|   | 5  | 0.149 | 0.699 | 0.152 | 0.987 | 0.987 |
|   | 6  | 0.166 | 0.681 | 0.153 | 0.982 | 0.987 |
|   | 7  | 0.259 | 0.489 | 0.252 | 0.977 | 0.968 |
|   | 8  | 0.453 | 0.438 | 0.109 | 0.962 | 0.991 |
|   | 11 | 0.420 | 0.383 | 0.197 | 0.998 | 0.975 |
| 3 | 3  | 0.192 | 0.715 | 0.093 | 0.958 | 1.000 |
|   | 4  | 0.000 | 0.857 | 0.143 | NA | 0.979 |
|   | 5  | 0.453 | 0.496 | 0.051 | 0.940 | 1.000 |
|   | 6  | 0.097 | 0.728 | 0.175 | 0.969 | 0.989 |
|   | 7  | 0.290 | 0.648 | 0.062 | 0.986 | 1.000 |
|   | 8  | 0.271 | 0.650 | 0.079 | 0.967 | 0.962 |
|   | 11 | 0.237 | 0.716 | 0.047 | 0.979 | 0.957 |