Standardized Testing Shortcomings



The Short Version

ConceptualMath will close in 2019. This page will be archived.

Has your child been mislabeled as a result of standardized testing? Has your school lowered its standards, or eliminated important extracurricular activities, because of standardized testing? What should you know? What should you do?

Since I first wrote my evaluation of a specific standardized test, I have been contacted by teachers and parents. They know that the test has problems; they want to know what information will help them make their case and what they might be able to do. This is the short version.



The Short Discussion of Testing Problems

Although this list is specific to MAP Testing, the problems listed will apply to many other standardized tests.

Accuracy Problems

  • MAP is based on the assumptions that all students learn material in the same order, and that all schools teach it in the same order. The RIT scores are no more accurate than these assumptions. (detail)
  • MAP levels are based on academic levels (curriculum sequencing), not cognitive levels (Bloom's Taxonomy). A high RIT score can represent low-level cognition applied to advanced curriculum material. Thus, MAP is not a good measure of gifted performance, and should not be used to guide decisions for gifted programs. (more)
  • MAP does not provide information as to why a student is having difficulty with specific material. MAP does not even provide specific information as to what material students need to work on. As a result, MAP does not provide useful information to guide teachers in meeting students' real needs. (More & explained)

Precision Problems

  • All decisions based on test results should consider the 2-sigma range. NWEA reports sigma to be about 3 (except for high RIT scores, where sigma is even larger). Thus, all decisions should allow for the possibility that each reported score could have been 6 points higher or 6 points lower. Many schools that have retested with MAP have seen evidence that sigma is actually larger than 3. (example & explained)
  • RIT scores are actually determined by averaging strand scores together. Since a typical MAP test has 50 questions and 5 strands, only about 10 questions are asked per strand. The uncertainty for a 10-question multiple-choice test is rather large. (example & explained)
  • For high RIT scores (scores that are typical for 8th grade and above), the SEM is larger than a typical year's growth. (More)
    o Thus, MAP is not only unable to identify what skills a student may need; it is also unable to precisely determine the grade level at which a student is performing.
    o Growth scores vary more from testing uncertainty than from student progress.
  • For very high RIT scores, negative growth is more common than positive growth. This implies that either MAP is totally unreliable for high performers, or that MAP testing reinforces educational strategies that are counterproductive for high achievers. (More)
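
The arithmetic behind these bullets is easy to check yourself. Here is a minimal sketch in Python; the reported RIT score, the 10-question strand size, and the 70% correct rate are illustrative assumptions, not NWEA data:

```python
import math

# NWEA reports sigma (the SEM) to be about 3 RIT points.
sigma = 3.0

# A 2-sigma window: any decision should allow for this much spread.
reported_rit = 210  # hypothetical reported score
low, high = reported_rit - 2 * sigma, reported_rit + 2 * sigma
print(f"A reported RIT of {reported_rit} could plausibly be anywhere "
      f"in [{low:.0f}, {high:.0f}].")

# Strand-level uncertainty: roughly 10 questions per strand on a
# 50-question, 5-strand test.  Treating the proportion correct as
# binomial, the standard error is sqrt(p * (1 - p) / n).
n_questions = 10
p_correct = 0.7  # illustrative assumption
se = math.sqrt(p_correct * (1 - p_correct) / n_questions)
print(f"Standard error on a 10-question strand: {se:.2f}, "
      f"about {se * n_questions:.1f} questions either way.")
```

With sigma of 3, the 2-sigma window spans 12 RIT points, which is why a single reported score cannot pin down a grade level.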

Cost-Benefit Considerations

  • MAP testing is very costly in terms of dollars, instructional time lost, and burdens placed on teaching staff. Wouldn't those resources be better spent on something that helps students, or provides teaching staff with precise and accurate information? (More)


Parents: What Should You Do?

Now that you have reason to believe that testing may have actually lowered your school's standards, or led to unjustified decisions about your child, what should you do? It is your job to make your school demonstrate that its testing is reliable, and it is the school's job to demonstrate that its decisions recognize the known limits of the tests.

  1. Ask your school for a report on growth scores (changes from previous scores to the most recent, for a particular subject or grade level). NWEA's own technical manual shows that for high school students, particularly high achievers, scores are more likely to decline than rise. This creates a talking point for you: is the test unreliable, or has the test documented unsuccessful programs and teaching at your school? Either is a problem that the school should address. (demo)
  2. Ask the school to report successive strand scores from test to test for specific individuals. The fluctuation in strand scores is much higher than the fluctuation in RIT scores; the RIT scores are stabilized by statistically probable opposing strand scores balancing out. Ask the school to explain what it means when strand scores fluctuate so much (e.g., did your child's understanding of geometry actually drop significantly, or is the drop part of the intrinsic uncertainty of the test?). (Demo)
  3. Ask the school to report the correlation coefficient between the state test scores and the MAP test scores for a specific grade or subject. Request that the report include a graph. Ask them to explain how they feel scores that deviate wildly (i.e., a student who scores much lower on one test than the other) should be interpreted, and what policy implications this should have for dealing with students. (Demo)
  4. Ask your school board to report a cost-benefit analysis of MAP testing. MAP is very expensive for the school. If MAP is not highly reliable (both accurate and precise) then it is not a wise expenditure for the school district. (Demo)
  5. Ask your school to state their official philosophy or mission. If they claim to have a mission of supporting high-level cognition (the top levels of Bloom's Taxonomy), or if they claim to be expeditionary (project-based), ask them to explain how this test actually supports that mission. Testing tends to reinforce low-level skills and discourage high-level teaching. (More)
  6. Contact your senator. Standardized testing was mandated by the No Child Left Behind Act of 2001 (NCLB), pushed through by President Bush. It significantly increased the cost of education. Many schools responded by spending more time on test prep, spending less time on real curriculum, and eliminating extracurricular activities. Many evaluators have determined that NCLB has failed to produce real results, and may even be producing negative results.
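
For step 3, you do not need special software to check a correlation yourself. A minimal sketch, with invented score lists (the numbers below are illustrative, not real state or RIT data):

```python
import math

# Invented example data: one state-test score and one MAP RIT score
# per student.  Real numbers would come from the school's reports.
state_scores = [320, 355, 390, 410, 300, 375, 340, 365]
rit_scores   = [205, 215, 228, 212, 198, 224, 210, 218]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(state_scores, rit_scores)
print(f"Correlation between state and MAP scores: r = {r:.2f}")
```

A low r, or a graph showing students scattered far from the trend line, is exactly the conversation starter this step describes.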



Teachers: What should you do?

  1. Be honest when parents ask. Do the test scores match what you are observing in the classroom?
  2. When the test scores deviate wildly from what you are observing, report the problem to the administrators. Be sure that you can tell parents that the administrators are aware of the problem.
  3. If your school is considering adopting a standardized test, ask the salesperson to demonstrate how the score reporting will help you. Give specific, common examples from your experience.
    1. "I have students who can solve two-step equations with minimal difficulty, but frequently confuse the rules for negatives. How will the test report let me know what specific help these students need?" Be sure the testing company can give good answers to these questions.
  4. Ask the test salesperson to demonstrate how the test distinguishes between high level learning and accelerated knowledge.
  5. Recommend assessments that identify specific strengths and weaknesses, as opposed to tests that rank for tracking purposes. (distinction)
  6. Keep copies of data and make your own growth graphs. That is how our school first demonstrated what the teachers suspected: that MAP Testing was not reliable enough to provide useful information. (Demo)
  7. Ask teachers at other schools using the same test how reliable their test results were, and how informative the results were.
  8. If you get a bad review based on test results alone, file a grievance. Make the school demonstrate that the test was both precise and accurate enough to justify the results.
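
The growth graphs in step 6 can start as something very simple. A sketch, with invented fall and spring scores and the sigma of about 3 discussed earlier on this page:

```python
# Invented fall/spring RIT scores for five students; the SEM value
# follows the sigma of roughly 3 discussed earlier on this page.
SEM = 3.0
noise_band = 2 * SEM  # a swing of up to 6 points may be pure noise

fall_scores   = [210, 215, 208, 221, 217]
spring_scores = [213, 214, 215, 219, 224]

for fall, spring in zip(fall_scores, spring_scores):
    growth = spring - fall
    verdict = "within noise" if abs(growth) <= noise_band else "real change"
    print(f"fall {fall} -> spring {spring}: growth {growth:+d} ({verdict})")
```

In this invented batch, three of the five "growth" numbers fall inside the test's own noise band. If your real data look like this, the growth scores are telling you more about the test than about the students.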
