CDAS Research

Classroom Diagnostic Assessment System CDAS™

Foundational Research and Supporting Evidence


In 2002, Robert Marzano published his book, “What Works in Schools: Translating Research into Action” (ASCD). In it he conducted a meta-analysis of 35 years’ worth of educational research on student achievement. His analysis identified the major factors affecting student achievement and organized them into three categories: (1) school-level factors, (2) teacher-level factors, and (3) student-level factors.

School-level factors are those that deal with school policy and school-wide decisions and initiatives; the curriculum, school environment, collegiality, and professionalism are all examples. The most important of these factors for student achievement is “a guaranteed and viable curriculum.”

To understand CDAS™, one must understand what Marzano meant by the concept of “guaranteed and viable.” According to Marzano, a guaranteed and viable curriculum is contingent upon (1) time and (2) the opportunity to learn (OTL). Teachers need sufficient time to teach the standards students will be held accountable for learning. For most schools (including AUSL) this has two implications. The first is to reduce the number of standards (aka “benchmarks”) teachers must cover before the ISAT so they have time to teach (not simply cover) key standards. Marzano defines viable as realistic: realistic in terms of the number of standards to be covered and the time it takes to teach for mastery. The second is to ensure that teachers align lessons and activities to these key standards. According to Marzano, “OTL has the strongest relationship with student achievement of all school-level factors.”

Opportunity to Learn (OTL) was first introduced more than 30 years ago by the International Association for the Evaluation of Educational Achievement. The concept recognizes that there exist three curricula, not one, in schools: (1) the written/intended, (2) the taught/implemented, and (3) the attained/assessed curriculum. The written curriculum is whatever is stated in district or state documents. The taught curriculum is what teachers actually teach. The assessed curriculum is what is actually assessed in the classroom by the teacher. The degree of alignment among these three curricula is directly related to OTL.

[Diagram: three overlapping circles labeled Written, Taught, and Assessed. Opportunity to Learn (OTL) = Degree of Alignment]
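The idea that OTL corresponds to the degree of overlap among the three curricula can be made concrete with a toy calculation. The sketch below is purely illustrative and not part of CDAS™: the standard IDs are invented, and the Jaccard overlap is just one assumed way to quantify alignment between two curricula modeled as sets of standards.

```python
# Illustrative only: model each curriculum as a set of standard IDs and
# measure alignment as the Jaccard overlap (intersection over union).
# The IDs below and the choice of metric are assumptions, not CDAS(TM) features.

def jaccard(a, b):
    """Overlap between two sets: 1.0 = perfect alignment, 0.0 = none."""
    return len(a & b) / len(a | b)

written  = {"6.8.01", "6.8.02", "6.8.03", "6.8.04"}   # state/district documents
taught   = {"6.8.01", "6.8.02", "6.8.05"}             # what lessons actually cover
assessed = {"6.8.01", "6.8.03"}                       # what classroom tests measure

print(f"written vs taught:   {jaccard(written, taught):.2f}")
print(f"written vs assessed: {jaccard(written, assessed):.2f}")
print(f"taught vs assessed:  {jaccard(taught, assessed):.2f}")

# Standards in the written curriculum that are not both taught and assessed
# mark gaps in students' opportunity to learn.
otl_gaps = written - (taught & assessed)
print("OTL gaps:", sorted(otl_gaps))
```

In this toy model, perfect overlap of all three sets would give every pairwise score a value of 1.0 and leave no OTL gaps; any misalignment shows up as a lower score and a nonempty gap set.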

Although ideally there would be a perfect overlap (see diagram) of the three curricula, this is rarely the case; misalignment is more often the norm. One source of misalignment between the written and taught curricula can be traced to how a teacher interprets a standard. Less experienced teachers often interpret a standard superficially because they lack content knowledge and experience with curricular materials. More veteran teachers often embellish and add to a standard because they infer connections. Other teachers might follow an adopted text religiously without questioning whether a lesson actually addresses the standards, often out of the belief that an adopted text is aligned to their particular state standards. This is a naive assumption, since textbooks are developed for the national market rather than for individual states; misalignment follows, regardless of what a publisher may proclaim to boards of education.

Another discrepancy arises in how a standard is assessed in the classroom. More often than not, both teacher-made and publisher-made tests do not adequately reflect what is intended in the written standard. This is particularly true when one analyzes the cognitive rigor of the questions found on a teacher-made test or in a publisher’s item bank. Research in this area has revealed that the majority (87%) of teachers’ questions (whether written, asked, and/or assigned) require only lower-order thinking, whereas 60-75% of the questions on state and national tests are purposely written at the higher orders of Bloom’s Taxonomy or to ETS (Educational Testing Service) guidelines. There is thus a significant disconnect between the level of rigor seen in classrooms and the level of rigor implied by the written curriculum as reflected in state tests. As a result, students are denied an opportunity to learn at the same cognitive level at which they are being tested.

Each of the discrepancies described can impact a student’s opportunity to master a standard. Marzano found that 78% of all schools in this country could trace their achievement issues to a lack of opportunity to learn. In other words, what was written ultimately did not match what was taught or what was assessed in the classroom.

CDAS™ increases OTL by increasing alignment among the three curricula. It accomplishes this in three ways. First, to eliminate ambiguity in state standards/benchmarks, each benchmark is “unpacked” to provide greater clarity in the written curriculum. Teachers do not have to guess or infer what they should be teaching at their particular grade level: the unpacked benchmark identifies the knowledge, conceptual understandings, skills, and reasoning (higher-order thinking) required, though not explicitly stated, by the benchmark, and it becomes a tool teachers can use to make more strategic instructional decisions. Second, to ensure alignment between the written and assessed curricula, a sample formative diagnostic assessment was developed. The questions on the diagnostic are written at three cognitive levels and mirror the design criteria used by ETS and the ISAT, ensuring that teachers have a model of the types of higher-order questions students will encounter on state tests. Last, once the unpacked benchmark and assessment tools are created, they can be used to analyze publisher materials for alignment; teachers can fill in the gaps with supplemental materials, thereby ensuring that students have an opportunity to learn the material at the level of rigor expected.

CDAS™ is also supported by the research conducted by Doug Reeves and described in “Making Standards Work.” Reeves conducted his research on the 90-90-90 schools (90% poverty, 90% minority, 90% achieving at or above national norms). His insights on identifying key (aka essential, power) standards, unpacking, establishing performance bands, and frequent formative assessment are at the heart of CDAS™. While state and district benchmark (aka interim) tests provide a “broad” picture of student achievement for groups and schools, these tests cannot and do not provide classroom teachers with timely data on individual students or individual benchmarks. CDAS™ was designed to fill this void. The work of Stiggins, Leahy, and Schmoker, along with Reeves, supports the use of frequent, formative daily assessment as the key to improving student achievement. CDAS™ provides teachers with a bank of model questions that can be used to assess students throughout the instructional process. As a tool, CDAS™ serves a variety of teaching functions: it can inform the planning process, provide teachers with questions that can be used to check for understanding, determine mastery, and diagnose student learning needs. The data can serve as the focus for analysis and problem solving among grade-level teams, or what Richard DuFour terms “professional learning communities” (PLCs). Used effectively, CDAS™ tools improve instructional alignment, increase the opportunity to learn, and result in increased student performance.

 

How Teachers Feel about CDAS™

ALL GRADE LEVELS, NATIONWIDE IMPLEMENTATION (n>750)

Percent responding “Strongly Agree” or “Agree”:

Having a CDAS™ system has helped me become more effective as a teacher: 98%
Student data from a CDAS™ tool is helpful to me: 98%
I used the unpacked standards (or benchmarks or objectives) to help me plan my lessons: 90%
I find the pacing guide that has been developed with the CDAS™ tools to be helpful in keeping my class aligned to other classrooms in the school: 59%
I used the CDAS™ tools to help me determine when a student has mastered a particular objective or benchmark: 96%

 

How Teachers are Using CDAS™

ALL GRADE LEVELS, NATIONWIDE IMPLEMENTATION

Percent responding “Sometimes” or “Always”:

To evaluate my textbook materials and identify where I might need to supplement: 79%
To assess the effectiveness of my lessons and determine areas for re-teaching: 96%
To help me formulate higher-order questions and activities for my students: 79%
To help me identify the needs of specific groups of students so I can differentiate my instruction: 86%
To help me determine what instructional objectives I should teach: 91%