Accountability in a Competency-Based System

By Elliott Asp and Rebecca Holmes   |  April 30, 2018

At CEI, we define competency-based education (CBE) as a system in which each student moves through school and graduates based on demonstrated mastery of a transparent set of knowledge, skills, attitudes, and behaviors required for success after high school that is commonly understood by students, teachers, and parents. A key component of CBE is a personalized approach that promotes student ownership of learning by tailoring educational experiences to students’ strengths, needs, and interests. Underlying CBE is the fundamental belief that students and teachers have control over the learning process – that is, students can direct their own learning and teachers are empowered to adjust and revise their practice based on student progress. When CBE is implemented well, students’ agency and ownership of their own learning skyrocket.

CEI’s ongoing commitment to CBE comes from the belief that steps toward a competency-based system are critical if we wish to evolve past the factory model that, despite heroic efforts of many educators, is still evident in the underlying design of the American education system. Additionally, we have partners in Colorado who are excited about CBE because of the opportunity it presents to shift who determines readiness of a student to move on from high school. For example, many districts/schools are developing capstone projects that require students to present their work to demonstration committees that include industry and community members. This has the power to drive a “real world” definition of readiness into the system.

CBE, Continuous Improvement, and State Ratings: Are They Working at Cross Purposes?

To ensure that every student is postsecondary and workforce ready and to address specific inequities in student achievement, several pioneering districts/schools across Colorado have been cautiously and deliberately moving in this direction. As momentum for CBE builds and more districts and schools implement its elements, they are also realizing that our current state accountability system does not support this work. The design of the current accountability system is rooted in a traditional view of education where students move through school based on “seat-time” and grade levels. This means that all students in a particular grade take the same state test within the same time frame during the school year. Juxtapose this against a CBE system, where a student’s grade level takes a back seat to their progress in mastering specific standards/goals. How far back that back seat is depends on each local system’s approach. Some schools may still loosely use grade levels, while in others, students may be classified by their current achievement level in specific subjects rather than by their age and how long they have been in school. An accountability system designed to support CBE and a more personalized approach would enable students to take the state test that is aligned with their achievement level at a time when they (and their teachers) feel they are ready to demonstrate their learning. Certainly, there would be guardrails in this system that reflect the reality that time toward mastery is different and less high stakes in earlier grades than in later grades.

The fundamental purpose of a CBE approach is to meet all kids where they are and move them towards readiness for life, learning, and work beyond high school by providing the support and motivation each student needs and by giving them clear, accurate data about their own developing skills and competencies. An accountability system that supports CBE would reflect this emphasis on progress toward mastery. The Colorado growth model was an attempt to provide a measure of individual student progress that could be aggregated across a school and district and be included in the school/district rankings, but districts that are redesigning their work around CBE report finding it insufficient.

At the heart of this issue is an inherent conflict between the two major purposes of our educational accountability system. We want to rate the performance of schools and districts and identify those in need of intervention while at the same time fostering a culture of continuous improvement that promotes student and teacher ownership of learning. Both these purposes are important. However, the emphasis in our current system is clearly on ranking and intervention at the expense of continuous improvement. In prior EdPapers in this series we’ve addressed this at the school and system level, but the incongruence with CBE is more stark at the individual student level. A student labeled “proficient” on 8th grade standards they may have mastered years earlier learns far less from that label than a competency-based system could tell them about their progress toward rigorous and personally relevant goals and postsecondary and workforce readiness (PWR).

Comparison vs. Improvement

Comparing the performance of districts and schools by assigning them some sort of ranking or rating is a high-stakes process, especially for those whose rating may cause them to face state sanctions and/or loss of local control. This means the data and process used to assign ratings must be highly standardized with little room for differences across districts and schools. Given the small number of schools and districts that face sanctions, we have created a system that restrains innovation and progress for all schools in order to identify and target the few who are truly struggling. That makes it difficult to adjust state testing levels and times based on student progress or to make other changes that would be supportive of a CBE system.

The most useful data for continuous improvement – both for schools and for students – is timely, actionable, grounded in the needs of specific students and/or the interests and needs of the district/school, and drawn from a variety of leading indicators. The data from our current accountability system does not meet those requirements for several reasons:

  • The data are not timely – ratings based on spring testing are not released until well into the next school year.
  • The data are not actionable – they offer only general, high-level feedback on student performance.
  • The data are not relevant to local needs and interests.
  • Most of the data come from school/district performance on a single indicator (the state test), and because of the emphasis on high-stakes summative purposes, there is little opportunity to include other measures that, while less standardized, provide insight into root causes of school and district performance and reflect local interests and needs.

In our February EdPaper on the Student-Centered Accountability Project (S-CAP), we documented the value local districts put on information from less formal sources such as feedback from students, parents, and teachers, as well as classroom observations. Leaders and teachers own it and feel empowered and responsible for using it to improve – that is, they have agency over that data. While the process and parameters for gathering this kind of information vary to some degree across the S-CAP districts, the data is still relatively accurate, reliable, and, most importantly, useful for continuous improvement purposes. This is because the data is not used for high-stakes decisions. The S-CAP process is an example of the kind of support and incentives for continuous improvement that our accountability system should provide. Currently, districts and schools are engaging in this kind of work despite the system rather than as part of the accountability process. In fact, most districts/schools pay little attention to the ratings from the current system unless they are subject to state intervention.

Support for Continuous Improvement

While pilots are emerging across the country, no state has yet moved to an accountability system that is aligned with a CBE approach. Doing so is a complicated undertaking that demands a robust statewide discussion of the purpose, the value, and the unintended consequences of school accountability systems. Below we pose a few ideas to start this conversation.

  1. Use the state assessment for the purposes it was designed for – to provide a common measure of student achievement.
  2. Stop trying to use the state assessment for purposes it was NOT designed for – to provide actionable information for continuous improvement for students, schools, and districts.
  3. Use a combination of local and state common performance assessments – which teachers, schools, and districts could administer based on the needs and readiness of students – to inform teacher judgment about student progress towards PWR.
  4. Consider using teacher judgment (informed by common local and state assessments) to determine student achievement of standards and use the large-scale state test to calibrate and vet teacher judgment.
  5. Incentivize districts to engage in “SCAP-like” site reviews by providing protocols, processes and other resources.
  6. Imagine a different, baseline system for evaluating and rating schools that have perpetually struggled to serve all students – one that doesn’t purport to communicate the quality of all schools.

To support these suggestions, we could:

  • Make the state test much shorter (NAEP-like) and use a sampling process so that not every student has to take the state test every year.
  • Use an adaptive approach to state testing so students could take the state assessment at their level.
  • Develop growth projections that would enable the state, districts, schools, and most importantly students, to track progress towards PWR.
  • Pilot different versions of a site review process to investigate how this might be scaled at a regional or state-wide level.

Of course, this is not an exhaustive list of things we could explore, and there are many other potential revisions and improvements to our current accountability system that would make it more supportive of CBE and continuous improvement. Most importantly, we need to establish a learning agenda around accountability that could lead to thoughtful changes in the system, wide-scale practitioner and public support for the process, and a healthier, more useful balance between the high-stakes purposes of ranking, rating, and punishing and the purpose of continuous improvement. In the course of writing our accountability series over the last few months, we have found ourselves in the middle of a quiet debate. Practitioners in the field are eager for this learning agenda because they have experienced the unintended consequences of our current system and seen the immense amount of state time and resources directed toward maintaining it. Advocates, however, worry that publicly asking questions about the current system risks losing years of political progress toward transparent data on schools. This chasm is one we must address if we want education policy in our state to remain an effective tool for moving us toward equitable outcomes for students and public investment in education.

Since January, CEI has engaged the education community in an ongoing dialogue about Colorado’s accountability system. We believe there is an exciting opportunity at hand to revisit our priorities and examine the limitations of our current system with an innovator’s mindset. Each month, we have published an EdPaper that takes a deeper look at the subject of school accountability. We invite you to be part of the conversation and share your thoughts with us via email at accountability@coloradoedinitiative.org.

JANUARY EdPaper: The Nature of Accountability is Ready for Change

FEBRUARY EdPaper: S-CAP (Student-Centered Accountability Project) and its Implications for Local Measures

MARCH EdPaper: High School Accountability

APRIL EdPaper: Accountability in a Competency-Based World

MAY EdPaper: Diverse Voices, Current Voices: Elevating the experience of students of color and of recent graduates

JUNE EdCast: School Accountability learnings and takeaways

