Guiding Questions for MSL Systems: Measure Selection

Individual and Collective Attribution
The S.B. 10-191 rules also require that teachers’ evaluations include both an individually attributed and a collectively attributed measure of student learning.

Have you ensured that your district decisions about the relative weight of individual and collective measures are defensible and reflect your district’s values?

  • How might the division of individual and collective attribution motivate educators to prioritize certain activities? Are those the activities you want educators to focus on?
  • Has your district ensured that collective measure(s) are meaningful for all educators?
  • What evidence do you have to suggest that the relative weights of the individual and collective measures do not artificially adjust the overall MSL/Standard VI rating either higher or lower (e.g., by heavily weighting a collective measure where every educator is assured a particular rating)?

    • Have you also considered how inflated or deflated ratings on MSLs may impact the accuracy of overall educator ratings? (A brief weighting illustration follows this list.)
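
To make the concern in the questions above concrete, the short sketch below works through the arithmetic of combining an individually attributed and a collectively attributed rating. The 1–4 scale, the 80/20 and 30/70 weighting splits, and the "guaranteed" collective rating are hypothetical assumptions for illustration only, not CDE methodology.

```python
# Hypothetical illustration (not CDE methodology): how the split between an
# individually attributed and a collectively attributed measure can shift the
# overall MSL rating when the collective measure is effectively guaranteed.

def overall_msl(individual_rating, collective_rating, individual_weight):
    """Weighted combination of two ratings on an assumed 1-4 scale."""
    collective_weight = 1.0 - individual_weight
    return individual_weight * individual_rating + collective_weight * collective_rating

# Suppose a collective measure (e.g., a school-wide goal) is written so that
# nearly every educator earns a 4 ("higher than expected").
guaranteed_collective = 4

for individual in (1, 2, 3, 4):
    light = overall_msl(individual, guaranteed_collective, individual_weight=0.8)
    heavy = overall_msl(individual, guaranteed_collective, individual_weight=0.3)
    print(f"individual={individual}: 80/20 split -> {light:.1f}, 30/70 split -> {heavy:.1f}")

# With a 30/70 split, even an educator whose individual measure is a 1 lands at
# 3.1 overall, so the collective measure does most of the work of setting the
# Standard VI rating.
```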
Selecting Weights and Measures
Selecting weights and measures for inclusion in an MSL system is critically important because these choices signal your district’s priorities and values. These decisions also provide an opportunity to communicate expectations about student learning and teaching. Step 2 of CDE’s Measures of Student Learning guidance provides directions for selecting and weighting measures.

How has your district ensured that the selection of measures and the determination of their associated weights were made purposefully and in alignment with your goals and values?

  • Do the relative weights of the MSL measures used for each educator reflect the instructional priorities for that grade and content area?
    • What is the minimum number of measures (i.e., pieces of the pie) you feel you need to achieve a comprehensive body of evidence?
    • How have you balanced the need to have enough measures to give a comprehensive view of each educator’s work with students without slicing the pie into too many small pieces?
  • Have you considered potential duplication in measure selection? That is, are you inadvertently using the same assessment or measure more than once [e.g., SPF growth ratings and school reading median growth percentiles (MGPs), both of which are based on CMAS reading growth percentiles]?
    • If you are using an assessment or measure more than once, does the total weight reflect the value of that assessment within the philosophy of your district?
  • How have you accounted for issues arising from cycles of instruction and assessment schedule(s) when selecting specific measures?
    • Do assessment timelines match cycles of instruction in order to enable the provision of meaningful information to teachers?
    • Will the measures selected have results that are available two weeks prior to the last day of school?
  • Do you have protocols or feedback loops in place to determine whether your MSL results are consistent with other measures of educational success in your district? For instance, you could examine the correlation between MSLs and Professional Practices or how the results of assessments used in your MSLs correlate with other assessment results. (A simple sketch of one such check follows this list.)
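
As one concrete version of such a feedback loop, the sketch below computes a simple correlation between MSL ratings and Professional Practice ratings for the same educators. The ratings shown are made-up placeholder data on an assumed 1–4 scale; a district would substitute its own evaluation results, and a spreadsheet would work just as well as code.

```python
# Illustrative sketch of one possible feedback loop: checking whether MSL
# ratings track Professional Practice ratings across educators. The data
# below are hypothetical placeholders, not real district results.
import statistics

# Hypothetical paired ratings for the same ten educators (assumed 1-4 scale).
msl_ratings = [2, 3, 3, 4, 1, 2, 4, 3, 2, 3]
practice_ratings = [2, 3, 2, 4, 2, 2, 4, 4, 1, 3]

corr = statistics.correlation(msl_ratings, practice_ratings)  # Pearson's r (Python 3.10+)
print(f"Correlation between MSL and Professional Practice ratings: {corr:.2f}")

# A very low (or negative) correlation does not prove either measure is wrong,
# but it is a signal to examine which measures, targets, or weights are
# producing results that diverge from other evidence of educational success.
```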
Target Setting
For each measure included in the MSL system, districts should approach target setting thoughtfully, taking students’ baseline performance into account, in order to evaluate whether student academic growth is much lower than expected, lower than expected, expected, or higher than expected (as outlined in CDE’s Measures of Student Learning guidance).

Are your targets reasonable and defensible?

  • What baseline data can you use to assist in setting targets?
    • How are you ensuring that the source of baseline data is appropriate and comparable across like content and grade level groups?
    • When compared to available baseline data, are the targets set for each educator ambitious but achievable?
    • Do your targets ensure that students will make significant learning gains, sufficient to progress to proficiency or other performance standards established through the Colorado Academic Standards, local standards, or other adopted learning goals (e.g., through a 504 plan or formal IEP)?
  • Are the measurement and statistical methods used to set targets and evaluate performance appropriate for your context, the available data, and the analytical capacity of your district?
    • Does your district have the technical skills or analytical capacity to execute planned analyses and/or growth calculations? If not, can you identify other approaches that are less methodologically rigorous but still provide meaningful information to educators about student and teacher performance?
    • Have you considered consulting other districts or outside providers (e.g., institutions of higher education or technical assistance providers) for assistance in creating or evaluating your targets?
    • Can you communicate the targets and analyses to educators in accessible, understandable ways?
    • Have you considered the number of students (i.e., n-size) in your district, both overall and assigned to each educator, to determine whether there are methodological limitations based on n-size? (A small illustrative sketch follows this list.)
  • Does your district have a process to evaluate the defensibility and rigor of targets once they have been set? If not, is there a plan in place to develop such a process?
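
The sketch below illustrates one simple way a district might turn baseline data, end-of-year results, and a target into the four growth categories named above, with a minimum n-size check. The target gain, the category cut points, and the n-size threshold are hypothetical assumptions for illustration; they are not CDE requirements and would need to be set and defended locally.

```python
# Illustrative sketch only: one simple way to categorize growth against a
# target using baseline data, with a minimum n-size check. The target, bands,
# and n-size threshold below are hypothetical, not CDE requirements.

MIN_N = 16  # hypothetical minimum number of students for a defensible result

def categorize_growth(baseline_scores, end_scores, target_gain):
    """Return a growth category, or None if the group is too small to rate."""
    if len(baseline_scores) < MIN_N or len(baseline_scores) != len(end_scores):
        return None  # too few students (or mismatched rosters) to rate defensibly
    avg_gain = sum(e - b for b, e in zip(baseline_scores, end_scores)) / len(baseline_scores)
    if avg_gain >= 1.25 * target_gain:
        return "higher than expected"
    if avg_gain >= 0.75 * target_gain:
        return "expected"
    if avg_gain >= 0.50 * target_gain:
        return "lower than expected"
    return "much lower than expected"

# Example: a class of 20 students with a hypothetical target gain of 10 scale-score points.
baseline = [412, 398, 405, 420, 431, 388, 400, 415, 409, 402,
            395, 418, 407, 399, 410, 425, 390, 403, 417, 408]
end_of_year = [s + 9 for s in baseline]  # every student gains 9 points
print(categorize_growth(baseline, end_of_year, target_gain=10))  # -> "expected"
```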
 
