To measure the effectiveness of this process, Arts Impact collects assessment data on a teacher-by-teacher and student-by-student basis.
During the Summer Institute, Arts Impact collects performance-based assessment results of teacher learning for each lesson to measure how successfully knowledge transfers from Artist Mentors to teachers. In addition, teachers assess themselves on a sample of the lessons. This assures inter-rater reliability of the assessment instrument and confirms that teachers accurately understand the criteria.
During the school year, Arts Impact collects performance-based assessment results for every student in the program to measure how successfully knowledge transfers from teachers to students. Throughout the mentorship, students participate in four to five arts or arts-infused lessons, each of which includes assessment. Arts Impact collects assessment data from the final, teacher-written lesson of the mentorship. Artist Mentors and teachers score students independently, then compare scores to ensure reliability and validity.
Teacher & Student Learning Assessment Strategies
The following performance-based assessment strategies are modeled and implemented throughout the program for both teacher and student learning:
- Criteria-based checklists
- Criteria-based rubrics
- Peer assessment
- Responding to the work of others
- Evidence of learning: art works, performances, presentations, photographs, video
The Autonomy Rubric for Teachers (ART) measures teacher autonomy in teaching and infusing the arts through 22 strands across three categories: planning, teaching, and assessing. In the first year of training, Artist Mentors use the rubric during final teacher conferences to highlight teacher strengths and areas for improvement. In the second year of training, teachers also complete the ART as a self-evaluation tool.
Additionally, Arts Impact tracks growth in how frequently, how confidently, and in what ways teachers use the arts. This data is gathered through periodic teacher surveys that combine rating scales with open-ended responses.
Program Effectiveness & Fidelity
Arts Impact seeks feedback from all its constituencies to meet the ever-changing arts education needs of students, teachers, and schools. Data gathering tools and embedded program practices both support program quality.
Teachers & Administrators
Surveys and focus groups provide qualitative data on the effectiveness of each of the training components. This information guides Arts Impact to incorporate improvements in program implementation, resources, and evaluation.
Tracking teacher completion rates for each program component verifies program fidelity and ensures that teachers complete all elements.
Report-out meetings held after Summer Institutes and Mentorships provide the Artist Mentors, Arts Curriculum & Assessment Advisor, and administrative staff with the opportunity to reflect together on program implementation, data, and assessment results. These sessions often lead to program improvements.
Annual professional development for Artist Mentors and staff includes topics that inform the overall teaching practice of the Arts Impact staff. Recent staff professional development workshops include Cultural Competency and Relevance, elementary and middle school math, teaching English Language Learners, Theory of Change, and Writer’s Workshop. These professional development days also provide opportunities for team building and collegial networking.
Collegial sharing of knowledge and practice is a key component of Arts Impact Artist Mentor teaching. The curriculum is written by the Artist Mentors, vetted within their discipline teams, and taught by every Artist Mentor. This practice assures clarity and transparency, which helps classroom teachers to successfully implement the curriculum as well.
Educators from each of the cultural partner organizations participate in report-out meetings, annual professional development, and program planning to ensure that their feedback and input are included in the evaluation of program effectiveness.