My interest in the Cynefin Framework grew out of my work, first as an instructional coach, where I recognized the complexity of teaching reading and writing to diverse learners in classroom settings, and later as a district technology coordinator, where I saw the complexity of integrating educational technology in different school settings. It wasn’t until I worked at Rangeview High School in APS, though, that Snowden’s criticism of surveys made sense to me. Our school principal, Ron Fay, would discuss with me the futility of trying to interpret results from the state of Colorado’s lengthy statistical Teaching and Learning Conditions Survey (TLCC).
In the clip above, excerpted from his address “How leaders change culture through small actions” for Academi Wales, the public center for leadership and management excellence in Wales, Snowden picks apart all of the possible angles to answering one of IBM’s survey questions. In his critique I hear my principal’s frustration with the questions his teachers are asked to answer on the TLCC to assess the climate and culture of Rangeview High.
Snowden responds to the question, “Does your manager consult with you on a regular basis?” by pointing out the complex reality of the job he held at IBM.
“I’ve got several managers. Sometimes they consult me, sometimes they don’t. Sometimes they should, sometimes they shouldn’t,” he says.
Similarly, teachers in Colorado are asked whether they agree with the statement, “Teachers are provided with informal feedback to improve their instruction.”
At my school, this feedback might come from students, other teachers, instructional coaches, district support staff, or administrators. While teachers are likely to assume the question is about administrative feedback, teachers work with administrators in a variety of settings and dialogue with them in ways that lead to feedback about their instruction. As with the question asked about IBM management, the reality of the school system renders the data generated by the question ambiguous, if not meaningless. To make sense of why 65% of respondents answered positively to this question at my school, leaders first have to determine how teachers interpreted a vague question. When you consider that the survey has roughly 140 questions, many of which are similarly ambiguous, you realize that the data collected to measure teaching and learning conditions, far from being actionable, is just the beginning of a fact-finding mission for school leaders who want to develop action steps to improve culture.
It is telling that along with the data, the state provides schools with multiple pages of do’s and don’ts so they won’t be tempted to treat this data as informative or actionable. My favorite cautions are pictured below.
I’ve circled the “Do” statement that encourages school leaders to have conversations about the statistics the survey generates in order to develop action steps, as well as the “Don’t” statement that says the data isn’t really meant to be used at the school level. It is clear that this data system isn’t designed to be actionable or to have meaning that can be readily determined by stakeholders. Instead, it is a survey designed by data scientists that is best used for system-wide analysis by those same experts.
In my experience, this type of data is presented to classroom teachers and school leaders in contexts where they are likely to misinterpret the statistics within.
The benefit of gathering stories is that our skill at making meaning from them has been developed over a lifetime of trading in stories. The data we collect using Cognitive Edge’s SenseMaker software generates statistics that add depth to stories, rather than statistics in need of stories.