Clubs: Learning about testing

For testing clubs this first quarter, we followed this process:

  1. Invite testers. We talked to allies about the opportunity and invited them to join the testing process. Each tester was given the dedicated support of a staff member to ensure they had direct and regular contact with the project.
  2. Kickoff call with testers. We initiated testing with a community call, which we continued to host fortnightly as an important check-in and reflection point. We used Vidyo and Etherpad for the calls.
  3. 1:1 Interviews. To better understand our allies' needs, we conducted 40+ interviews with them. We collated and analyzed the data, which greatly informed our efforts.
  4. Affiliate comparison. In parallel, we also reviewed 10+ other organizations that have a club model or another form of local group organizing. This review gave us best practices to learn from.
  5. Curriculum curation. The testing process was two-part: curriculum curating and curriculum testing. To curate, we developed a curriculum arc (Reading the Web, Writing the Web, and Participating on the Web) and then sought existing activities to fill that out. Where there were gaps, we created or remixed new activities. This work was done on GitHub to great effect.
  6. Curriculum testing. Every two weeks, our testers were invited to try out the latest curriculum section. We shared reflections and questions in Discourse and used our fortnightly check-in call to discuss our experience and feedback on the sections.
  7. Assessment is hard. We know how important benchmarks are: we want to know how effective the curriculum is. We created brief questionnaires in Google Docs and made them part of the testing process, but response rates were low. This continues to be a challenge. How can we do friction-free assessment?
  8. Partner cultivation. As the testing was going on, we also drafted a partner engagement plan. What organizations would be ideal partners for clubs? What are we offering them, and how do we want to engage them? Next quarter we will put this plan into action with a number of wonderful organizations.
  9. Website development. We also discussed with testers what they need from an online platform to showcase and connect this initiative. The first version of this new website will go live in April.
  10. Reflect early, reflect often. Throughout this quarter, we had conversations with testers, colleagues and other partners about this process. We constantly adjusted and improved. This is an essential practice. Going forward, I anticipate continual reflection and iteration as we develop clubs collaboratively and in the open. It was very beneficial meeting the team in person for several days of planning. I hope we can do that again, expanding to regional coordinators and testers, next quarter.
  11. Get out of the way. Once the framework is set up and a team is in place to support testing, it’s important to get out of the way! Smart people will innovate and remix the experience. Make sure there are ways to encourage and capture that. But allow beautiful and unexpected things to emerge, like Project Mile.

If you participated in this round of testing, or have related experiences, we’d love to hear your thoughts on the process!

2 comments

  1. Greg McVerry @jgmac1196 · March 13, 2015

    Michelle,

    First off, I'm jealous of the font family. I like the soft edges.

    You ask about friction-free assessment. It doesn’t exist. We would be chasing a unicorn named Oxymoron if we spent too much time looking for friction-free assessments.

    As Dan Hickey likes to remind us, the introduction of assessments fundamentally alters the motivation for learning. Yet we need to assess learning.

    So I guess we shouldn’t look for friction-free assessments but well-oiled ones, and I always thought badging was the major mechanism for ensuring evidence of learning was collected.

    I have been fascinated to watch the diverse perspectives on assessment in Mozilla Learning. You have analytics and design teams who will run a statistical test on a hex color A/B test to increase unique visitors (not sure why), and yet we support the largest open badging platform.

    The short (but not easy) answer: “Choose assessments that align with your learning goals and philosophies.”

    The three open-ended questions we asked measured the mentors’ expectations and biases about what was learned more than anything else. The analysis of these questions would also be quite time-consuming. For example, how does the question, “What are the top three strategies you think will be used by your learners to see if information on a website is credible?” capture growth? Would you count up the frequency of different strategies and run statistical tests to see if the top three strategies changed?

    If you want the assessment to be fast, you have to use Likert scales, have enough items, and then treat your ordinal data like numerical data (a step many in measurement disagree with).

    If you want the assessment to be fast and reliable, you would have to spend anywhere from 10K to millions to develop measures that act like traditional tests. This could be done for reading and writing, but participation would be hard. Furthermore, many in the connected learning camp would argue that these assessments measure very little (Ian and I do have credibility assessments, and UCONN has made its online reading and research assessment available).

    Align assessment strategies with our philosophy, and it may become evident that the only metric that matters is the number of makes submitted. Then encourage club mentors to have club members submit evidence for badges.

    Then, to ensure the credibility of badges, you could audit a random set of submissions and the evidence included with the credentialing. We have to ensure that the badges get external recognition. I don’t think a sampling of badge applications would involve that much more work than the coding and analysis of three open-ended questions.

    The problem I see with badges and the curriculum is that I, as an issuer, do not feel the activities lead to a preponderance of evidence that would leave me comfortable issuing a badge. It may take multiple activities.

    We may need more lightweight or level-up badges that mentors can give (but only the final web literacy badges go to the backpack?).

    Instead of friction-free assessments on a global scale, we will need to train mentors, especially the majority who do not come from education, in the principles of formative assessment. They need to know how to take the learning goal, teach the curriculum that elicits evidence of growth towards that goal, analyze that evidence while they facilitate learning, and then adjust instruction. If you can figure out easy ways to teach these better practices, please tell me so I can steal them.

    It is up to us as a community to ensure the badges have value and weight. It is up to us to ensure they are baked into the ecosystem and allow for individual learning pathways.

  2. Pingback: Friction Free Assessment when Tests are the Sandpaper of #TeachTheWeb Learning