Throughout my time at Rocketship, I have been fortunate both to teach in our schools and to work in a network support role. Now, as I coach school staff in the implementation of personalized learning, I get to thought partner with educators as they push and grow their practices, just as I did in my classroom. When going through cycles of reflection, I always think back to how the Lean Startup approach applies directly to the way teachers continually iterate within their classrooms. The methodology of The Lean Startup, codified by Eric Ries, encourages innovators to build in small chunks and iterate rapidly, just as teachers must with every student, every day.
Whether you’re running an official pilot at your school, thinking about how to improve your lessons or troubleshooting ongoing engagement issues with a particular student, these startup concepts apply. You may never write a line of code, but you are building lesson plans, class routines and an effective learning environment for your students.
Hypotheses

At the root of every innovation or experiment is a hypothesis. Based on prior knowledge and an identified need, you have a hunch that your idea will improve a specific outcome. Make sure your hypothesis is hyper-focused, grounded in a specific need and framed so it will lead you to a clear conclusion.
Let’s say Mr. Smith notices that his 3rd grade students are having trouble mastering their math facts. His class practices and assesses their mastery each week, but he’s not seeing enough growth. Their usual chants and drills just aren’t working, and students are too disengaged by the repetitive activity to excel. However, he has heard about a new online tool that seems promising for student engagement. His hypothesis: if my students use an online, adaptive program to practice their math facts, then they will be more engaged and will master their math facts at a faster rate.
Minimum Viable Products (MVPs)
Instead of spending countless hours developing a rollout plan and accompanying routines, innovators should create a minimum viable product – a barebones version that lets you get feedback as soon as possible. This way, a designer quickly figures out what works and what doesn’t, and begins “tuning the engine” immediately.
In Mr. Smith’s case, he decides to get his students started on the fact fluency program right away. Instead of redesigning his entire class structure, he simply replaces his math fact fluency time with the online tool, spending just enough time reviewing expectations and structures to get students started.
Split-Tests (A/B Tests)

To collect actionable data, startups use split-tests (also known as A/B tests): they apply the hypothesis to one group while maintaining prior conditions for a control group. This allows direct comparison between the two groups and yields quick, actionable data.
To ensure he can assess efficacy right away, Mr. Smith tries the online program with only one of his classes. Without changing any other component of his class (aside from the math fact fluency portion), he can compare progress between the two classes. Since his classes are similar in student composition and achieve similar academic scores, Mr. Smith will be able to see clearly whether the program really does improve student engagement.
Validated Learning

In an age of innovation, it’s easy to get lost in the wrong measures of growth. Innovators of all types should focus on validated learning: measuring progress through user validation. We must focus on actionable metrics that genuinely measure the pilot’s success with users, rather than vanity metrics that create a false impression of success.
When measuring progress in his pilot, Mr. Smith chooses to focus on student engagement (his actionable metric), which he expects will lead to his ultimate goal of student growth. He deliberately avoids focusing on math fact mastery at first (here, a vanity metric), because disengagement is the core of the mastery problem, and engagement is a prerequisite for student achievement.
After a couple of weeks, Mr. Smith surveys students to measure their interest and engagement in both forms of math fact practice. He finds that the class using the online program mastered their facts only slightly faster than the other class, but showed much higher engagement and eagerness to continue practicing their facts.
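For readers who track pilots in a spreadsheet or script, a split-test comparison like Mr. Smith’s can be sketched in a few lines. This is a minimal illustration with invented survey numbers (the scores, scale and class sizes below are assumptions, not data from the article), showing how to summarize each group and estimate the difference in mean engagement:

```python
# Hypothetical data: engagement survey scores (1-5 scale) collected after
# a couple of weeks. These numbers are invented for illustration only.
from statistics import mean, stdev

pilot_engagement = [4.2, 4.5, 3.9, 4.8, 4.4, 4.1]    # class using the online tool
control_engagement = [2.9, 3.2, 2.7, 3.5, 3.0, 3.1]  # class keeping chants and drills

def summarize(label, scores):
    """Print the mean and spread of one group's engagement scores."""
    print(f"{label}: mean={mean(scores):.2f}, stdev={stdev(scores):.2f}")

summarize("Pilot", pilot_engagement)
summarize("Control", control_engagement)

# A simple effect estimate: the difference in mean engagement between groups.
effect = mean(pilot_engagement) - mean(control_engagement)
print(f"Engagement difference (pilot - control): {effect:.2f}")
```

With a small pilot like this, the raw difference in means is only a rough signal; a larger sample or a formal significance test would be needed before drawing firm conclusions.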
Build, Measure, Learn

Altogether, this learning process forms the build-measure-learn loop, which emphasizes rapid testing and iteration to continually improve. Based on what he learned from the experiment, Mr. Smith decides to persevere with the product rather than pivot away, implementing the program across all of his classes. However, he soon finds that the subset of students below grade level is still disengaged. His next hypothesis: if I temporarily reduce the difficulty of online fact practice for disengaged students, then they will experience success and their engagement will improve.
At the end of the day, this process isn’t actually revolutionary; people have been problem solving and innovating for years. What the framework gives us is a common language for sharing our progress and a structure that makes innovation rapid.
Stephen is a part of Rocketship’s Achievement Team, overseeing the implementation of online learning and driving the personalized learning initiatives throughout the network. Previously, Stephen was a 5th grade Integrated Math teacher at Sí Se Puede Academy, where he first began his work leveraging personalized learning strategies to improve outcomes for all students. Stephen grew up in Southern California and attended the University of California, Los Angeles where he studied biophysics. After graduating, he joined Teach for America and began his career at Rocketship. Stephen now lives in San Francisco and enjoys discussing all things technology, trying all types of foods, traveling and spending time outdoors with friends.
Follow Stephen on Twitter: @stephenqpham