(TNS) — Digital learning tools that fit well within existing classrooms and don't disrupt the educational status quo tend to be the most widely adopted, despite their limited impact on student learning, an analysis of ed tech products designed for higher education concludes.
Experts say that pattern is also reflected in K-12, raising tough questions about whether many ed tech vendors' emphasis on quickly bringing their products to scale is actually hampering the larger goal of improving schools.
"There is a lot of research showing that more comprehensive technology interventions tend to have more positive results in both sectors," said Barbara Means, the director of the Center for Technology in Learning at SRI International, the nonprofit research center that conducted the new analysis. "To create an education technology tool that can have an impact, but also be adopted in many classrooms, requires thinking about supports for teachers, resources for instruction, and rethinking the way time is used within schools."
Those conclusions are drawn from a fresh analysis of data SRI gleaned while evaluating the effectiveness and growth curves of 29 digital learning tools funded by the Bill & Melinda Gates Foundation in 2010. The products included complete online courses, peer-support platforms, and predictive analytics tools. Most had no statistically significant impact on student outcomes. But the number of users that each product attracted varied widely, from as few as 181 to as many as 130,000.
The SRI researchers found some evidence that when it comes to ed tech, effectiveness and scale may actually be inversely related: The more effective the tool, the smaller the scale at which it was adopted, and vice versa.
They also identified three common factors among those products that scaled most rapidly: a promise of cost savings for schools, no requirements for face-to-face training, and an ability to be easily integrated into existing teaching and learning practices.
Those traits reflect the dominant Silicon Valley business approach of seeking to quickly gain as many users as possible — a strategy that Means described as particularly ill-suited for schools.
"Equating usage with value is fine for a consumer product that users are spending their own time on," Means said. "But students are not volunteers, and we're devoting instructional time to products when we don't know whether they work or not."
The digital learning tools analyzed by SRI were awarded Next Generation Learning Challenge grants by the Gates Foundation in 2010. The foundation also funded SRI as an independent contractor to track and evaluate the products' progress over the following two years. (The Gates Foundation also helps support Education Week's coverage of the implementation of college- and career-ready standards and the use of personalized learning.)
In 2012, SRI prepared an internal report for the foundation and for EDUCAUSE, a nonprofit that promotes technology use in higher education and hosted the learning-challenge grant competition.
More recently, the researchers revisited the data to examine the relationship between scale and impact. The resulting paper was presented at the annual conference of the American Educational Research Association in Washington last month.
Andrew Calkins, the deputy director of the Next Generation Learning Challenge, said he took SRI's findings to heart. Beginning around 2013, Calkins said, his group switched its focus from supporting ed tech tools to funding schools and universities willing to embrace new organizational models and new approaches to teaching and learning.
"Practitioners [in traditional schools] find it easier to adopt technology tools that readily fit within their existing models," Calkins said. "That's why tools and platforms that demand a lesser degree of disruption might have found greater purchase in the marketplace."
SRI classified the ed tech tools it studied into five categories, ranging from complete online courses to predictive analytics tools.
Some of the products sought to achieve scale through a top-down strategy dependent on institutional commitments. Others used a "retail" approach, going directly to university instructors or students.
The tools' impacts on student learning were generally measured using a comparison group, based on the outcome measures (such as assessments, course grades, and course completions) determined most suitable and feasible by the researchers.
On average, SRI found, the products that involved whole-course redesign and required institutional buy-in were the most effective.
The biggest impact, for example, was made by U-Pace, a self-paced introductory psychology course developed at the University of Wisconsin-Milwaukee. The course involved online lessons, embedded quizzes, and a host of supports for faculty members (such as an online training module and templates for providing feedback).
A similar dynamic is at work in K-12, where effective technology implementations often look quite different from successful consumer-product rollouts, said Jean Hammond, a co-founder and partner at LearnLaunch, a Boston-based nonprofit that invests in and supports ed tech companies.
"They're doing organizational-behavior change," Hammond said of the best school-technology initiatives, "rather than just asking if people like [the technology] and use it."
In the years since the original SRI study was conducted, the K-12 market has evolved considerably, said Sara Allan, the deputy director of K-12 programs at the Gates Foundation.
There has also been a general shift away from the kind of comprehensive, all-in-one solution that the SRI researchers found to be most effective. Only a few large, more established companies have the resources and capacity to develop such products, then wait out K-12 schools' glacial purchasing cycles. And some of the higher-profile initiatives, such as the complete K-12 digital curriculum that Pearson sold to the Los Angeles Unified district, turned into major flops.
The result, Allan said, is that educators increasingly look to curate a variety of technology products and services from multiple sources.
On the K-12 side, that presents numerous challenges: Can the various tools and platforms "talk" to each other? Do they reflect the same pedagogical philosophies? How do you ensure consistency from classroom to classroom and school to school?
Fail to resolve those questions, and schools end up with a hodgepodge in which the effectiveness of any one tool is limited by the confusion in the broader ecosystem.
That means new challenges for developers and vendors, too.
For one, it can be quite complicated to measure an individual tool's impact on learning amid such a complex environment.
Many startup companies also struggle to find traction beyond an initial group of highly motivated early adopters, said Hammond of LearnLaunch.
The companies that are often most successful in this new landscape, said Allan of the Gates Foundation, are those that start by going directly to a relatively small cadre of educators; use those relationships to gather feedback, improve their products, and demonstrate demand and effectiveness; and then work with institutional leaders to make sure their tools are also integrated into districts' systemwide instructional models and purchasing plans.
Such a strategy reflects both business and academic realities, Allan said: Companies need to be able to quickly show investors that their products are being used, then follow that up by rapidly showing evidence of positive impact.
It's a tension that Means of SRI hopes both schools and vendors come to grips with soon. The key, she said, is recognizing that any new technology needs to come wrapped in a host of supports if it's going to make a deep, lasting difference inside schools.
"Just because a ton of people use YouTube to learn all kinds of things doesn't mean we should base all 7th grade life-sciences courses on online videos," Means said. "Someone could probably use YouTube to create a great course, but it would take a lot of work to make it fit the needs of teachers and students."
©2016 Education Week (Bethesda, Md.), distributed by Tribune Content Agency, LLC.