Tuesday, August 6, 2024

New study explores what makes digital learning products more – or less – effective

Research led by Stanford Professor Rebecca Silverman analyzes studies on edtech interventions for early reading skills.

by Carrie Spector

A new study finds that the effectiveness of edtech products for early literacy varies considerably, depending on particular features of the interventions and the skills they target. Photo: Shutterstock


Educational technology has become a fixture in the U.S. classroom, but scholars continue to debate its effectiveness – some even arguing that the products might detract from learning by taking students’ time and attention away from more powerful supports.

What does research show about the effectiveness of edtech? Does the impact vary when it comes to teaching certain skills and student populations? How can schools determine which products are most useful for their own setting and purposes? 

A new Stanford-led study sheds light on the value of edtech interventions, with a focus on products aimed at helping elementary school students develop early reading skills. In a meta-analysis of studies conducted over the past two decades, the researchers found that the effectiveness of tech products varied considerably, depending on particular features of the interventions and the skills they targeted. 

“When we talk about digital learning products, they’re really not all the same – there’s a wide range,” said Rebecca Silverman, the Judy Koch Professor of Education at Stanford Graduate School of Education (GSE), a faculty affiliate of the Stanford Accelerator for Learning, and the study’s lead author. “There isn’t a single answer to whether digital technologies support literacy. The question is much more complex: Which products, with which characteristics, under which conditions?” 

The paper, published July 31 in the peer-reviewed journal Review of Educational Research, was co-authored by Elena Darling-Hammond, a doctoral student at the GSE; Kristin Keane, a postdoctoral scholar at the GSE; and Saurabh Khanna, PhD ’23, who is now an assistant professor at the University of Amsterdam. 

Stanford Accelerator for Learning Faculty Affiliate Rebecca Silverman. Photo: Ryan Zhang

Accounting for variability

For the meta-analysis, the researchers drew on 119 studies published between 2010 and 2023 to examine the use of various digital interventions in kindergarten through fifth grade, including computer programs, e-books, online games, and videos. 

The study is unique, they said, in its focus on edtech at the elementary school level and its review of interventions across four skills: decoding (the ability to read words quickly and accurately), language comprehension (understanding the meaning of words), reading comprehension (processing the meaning of a passage), and writing proficiency (the ability to convey ideas in writing).

Their analysis found positive effects on elementary school students’ reading skills overall, indicating that generally, investing in educational technology to support literacy is warranted. But when the researchers isolated particular learning outcomes to measure effectiveness, they found wide variability, suggesting that the effectiveness of a particular edtech product can depend on different factors, including features of the tool and characteristics of the users.

The authors observed that most studies – and the majority of products in the marketplace – focused on basic decoding, where students use phonetic skills to understand the relationship between written letters and their sounds. Relatively few studies considered language and reading comprehension, and only a handful looked at writing proficiency. 

“Decoding is a fairly constrained construct involving a relatively circumscribed set of skills,” Silverman said. “There are only so many letters and sounds and letter-sound combinations that kids need to learn, so it’s generally easier to teach and see change over time.”

Language comprehension is a more complex construct, she said, involving a vast number of concepts, word meanings, and sentence constructions and the ability to make connections and build knowledge. “Its complexity makes it harder to teach and see progress. But it’s a crucial skill to be able to access texts and content, so we need more tools and research focused on that piece.” 

Product features that appeared to account for some of the variability in effectiveness included the type of technology, the duration of the intervention, and the instructional approach (that is, whether it emphasized repetition and facts, strategies to organize and process information, or open-ended tasks). 

The analysis found, for example, that certain personalization, gamification, and interactive feedback features, like pop-up questions and clickable definitions, were not effective for supporting more complex skills like reading comprehension.  

Where student characteristics were concerned, socioeconomic status surfaced as one factor moderating effectiveness: with decoding as an outcome, for example, studies with a substantial percentage of students from low socioeconomic backgrounds tended to show larger effects than other studies, which Silverman said could be because the programs those students used were more closely geared to their needs.

The researchers suspected that disability and language status would also emerge as factors in the variability they uncovered, but few studies disaggregated findings by these characteristics.

“A program might not benefit some kids as much as others, and if we don’t track that in a systematic way, we’re not going to know,” Silverman said. “Right now, it’s not being systematically captured in the research, and that’s a problem.” 

The researchers also noted that few studies addressed edtech’s impact on students’ motivation or engagement, and few included follow-up over time, to assess whether the effects lasted months or even years after the intervention. 

Considerations for school leaders

The findings point to several directions for educators and policymakers, the researchers concluded. For one thing, Silverman said, districts contemplating a particular product should carefully consider whether it’s appropriate for their population of students, and whether the content and approach align with the curriculum and classroom teaching.

She advised that, rather than taking marketing claims at face value, districts conduct a critical analysis of any program before deciding whether to adopt it for their schools. “Is it following the principles of effective practice for the skills you’re targeting with that program?” she said. “What studies have been done on it? How strong is the company’s own research? Has anybody done any independent research?”

Districts can also generate their own data, for example, by running a pilot program in which some schools or classrooms implement an edtech intervention, then comparing their outcomes with those of schools that don’t. “You may not be able to isolate [the effects of the program] completely,” Silverman said, “but an analysis can suggest whether this product is helpful.”
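For readers curious what such a pilot comparison might look like in practice, here is a minimal sketch – not drawn from the study itself – of how a district analyst could compare scores from pilot classrooms against comparison classrooms. All scores and variable names below are hypothetical placeholders, and a real analysis would need to account for confounds such as prior achievement and demographics.

```python
# Hypothetical sketch: comparing reading scores from pilot classrooms
# (using an edtech product) with comparison classrooms (not using it).
from statistics import mean, stdev
from scipy.stats import ttest_ind  # assumes SciPy is installed

# Hypothetical end-of-year reading scores (placeholder data)
pilot_scores = [72, 68, 75, 80, 71, 77, 69, 74]
comparison_scores = [70, 65, 73, 68, 72, 66, 71, 69]

# Welch's two-sample t-test: does not control for confounds,
# so results should be treated as suggestive, not conclusive.
t_stat, p_value = ttest_ind(pilot_scores, comparison_scores, equal_var=False)

# Cohen's d as a rough effect-size estimate (pooled standard deviation)
pooled_sd = ((stdev(pilot_scores) ** 2 + stdev(comparison_scores) ** 2) / 2) ** 0.5
cohens_d = (mean(pilot_scores) - mean(comparison_scores)) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```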

If a product doesn’t appear to produce positive effects, districts can partner with researchers to try to figure out why – or move on to other tools and evaluate those instead, she said. “We don’t want kids to keep using products that aren’t helpful.”