May Digest

Przybylski & Weinstein, 2017: A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital-Screen Use and the Mental Well-Being of Adolescents

Screen time is a big topic and there’s a lot of fear-mongering surrounding it. This large-scale study (n ≈ 120,000!) confirms what many of us probably already intuit: screen time is not per se bad for children and adolescents. Indeed, its relationship to well-being seems to be non-linear and nuanced. Non-linear, in that screen time up to a certain threshold actually shows a positive relationship with well-being before it turns sour. Nuanced, in that these thresholds differ across activities (TV, games, etc.) and between weekdays and weekends. For example, the amount of smartphone screen time where things turn bad is around 2 hours on weekdays and around 4 hours on the weekend. Confirming the Goldilocks notion, screen time below these thresholds is positively associated with well-being. This ties in nicely with recent research by Amy Orben showing small and similarly nuanced effects. She talks about it on the Everything Hertz podcast and you can find her recent studies here, here, and here.
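If you want to picture that Goldilocks shape, here is a toy sketch. The coefficients and the quadratic form are invented to mimic the inverted-U pattern described above, not the paper’s actual model estimates:

```python
# Illustrative only: a made-up quadratic that peaks at the "just right"
# amount of screen time (e.g. ~2 h smartphone use on weekdays, ~4 h on
# weekends, per the thresholds the paper reports).
def wellbeing(hours, peak=2.0):
    """Toy well-being score: rises up to `peak` hours, then declines."""
    return -(hours - peak) ** 2

# Weekday smartphone use with a hypothetical 2-hour sweet spot:
weekday = [round(wellbeing(h, peak=2.0), 2) for h in (0, 1, 2, 3, 4)]
print(weekday)  # [-4.0, -1.0, 0.0, -1.0, -4.0]
```

Moderate use scores highest, while both abstinence and heavy use sit lower, which is the non-linear pattern the authors describe.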

Ellis & Goodyear, 2016: Models of learning space: integrating research on space, place and learning in higher education.

Space is a popular trope in education research, but what does it actually mean? This big ol’ literature review surveys learning spaces, physical and virtual, and how they are connected with learning activities. The authors find a fragmented literature that has been slow to progress. They put in a lot of work trying to connect the pieces, with, in my opinion, moderate success. Despite their efforts, I found the paper a bit unwieldy, and I’m still somewhat confused as to how all the pieces fit together. Generally, I find the subject fascinating because I’m convinced that behavior (as in student learning activities) is to a degree influenced by how an environment is designed (as in its features, look & feel, and implicit learning theories). Although long-standing in areas like architecture and ecological psychology, these ideas haven’t really carried over much into education research or educational technology. One exception is Karel Kreijns’ work. Unfortunately, some of the more recent investigations of affordances in the realm of educational technology miss the point, as Aagaard (2018) points out.

Slate Star Codex, 2019: 5-HTTLPR: A POINTED REVIEW

“This isn’t a research paper. This is a massacre.” Now if that doesn’t pique your curiosity, what does? In this blog post, SSC’s Scott Alexander first quickly reviews research on a gene called 5-HTTLPR and its role in depression. There are literally hundreds of studies showing effects and several meta-analyses confirming these findings. But of course, as we’ve learned from psychology (e.g., ego depletion and social priming), that doesn’t necessarily mean much. Whole fields of research, with theories formulated, models estimated, and careers built, can amount to nothing as long as publication bias and small samples are present. Alexander then goes on to tell us about a new monster of a study with a preregistered protocol that tested most of the literature’s hypotheses at once (!) and, surprise, found nothing. It’s all fun and games until large sample sizes and preregistration come along and ruin everything.

Funder & Ozer, 2019: Evaluating Effect Size in Psychological Research: Sense and Nonsense

This is my favorite of the month. Ever wonder what your effect sizes actually mean? And I don’t mean looking at a regression line or two slightly offset bell curves. How do effect sizes translate to lived experience? In this article, the authors attempt to calibrate our intuitions about effect sizes. Today, hardly any quantitative paper can do without reporting effect sizes, but often enough our interpretation is limited to Cohen’s (1988) classic distinction between small, medium, and large effects. As Cohen himself said (and I think we all know), this really doesn’t cut it: without a frame of reference, these benchmarks are useless. Funder & Ozer do a good job of guiding our intuitions to better grasp the implications of seemingly small effects. For example, they tell us that the pain-relieving effect of ibuprofen corresponds to r = .14, and they explain how a tiny effect of r = .05 of agreeableness on successful interactions can have huge implications in the long run, simply because we interact with others so frequently. After some more great examples, they conclude with recommendations for how to get to better effect size benchmarks. Read this one!
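The accumulation argument is easy to make concrete. Here is a small simulation of my own (the setup, base rates, and interaction count are my assumptions; only r = .05 comes from the paper): treating agreeableness and interaction success as binary with 50/50 base rates, a correlation of r = .05 corresponds to success probabilities of .5 ± r/2.

```python
import random

# r = .05 from Funder & Ozer's agreeableness example; everything else
# here (binary coding, 50/50 base rates, 10,000 interactions) is an
# illustrative assumption of mine.
r = 0.05
p_agreeable = 0.5 + r / 2      # 0.525
p_disagreeable = 0.5 - r / 2   # 0.475

def successes(p, n_interactions, rng):
    """Count successful interactions out of n_interactions."""
    return sum(rng.random() < p for _ in range(n_interactions))

rng = random.Random(0)
n = 10_000  # a few years' worth of everyday interactions
gap = successes(p_agreeable, n, rng) - successes(p_disagreeable, n, rng)
print(gap)  # expected gap ≈ r * n = 500 extra successful interactions
```

Per interaction the difference is invisible (52.5% vs. 47.5%), but over thousands of interactions the agreeable person racks up hundreds more successes, which is exactly the paper’s point about small effects compounding.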

