Currently reading

I want to read more research. To hold myself accountable, starting September 2020, I will be making brief notes on the research I’ve read, e.g. what I liked about the paper/book, what I learned from it, and some of the questions that arise. Papers/books I loved will be indicated with an *

01/03/21
Henrich (2020). The WEIRDest People in the World.
Incredibly comprehensive work on how the psychology and culture of “the West” were shaped. A spiritual successor to Diamond’s “Guns, Germs, & Steel”, providing missing pieces of the explanation of how we, psychologically, came to be as we are today. Truly eye-opening to understand how peculiar and extreme we are compared to history and other regions of the world.
07/01/21
Pearl (2019). The seven tools of causal inference, with reflections on machine learning. Communications of the ACM
An easy and brief introduction to Pearl’s Ladder of Causation and his general argument that there are principled roadblocks on the way to artificial intelligence with robust (that is, generalizable) reasoning abilities.
18/12/20
Elwert & Winship (2014). Endogenous Selection Bias. Ann Rev Soc.
Great and comprehensive overview of issues of selection bias in causal inference. A bit dry and abstract at points; some more practical examples would’ve helped greatly. Although it is written in the context of Sociology, it readily applies to Educational Technology and related fields.
08/12/20
Jivet et al. (2018). License to evaluate. Preparing Learning Analytics Dashboards for Educational Practice. LAK18
Provides a review of the research on LA dashboards. Shows that the intended goals of the research do not always map onto the actual evaluation. Gives a nice overview of which competences are usually targeted and of the data-gathering methods used to evaluate them. Also gives a brief overview of the effects of different frames of reference in dashboard feedback. Provides nice references for assessing typical methodologies and analyses in this sub-literature.
08/12/20
Viberg et al. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior
Another systematic review looking at the evidence base of LA in HE. In line with Dawson et al. (2018), the review shows that prediction is a key focus of LA. Also, the substantive use of theory in LA research is relatively low. Uses the Ferguson & Clow propositions to assess LA evidence and shows that evidence of improved learning outcomes is still low (but rising), evidence of LA being taken up widely is consistently low, as is evidence of ethical use. Support of teaching and learning is, however, widely represented.
08/12/20
Mangaroska & Giannakos (2018). Learning analytics for learning design: A systematic literature review of analytics-driven design to enhance learning. IEEE TLT
The authors review the important nexus of LA and LD. Although LA shows great promise in modeling student success/learning, research should not stop there. Instead, to actually fulfil the promise of enhancing learning experiences, a connection to LD is needed. IMO, this systematic review shows that research at this nexus is still relatively limited; much of LA research is not grounded in pedagogical models. “Closing the loop” may not be enough; what happens next is equally important and may not be addressed by LA alone. Theoretically justified design decisions based on data-driven analytics need *theory*.
08/12/20
Sonderlund et al. (2018). The efficacy of learning analytics interventions in higher education: A systematic review. BJET
Systematic review of LA interventions in HE. Demonstrates that, according to QATQS, LA intervention studies are predominantly of “moderate” quality. Apparently, aside from the data-driven analysis of student learning (e.g. students at risk of dropping out), the actual intervening based on it (e.g. approaches to alerting students) is not yet evidence-based. Also, putting the onus of behavior change solely on the student may be an issue, when other approaches may also be available (e.g. modifying the learning experience). In a way, this review seems to have a limited view of LA interventions, as targets such as learning strategies, SRL, CSCL performance, motivation etc. are not considered here, despite them being important DVs as well.
08/12/20
Dawson et al. (2019). Increasing the Impact of Learning Analytics. LAK19
Reviews research from LAK and JLA from 2011 to 2018 along five dimensions (e.g. focus, purpose, scope). Shows that much of LA research is concerned with exploratory research (build & predict) on a relatively small scale, although a path toward maturity as a research field should lead us toward evaluating large-scale implementations, increasing our understanding of learning, and feeding back into theories and models. The authors specifically challenge the community to address the complexity inherent to practical application (e.g. socio-technical systems). IMO, this might be a contradiction, because building a solid knowledge base requires rigorous research methods and a reduction of complexity, whereas later, practical application should situate this knowledge in light of practical complexity.
08/12/20
Bodily & Verbert (2017). Review of Research on Student-Facing […] Systems. IEEE TLT
Nice systematic review of research on student-facing systems. Points to some notable conceptual and methodological limitations of the literature.
07/12/20
Ifenthaler & Drachsler (2020). Learning Analytics. Handbuch Bildungstechnologien
An easily digestible, brief overview of the field of Learning Analytics, with an eye toward developments in Germany and at German institutions.
04/12/20
Miyamoto et al. (2015). Beyond Time-on-Task: The Relationship Between Spaced Study and Certification in MOOCs. JLA
The authors do an impressive job of analyzing a huge dataset to assess the effect of spaced learning in MOOCs on completion. They take several measures to account for confounding and analyze the data in several complementary ways to arrive at cautious causal inferences. The theory is a bit thin, and there is a theoretical chasm between the spacing effect of CogPsych and what is done here. The commentary on this article does a good job of illuminating this. Methodologically, this seems mostly sound, although I’m not sure I understood every step. Also, the last analysis, which appears to be the strongest causal argument, may have conditioned on a collider by selecting those students that enrolled in more than one course.
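The collider worry can be made concrete with a toy simulation (my own sketch, not from the paper; variable names are made up): two independent causes of multi-course enrollment become spuriously correlated once we select on that enrollment.

# Toy simulation of collider bias (hypothetical, not the paper's data or analysis).
import numpy as np
rng = np.random.default_rng(0)
n = 100_000
spacing = rng.normal(size=n)      # tendency to space study sessions
diligence = rng.normal(size=n)    # unobserved trait also driving certification
# "Enrolled in more than one course" as a collider: caused by both variables.
multi_enrolled = (spacing + diligence + rng.normal(size=n)) > 1.0
print(np.corrcoef(spacing, diligence)[0, 1])                                  # ~0 in the full sample
print(np.corrcoef(spacing[multi_enrolled], diligence[multi_enrolled])[0, 1])  # clearly negative after selection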
03/12/20
Khosravi et al. (2019). RiPPLE: A Crowdsourced Adaptive Platform for Recommendation of Learning Activities. JLA
The authors introduce the RiPPLE tool for recommending learning activities. Importantly, content can be crowd-sourced, thus circumventing the issue of high time demands for creating adaptive content. For now, only rather limited learning activities seem possible (MC questions, worked examples), but the tool seems reasonably easy to use and implement. The learner model and recommendations are based on an Elo rating system. The authors use propensity score matching to counteract selection bias in their assessment of RiPPLE’s efficacy. However, aside from GPA, I’m not sure that they chose relevant propensities for matching; I think hidden confounders are not addressed with these covariates. The additional subjective ratings are nice to have but not exactly rigorous, e.g. with respect to assessing relevant latent variables.
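Since the learner model is Elo-based, here is a minimal sketch of a standard Elo-style update (the generic textbook formula with illustrative parameters, not RiPPLE’s actual implementation):

# Standard Elo-style update for a learner-item interaction (illustrative only).
def expected_correct(learner, item):
    # Probability of a correct answer under the Elo/logistic model.
    return 1.0 / (1.0 + 10 ** ((item - learner) / 400.0))

def elo_update(learner, item, correct, k=32.0):
    # Shift ratings toward the observed outcome; the item moves the other way.
    delta = k * ((1.0 if correct else 0.0) - expected_correct(learner, item))
    return learner + delta, item - delta

print(elo_update(1200.0, 1000.0, correct=False))  # learner rating drops, item rating rises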
02/12/20
Ferguson & Clow (2017). Where is the evidence? A call to action for learning analytics. LAK17
Introduces the Learning Analytics Evidence Hub and the four central propositions or quality indicators. Makes the connection to Medicine and Psychology, where a lacking evidence base has been increasingly recognized and counter-measures are being taken. Reports a lack of hard evidence in the Evidence Hub and shows that even with seemingly strong findings, we should be cautious of the evidence. This is exemplified via the Course Signals study by Arnold & Pistilli (2012), where issues of confounding (among other issues) appear to explain the seemingly strong effects. Cites some interesting blog posts by Caulfield and Clow on the topic.
01/12/20
Rohrer & Lucas (2020). Causal Effects of Well-Being on Health: It’s Complicated. PsyArXiv
Talks about the difficulties of establishing a causal estimate of the effect of well-being on health, despite the wealth of studies claiming such effects. The main issue seems to be that there are many possible confounders, not all of which can be accounted for. Moreover, well-being cannot be experimentally manipulated without also manipulating other potential causes. Nice insights into the general issues that surround causal claims.
30/11/20
Gasevic et al. (2016). Learning analytics should not promote one size fits all. Internet and Higher Education
LMS trace data as well as student characteristics are collected from different courses to predict final grade and course pass/fail. Results show that the predictive power of these independent variables varies widely across the different courses (e.g. math, biology, graphic design), demonstrating that failure to account for these differences will lead to misinterpretation, especially when only the pooled full-size sample is used for inference. However, this study may fall prey to Simpson’s paradox, as data is collected from at-risk students. Thereby, the study may have conditioned on a collider, producing biased estimates.
23/11/20
Bligh & Lee (2020). Studies in Technology Enhanced Learning: A project of scholarly conversation. Studies in Technology Enhanced Learning
In this editorial, Bligh & Lee lay out the motivation for launching STEL and talk about the niche it will try to fill.
18/11/20
Bardach et al. (2020). Studying classroom climate effects in the context of multi-level structural equation modeling. International Journal of Research & Method in Education
The authors argue that certain constructs in educational research are best understood at the classroom level, for example classroom climate (i.e. achievement goal structures & social goal structures). Not appropriately accounting for the nested nature of such constructs arguably answers different questions and may lead to biased estimates. They demonstrate this by comparing doubly latent multi-level structural equation modeling with SEM using cluster-robust standard errors, with the latter yielding inflated effect estimates.
03/11/20
Scheiter et al. (2017). How to Design Adaptive Information Environments to Support Self-Regulated Learning with Multimedia. Informational Environments
An overview of research into adaptive systems with respect to multimedia learning. Highlights the challenges and assumptions embedded in adapting instruction to learners’ self-regulation. With this being only one of many possible learner variables to adapt to, well-functioning adaptive learning systems may still be quite far away.
29/10/20
Buder et al. (2017). The Role of Cognitive Conflicts in Informational Environments: Conflicting Evidence from the Learning Sciences and Social Psychology? Informational Environments
Nice summary of research on cognitive conflicts in education as well as psychology. Although there is an attempt at integrating these lines of research and a plausible explanation for the conflicting evidence, the actual integration remains brief and a bit superficial. Overall, really nice writing. Some nice exemplary references on research designs into cognitive conflict and behavior in social/digital media.
29/10/20
Hillmert et al. (2017). Informational Environments and College Student Dropout. Informational Environments
Investigates many different factors in relation to dropout intentions as well as actual dropout from a German university. Notable differences between intentions and behavior. Among the predictors of actual dropout are social integration, fairness of assessments, and online activity. Predictors are similar across subject groups as well as students’ performance levels.
28/10/20
Buder & Hesse (2017). Informational Environments: Cognitive, Motivational-Affective, and Social-Interactive Forays into the Digital Transformation. Informational Environments
Brief introduction to the collection of articles in Informational Environments. Makes the consequential distinction between cognition, motivation/affect, and social interaction with respect to knowledge creation in informational environments. Further distinguishes “effects of use” from “effective designs”.
20/10/20
Sanchez et al. (2020). Does temporal distance influence abstraction? A large pre-registered experiment. Preprint
A large preregistered experiment testing a central tenet of construal level theory: that temporal distance (tomorrow vs. one year from now) influences abstractness of representation. As expected, the effect size is somewhat smaller than in an earlier meta-analysis.
Good literature on CLT.
12/10/20
Riemer & Schrader (2020). Mental Model Development in Multimedia Learning: Interrelated Effects of Emotions and Self-Monitoring. Frontiers in Psychology
Ambitious and complex research design to tease apart learning-related emotions, self-monitoring, and mental model development in serious games. Cool: learning-related emotions are measured at several time points and analyzed via sequential PLS-SEM. Also cool: mental model development is measured through the convergence of student mental models with expert mental models at two time points.
Results are a bit messy but show complex temporal interrelations between emotions, self-monitoring and learning.
10/10/20
Landers (2014). An Empirical Test of the Theory of Gamified Learning: The Effect of Leaderboards on Time-on-Task and Academic Performance. Simulation & Gaming
A brief test of the mediation chain gamification –> behavior/attitude –> learning outcome. The chosen gamification element is the leaderboard. The experiment is single-blind (students don’t know that there is another condition), and behavior is measured error-free via wiki edits. The mediating effect is not reported sufficiently; we don’t learn about the strength of the effect. All in all, the study design is strong but the analysis is a bit weak.
10/10/20
*Landers (2014). Developing a Theory of Gamified Learning: Linking Serious Games and Gamification of Learning. Simulation & Gaming
Proposes a theory of how gamification affects learning:
1. Mediation: game characteristics –> behavior/attitude –> learning outcomes
2. Moderation: behavior/attitude (affected by game elements) moderating the instruction –> learning outcomes relationship
Criticizes the literature for confounding serious games with gamification and for the resulting “construct proliferation”, which impedes progress.
Also criticizes the practice of assessing the efficacy of “gamification bundles” without distinguishing the effects of single game elements. This practice is only sensible if: 1. these bundles occur frequently in practice and are thus practically interesting; 2. the bundled elements are predicted to interact meaningfully, as suggested by theory.
02/10/20
Sailer & Homner (2020). The Gamification of Learning: a Meta-analysis. Educational Psychology Review
Summarizes the effects of gamification on cognitive, motivational, and behavioral learning outcomes. Includes a sub-analysis restricted to highly rigorous research designs.
However, what we can take from this meta-analysis seems limited, as the effects of specific game elements are not distinguished. We learn only about the effects of “gamification bundles” as researchers happened to implement them, and thus little about the active ingredients. Treating the different outcome types as if they were on the same level instead of hierarchical also seems a bit “basic”. But: nice moderators!
28/09/20
Andel et al. (2020). Do social features help in video-centric online learning platforms? A social presence perspective. Computers in Human Behavior
Study 1: Experiment (2 groups): asynchronous video annotations vs. none –> DV: social presence (d > 1). Nice idea: the experimental group sees consistent “fake” annotations, originally provided by grad students.
Study 2: Social presence’s relationship with satisfaction/perceived learning is moderated by extraversion and conscientiousness.
The only other example that investigates individual differences with respect to social presence (SP). However, this study does not tell us about antecedents of SP itself, although this was the focus of the first study. Result: extraversion –> no moderation, conscientiousness –> moderation
Note: Example of moderated regression procedure
Note: Simple slopes (Aiken & West, 1991)
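The moderated regression and simple-slopes procedure noted above can be sketched as follows (my own toy example with made-up variable names and simulated data, not the paper’s analysis code):

# Toy moderated regression with simple slopes (Aiken & West, 1991); simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"social_presence": rng.normal(size=n),
                   "conscientiousness": rng.normal(size=n)})
df["satisfaction"] = (0.4 * df["social_presence"]
                      + 0.2 * df["conscientiousness"]
                      + 0.3 * df["social_presence"] * df["conscientiousness"]
                      + rng.normal(size=n))

# The interaction term carries the moderation effect.
model = smf.ols("satisfaction ~ social_presence * conscientiousness", data=df).fit()
print(model.params)

# Simple slopes of social presence at +/- 1 SD of the moderator.
b = model.params
for m in (-df["conscientiousness"].std(), df["conscientiousness"].std()):
    slope = b["social_presence"] + b["social_presence:conscientiousness"] * m
    print(f"simple slope at moderator = {m:+.2f}: {slope:.3f}")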
22/09/20
Feldon et al. (2019). Cognitive Load as Motivational Cost. Educational Psychology Review
Attempts to bridge cognitive load theory (CLT) with the cost component of expectancy-value-cost theory (EVCT) to explain motivational processes in instruction. Good review of EVT and EVCT.
Didn’t fully understand the proposition that CL can be seen as motivational cost.
19/09/20
*Chambliss (1989). The Mundanity of Excellence: An Ethnographic Report on Stratification and Olympic Swimmers. Sociological Theory
Great, flamboyant writing: “One hundred and twenty hands should have touched, one hundred and nineteen did touch, and this made Schubert angry. He pays attention to details.” (p. 73). Fascinating insight into the characteristics of different levels of performance in swimmers. Stratification is such that these are not mere quantitative differences; instead, swimmers operate in different worlds with surprisingly little movement between them. Excellence is mundane in that it is carried by (often) small changes in habit and that it is a regular experience for superlative performers.
(Talent as a concept is worthless?)
16/09/20
Dang et al. (2020). Why Are Self-Report and Behavioral Measures Weakly Correlated? Trends Cogn. Sci.
Points to the difference between behavioral measures that are built to maximize within-person variance (e.g. the Stroop task: everybody experiences the effect) and self-report measures that work through between-person variance (trait self-control: people differ widely). High within-person variance yields low reliability, whereas low within-person variance yields high reliability. Reliability is the upper bound for the possible correlation, hence the weak correlations between behavioral and self-report measures.
Implication I draw from this: validity is not necessarily lacking if self-report and behavior don’t converge. Instead, it could be due to low-reliability behavioral markers.
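The “reliability as upper bound” point can be made precise with the standard attenuation relation from classical test theory (a textbook result, not a formula taken from the paper): r_observed = r_true × √(r_xx × r_yy), so |r_observed| ≤ √(r_xx × r_yy), where r_xx and r_yy are the reliabilities of the two measures. With, say, a behavioral marker at r_xx = .40 and a questionnaire at r_yy = .80, even a perfect true correlation could not show up as more than about .57.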
14/09/20
Motz et al. (2018). Embedding Experiments: Staking Causal Inference in Authentic Educational Contexts. Journal of Learning Analytics
Despite the preference of Learning Analytics for observational (“second-hand”) data, the field should consider doing more experiments, specifically embedded (in-vivo) experiments, so the causal question of “what brings learning?” may be better answered (causality from observational data, e.g. DAGs, is only mentioned in passing).
–> “minimal contrastive approach” vs “Dream Team” vs “pragmatic trials” (p.50f).
–> good examples of experiments in cog/psych/LA
11/09/20
Bulfin et al. (2014). Methodological capacity within the field of “educational technology” research: an initial investigation. British Journal of Educational Technology
“Active” educational technology researchers were sampled and asked to indicate their proficiency in different research methods. IMO, the results are not that telling, as these methods are extremely diverse, and by sampling from all researchers (applied, lab-setting, quant, qual, etc.) you will naturally find “missing” proficiency. Why should people be proficient in every method? (Some interesting references regarding EdTech critique.)
09/09/20
Borsboom et al. (2020). Theory Construction Methodology: A practical framework for theory formation in psychology. Preprint
Criticizes the monopoly of the hypothetico-deductive mode of theory testing.
Describes steps for systematic theory formation and exemplifies them through mutualism theory. –> A non-empirical approach to theory generation?
Highlights the importance of formal models that allow for mathematical analysis and simulation. Empirical phenomena lead to proto-theory through abduction; abduction is not sufficiently explained here, though.
07/09/20
Grosz et al. (2020). The taboo of explicit causal inference in nonexperimental psychology. Preprint
Talks about the perils of not making explicit that the goal of most research is causal explanation. Observational research should therefore not semantically weasel around causal claims, which makes them almost unfalsifiable, but instead embrace the challenge by taking appropriate measures like regression-discontinuity designs and DAGs.
(good references on causal stuff)