Back in 2000, Jeremy Roschelle and colleagues published this review of the research to date on computer-based technology in schools. While comprehensive, the article skews heavily toward more progressive applications, and away from the ways computers were being used to support repetitive drills and other more traditional pedagogical approaches.
They note that research results have been inconclusive for a number of reasons:
- the wide variety of specific pieces of hardware and software being used
- the blurring of the effects of technology with those of other concurrent reforms within schools
- the lack of decent longitudinal studies
Following from the title of the paper, the authors look first at how computer use in schools relates to a number of findings from cognitive research into learning (the 'how'), and then provide a brief review of the application of computers to a number of different subject domains (the 'what').
In terms of cognition, they assert (drawing on examples) that computers can provide excellent learning support, as suitable software applications can create learning environments that:
- actively engage the student
- facilitate group participation
- provide interaction and rapid feedback
- are related to real-world contexts
In terms of subject matter, they discuss applications in:
- science (primarily interactive modelling and visualisation tools)
- mathematics (again, dynamic interfaces for visualising abstract ideas)
- 'social studies, language and the arts' (briefly mentioning applications that move students from being appreciators to creators)
The authors identify developing teachers' skills, revisiting assessment, modernising curricula, and creating cultures of change as critical challenges to introducing these types of technologies into schools.
What I found most intriguing about the paper was the repeated assertion that various applications had dramatically improved students' ability to comprehend concepts and their application, but had limited impact on more traditional test scores. In one typical case, students 'scored about the same on standardised math tests, but showed significant improvement in their ability to solve complex problems'. Beyond the initial question of 'Can our tests really be that crap?', this raised several thoughts for me:
- if certain applications can improve conceptual understanding and application, but not fact recall and rote procedure, then we need to treat the two as distinct domains of knowledge
- given the limited time available for learning, focusing time on 'conceptual learning' will necessarily come at the cost of 'fact recall' achievement
- this has probably been one of the key reasons institutions have not pushed harder in this area: the gains were not being measured by their KPIs
- if we take it that conceptual knowledge is more valuable than rote knowledge, any institution that adopts this type of teaching (including, but not limited to, this kind of technology use in class) will appear to perform poorly by widely established metrics
- an institution trying these methods will need to convince parents that test scores are not the primary objective, and will ideally have to work with the institutions that students move on to (high schools, universities, etc.) to give them an understanding of how the true merit of the students should be judged
The issue was raised eloquently by Postman and Weingartner in their strident 'Teaching as a Subversive Activity', when they acknowledged the impact that the critical analytic approach they take to education would have on students' standardised test scores.
Beyond test scores and implementation challenges, Roschelle and his co-authors struck a chord with me when they pointed out that, at the end of the day, '[t]ime spent preparing students to do well on numerical calculation tests, vocabulary, or English mechanics cannot be spent on learning about acceleration, the mathematics of change, or the structure of Shakespeare's plays.'
On a side note, the description of the ThinkerTools application intrigued me enough to go and see what they have been up to over the past decade. It looks like UC Berkeley continued tinkering with their tools, but the project has sadly been very quiet since about 2006, and no downloadable applications are available.
Another area I am keen to explore (and a big hello to you, Ben) is the use of simulations of complex systems to teach concepts without the underlying maths (p.87 of the paper). In particular, Resnick's 'Turtles, termites and traffic jams: Explorations in massively parallel microworlds' sounds intriguing.