Dede, C. (2011). Developing a research agenda for educational games and simulations. In Computer games and instruction (pp. 233–250). Charlotte, NC: Information Age Publishing.
In this book chapter, the author describes his five personal assumptions relating to research agendas for educational gaming and simulations (usable knowledge, collective research, what works, treatment effects, and scalability). He notes the importance of these underlying beliefs in formulating a research plan and making decisions. He then provides a detailed explanation of each assumption, how it conflicts with the existing literature and research agendas of that time, and how it would benefit educational gaming research and practice. For example, concerning the assumption "what works," Dede notes that a research agenda for educational gaming needs to examine what works in specific scenarios in real practice rather than simply determining whether a game or simulation is effective in a broad and general sense. The author cites references supporting the great variability and individuality in learning and stresses that, instead of a "one-size-fits-all" approach, research in this area needs to draw on multiple perspectives and pedagogies to determine how an instructional medium can best fit particular groups and learning conditions.
In sharing his personal assumptions, Dede offers a useful critique of the research agendas for educational gaming and simulations that existed at the time. Rather than merely describing the shortcomings of those research plans and that literature, he offers alternatives for how these research agendas could be specifically improved to be most useful in practice and theory. These assumptions could help make educational gaming research more applicable in the classroom.
This chapter connects to my current research interests through its practical suggestions for research agendas, which I think could be applied to many fields of educational research. For example, Dede discusses treatment effects and the fact that a great deal of research focuses on whether a significant difference exists between an educational intervention and what is considered standard practice, with many studies finding "no significant difference" between the two. I appreciate that Dede identifies specific research design flaws that might contribute to this outcome, such as an intervention period that is too short or a sample size that is too small. As I look to research the effects of assistive technology, I would like to keep these ideas in mind, since a great deal of special education research examines the effectiveness of interventions.