Yesterday, during a lecture at KU Leuven on complexity & innovation in public sector IT projects, I received an interesting question: how do you measure the quality of eLearning? *)
How do you measure eLearning and education?
It is a challenging question. The evaluation of education is a serious, painful topic with huge implications: it drives the resource allocation and funding of every corporate training programme in the world, and of the entire public school and university system. What do you fund? How do you measure impact? How do you measure the performance of professors, or even of schools, manuals, and any kind of tool, including eLearning?
My initial instinct was to dodge the answer by deferring the question to experts in pedagogy, because there is a lot of theoretical research on this topic, and a lot of failure and limitation in its results. Measuring the output of education is very difficult: it is costly and it is unreliable. In this regard education faces the same problems as most social sciences, including management and project management. It is almost impossible to create identical control groups, or to perform a double-blind study with nearly identical students and professors, under identical conditions except for the one criterion we want to evaluate. The results of such evaluations are therefore polluted by a large number of external factors that are impossible to isolate and whose impact is unknown – potentially a butterfly effect.
Factors that affect educational outcomes include differences in students, teaching and learning styles, the quality of last night’s sleep, problems with parents or boyfriends, the professor’s experience, manuals, font size and colour, curricula, time and season, language and cultural bias, weather, passion and, of course, the parents’ level of education – which remains, for the moment, the best-known predictor of academic performance.
Then, during yesterday’s lecture, we started to talk about pedagogical design methodologies such as ADDIE or SAM, and how they incorporate assessment and evaluation. And while talking about these theoretical approaches, I realized that we actually know this stuff, we do this, we face these types of questions frequently, and we find ways to work around them. Organizations are always asking why they should invest in eLearning, how they can know that it works, and how effective it is.
From a pure cost point of view, it is quite simple: eLearning is less expensive.
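As a back-of-the-envelope illustration of why the cost comparison usually tilts this way (every figure below is a made-up placeholder, not data from any real programme): classroom training costs recur with every session delivered, while an eLearning module is mostly a one-off production cost.

```python
# Back-of-the-envelope cost comparison: classroom vs. eLearning.
# All figures are hypothetical placeholders, not real programme data.

learners = 500

# Classroom: costs recur for every session (trainer, room, materials).
class_size = 20
cost_per_session = 2500
sessions = -(-learners // class_size)          # ceiling division
classroom_total = sessions * cost_per_session

# eLearning: mostly a one-off production cost plus a small hosting fee.
production_cost = 15000
hosting_per_learner = 5
elearning_total = production_cost + hosting_per_learner * learners

print(f"Classroom: {classroom_total / learners:.0f} per learner")
print(f"eLearning: {elearning_total / learners:.0f} per learner")
```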
From the quality point of view, it is a grey area. Assessment can be built into course design. Yes, we can measure the results of a training programme against initially defined pedagogical objectives. We can even design, at least theoretically, double-blind evaluations. But practical problems make such an evaluation almost impossible for individual courses.
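To make “measuring against initially defined pedagogical objectives” concrete, here is a minimal sketch; the objectives, scores and target thresholds are invented for illustration only. The idea is simply to score each objective before and after the course and check whether the target mastery level was reached.

```python
# Minimal sketch: compare pre/post assessment scores per pedagogical objective.
# Objectives, scores and thresholds are invented for illustration only.

objectives = {
    # objective: (pre-test average, post-test average, target score)
    "explain stakeholder management basics": (35, 78, 70),
    "build a simple requirements matrix":    (20, 65, 70),
    "plan an iterative evaluation cycle":    (40, 82, 75),
}

for name, (pre, post, target) in objectives.items():
    gain = post - pre
    status = "met" if post >= target else "NOT met"
    print(f"{name}: gain {gain:+d} points, target {target} -> {status}")
```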
Academic vs. practical problems
Only academic problems have simple, clear answers, such as “34%” or “white” – the type of experiments that start with “considering a perfectly spherical duck, in zero-gravity conditions...”. Practical problems such as “how should we manage conflicting stakeholder objectives”, “should I hire business experts in my software development team”, or “how do we manage a complex IT project” do not get simple answers, but rather sets of guidelines and tools (by the way, these were also great questions and topics from yesterday’s lecture). We cannot solve these problems with traditional systems of equations. But we can apply systematic methods of research, analysis and design, and try to create organized systems of questions, issues, ideas and correlations that are more or less efficient or appropriate. And then we design and test solutions.
For instance, there is no silver bullet for resolving conflicting stakeholder objectives. But we should certainly start by applying basic stakeholder management, which is fairly straightforward and is the initial step: list your stakeholders, their contact details and their role. Maybe add their interests and objectives. Then we can move to scope and requirements management: listing somewhere the needs of each stakeholder, and organizing and prioritizing these specifications in dependency matrices. We have not solved the conflicting objectives yet, but we now have a framework that helps us understand and manage them, so we can consider options and guidelines for individual conflicts – from having coffees (clearly the best business tool in the world) to organizing meetings, sending questionnaires or going to court.
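A minimal sketch of that framework, with stakeholders, requirements and priorities invented for illustration: a flat register plus a requirement-by-stakeholder priority table. It does not resolve the conflicts, but it makes them visible and manageable.

```python
# Minimal sketch of a stakeholder register and a requirements/priority matrix.
# Names, roles and priorities are invented for illustration only.

stakeholders = [
    {"name": "Head of Training", "role": "sponsor",  "interest": "lower cost per learner"},
    {"name": "IT Security",      "role": "reviewer", "interest": "data protection"},
    {"name": "Students",         "role": "end user", "interest": "flexible, engaging courses"},
]

# Priority each stakeholder assigns to each requirement (1 = low, 5 = high).
priorities = {
    "single sign-on":        {"Head of Training": 2, "IT Security": 5, "Students": 4},
    "offline mobile access": {"Head of Training": 3, "IT Security": 1, "Students": 5},
}

# Flag requirements where stakeholder priorities diverge strongly: these are
# the conflicts to take to a coffee, a meeting or a questionnaire.
for requirement, votes in priorities.items():
    spread = max(votes.values()) - min(votes.values())
    if spread >= 3:
        print(f"Potential conflict on '{requirement}': {votes}")
```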
And, by the way, it is not the IT or physical tool that makes the major difference. Many years ago, not satisfied with MS Project, I asked what the best project management and planning tool was, expecting answers such as Primavera or Jira. To my surprise, my professor – a retired British colonel and a project management and IT practitioner – answered: MS Excel. Since then, I have met hundreds of professionals and tested lots of tools, and the conclusion is still the same. The fundamental project management tool remains the spreadsheet.
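In the same spirit, a project plan really can live in a flat table – the structure any spreadsheet handles natively. The sketch below (tasks, owners and dates are made up) just writes that kind of sheet out as a CSV file.

```python
# A project plan as a flat table: the structure any spreadsheet handles natively.
# Tasks, owners and dates are made up for illustration.
import csv

plan = [
    {"task": "Collect requirements", "owner": "Ana",  "due": "2024-03-01", "status": "done"},
    {"task": "Design pilot course",  "owner": "Marc", "due": "2024-04-15", "status": "in progress"},
    {"task": "Run pilot + feedback", "owner": "Ana",  "due": "2024-05-30", "status": "planned"},
]

with open("plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=plan[0].keys())
    writer.writeheader()
    writer.writerows(plan)
```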
This is how we solve practical real-world problems:
- We analyze and understand the problem, starting with qualitative and then, maybe, quantitative research, analyzing cases and real-world situations **).
- We propose answers and ideas, using tools such as design science, innovation methodologies and brainstorming. We design tools and potential solutions ***).
- We test the tools in simulated and real case studies.
- We throw away the solutions and tools that didn’t work, and start over from step 1, improving the tools that do work (a rough sketch of this loop follows below).
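A rough sketch of this analyze–design–test–discard loop in code terms; the helper functions and the “good enough” criterion are placeholders passed in by the caller, not a real methodology engine.

```python
# Rough sketch of the analyze -> design -> test -> discard/improve loop.
# The helper functions and the acceptance criterion are placeholders.

def solve_practical_problem(problem, analyze, design_candidates, test, good_enough, max_cycles=5):
    """Iterate until a tool or solution works well enough, or give up."""
    for cycle in range(max_cycles):
        findings = analyze(problem)                    # step 1: qualitative/quantitative research
        candidates = design_candidates(findings)       # step 2: propose tools and solutions
        results = [(c, test(c)) for c in candidates]   # step 3: simulated and real case studies
        keepers = [c for c, score in results if good_enough(score)]  # step 4: discard failures
        if keepers:
            return keepers        # improve these in the next round, or stop here
        problem = findings        # feed what we learned back into the next analysis
    return []
```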
So how do we measure (education) quality?
The most practical and efficient assessment tool I know is stakeholder feedback: simple, subjective feedback forms, questionnaires, checklists. Asking students how much they liked a course, what they liked, what they didn’t like, what they learned, and what they wished they had learned.
Yes, the feedback is subjective, and it must be examined critically. We cannot reasonably extrapolate statistically significant results from a small number of questionnaires. The analysis of the answers must be done manually, limitations must be acknowledged, and the validity and reliability of the results must be assessed. But it is always possible to extract valid, useful conclusions. If hundreds of students or other stakeholders declare that something is working, then it probably is. If stakeholders unanimously declare that a project failed, or that a tool or system is not working, then we can reasonably conclude that we should throw it in the garbage: understand the causes, but discard the result. If the opinions of students and stakeholders are split and divergent, we can still analyze their arguments and perform root-cause analysis; by trying to understand the negative feedback, we can improve the solution design.
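A minimal sketch of how such feedback might be triaged; the ratings and thresholds are invented for illustration, and the free-text answers in real questionnaires still need manual, critical reading.

```python
# Minimal sketch: triage subjective feedback ratings (1-5) per course.
# Ratings and thresholds are invented for illustration only.
from statistics import mean, pstdev

feedback = {
    "Intro to project management": [5, 4, 5, 4, 5, 4],
    "Advanced risk workshop":      [2, 1, 2, 1, 2, 2],
    "eLearning pilot module":      [5, 1, 4, 2, 5, 1],
}

for course, ratings in feedback.items():
    avg, spread = mean(ratings), pstdev(ratings)
    if spread > 1.5:
        verdict = "split opinions -> analyze arguments, root-cause analysis"
    elif avg >= 4:
        verdict = "probably working -> keep and improve"
    else:
        verdict = "probably not working -> understand causes, then discard"
    print(f"{course}: avg {avg:.1f}, spread {spread:.1f} -> {verdict}")
```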
What is your opinion?
*) The presentation is here.
**) see also: E. Gummesson, Qualitative Methods in Management Research, London: Sage, 2000;
R. K. Yin, Qualitative Research from Start to Finish, New York: The Guilford Press, 2011.
***) see also: R. J. Wieringa, Design Science Methodology for Information Systems and Software Engineering, Berlin, Heidelberg: Springer Berlin Heidelberg, 2014;
K. Peffers, T. Tuunanen, M. Rothenberger and S. Chatterjee, "A design science research methodology for information systems research," Journal of Management Information Systems, pp. 45-77, 2007.