All MOOCs are born equal in dignity, but not in much else
Having completed, or being well on my way to completing, about 15 MOOCs, and having tried a handful more, I find it evident that no two MOOCs are the same; which is right and proper, the organization of a course being, after all, down to the professor's preferences − and also, of course, subject to the specificities of the subject matter.
All MOOCs are not created equal, then, from a purely objective point of view. Subjectively, they are not all the same either. Some MOOCs make a strong impression, others are bound to just fade away, others still are fated to be dropped. It is therefore a valid question to ask what makes a good MOOC.
The ideal analytical method
Scientifically, the correct way to answer that question would be to describe each MOOC with a fair number of objective characteristics (we'll come to that), then ask students for their impressions of the MOOC at least three times: once in the first two weeks (before the “second-wave drop-off”[1] kicks in), once around the end, and once a couple of months afterwards (once the euphoria of having passed the course is over and people have had time to start forgetting the course − I assume here that the principal objective in taking MOOCs is to learn things for the longer term rather than to get a certificate and quickly forget everything). So for each course we have three outcomes: a “stickiness value” (do students stick with the course?), an immediate appreciation score, and a medium-term appreciation score. Using all that data, we can build classification and regression models to find out which characteristics are predictors of our three outcomes.
(We could also try to cluster students according to their preferences, and eventually tailor courses towards a number of student profiles.)
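For concreteness, here's a minimal sketch of what that modelling step could look like, assuming such a dataset existed. Everything in it (the column names, the toy rows, the choice of scikit-learn estimators) is invented purely for illustration; it is not a description of any real study.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Toy data: one row per (student, course) response. Column names and
    # values are made up purely to show the shape of the analysis.
    df = pd.DataFrame({
        "lecture_type":   ["classroom", "slides", "slate", "classroom"],
        "fast_delivery":  [1, 0, 1, 1],
        "multiple_tries": [1, 1, 0, 1],
        "length_weeks":   [14, 8, 10, 12],
        "stuck_with_it":  [1, 0, 0, 1],   # outcome 1: still there after week 2?
        "score_now":      [9, 5, 4, 8],   # outcome 2: appreciation at course end
        "score_later":    [8, 4, 3, 9],   # outcome 3: appreciation months later
    })

    features = ["lecture_type", "fast_delivery", "multiple_tries", "length_weeks"]

    def build_model(estimator):
        """One-hot encode the categorical criteria, pass the rest through."""
        prep = ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), ["lecture_type"])],
            remainder="passthrough",
        )
        return make_pipeline(prep, estimator)

    # Classification for the binary "stickiness" outcome, plain regression
    # for the two appreciation scores.
    stickiness = build_model(LogisticRegression()).fit(df[features], df["stuck_with_it"])
    immediate = build_model(LinearRegression()).fit(df[features], df["score_now"])
    longer_term = build_model(LinearRegression()).fit(df[features], df["score_later"])

    # With a real dataset, inspecting the fitted coefficients would show which
    # course characteristics predict each of the three outcomes.

The clustering idea in the parenthesis above would work on the same sort of table, just grouping students by their response profiles (with something like KMeans) instead of predicting the outcomes.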
But then of course, I don't have a large dataset. I have, in fact, a piss-poor sample of just myself, with all the biases that entails and fewer data points than classification criteria. So let's forget about the scientific method for now, and just review some of the defining characteristics of the MOOCs I've taken, and from that sample try to draw conclusions about what I like and what I don't like in a MOOC.
What's in a MOOC?
(That which we call a great course, by any other token would smell as sweet… er…)
Here are our classification criteria:
- Real course: is this course based on a “real” course at the institution in question? Levels: No, Partly, Fully, Actual course runs concurrently;
- Multi-part course (e.g. Statistics at Berkeley);
- Part of a sequence of courses;
- Length of course: in weeks (for multi-part, take the total);
- Lecture type: Classroom videos, Talking head, Talking head with infographics/animations, Slides with voice-over, “Slate” (or tablet-style). In case of a mix, we'll just go with the most common type. (In a thorough analysis we'd have to quantify the mix, e.g. 40% talking head, 60% slides with voice-over);
- Native English-speaking lecturer(s): Yes, No. (In a full study we'd have to switch this to native speaker of the course language, and add some data about subtitles in other languages);
- Speed of delivery: does the lecturer speak quickly or slowly? To simplify, I'll put in the “fast” group the lecturers who make significant use of hand/body language for emphasis;
- Tone: Conversational, Dry, Passionate (note that the latter is highly correlated with fast delivery, so it might not be such a great criterion);
- Printable resources: None, Transcript, Lecture Slides, Outline/summary;
- Links to additional material: e.g. research papers;
- Length of video segments (average): 0-5 minutes, 5-15 minutes, 15-25 minutes, > 25 minutes;
- Length of video segments (maximum): 0-5 minutes, 5-15 minutes, 15-25 minutes, > 25 minutes;
- Assignment difficulty: from 1 to 5, where 1 is very easy (immediate answer) and 5 requires multiple hours of work. This isn't so simple to judge: quizzes are quick to answer but may be hard, while interactive tools may take longer to use even when the underlying problem is simple. So it's a mostly subjective rating.
The rest of the criteria are simple yes/no:
- Quick questions between video segments
- Ungraded practice problems or worked examples
- Homework: quizzes
- Homework: numeric / formula input
- Homework: essays
- Homework: code / programming / offline tool using
- Homework: online, interactive, custom tools
- Homework: multiple tries allowed
- Midterm(s)
- Final exam
- “Collaborative” focus on assignments: e.g. dedicated forums for each problem in a set, etc.
- Guided discussion on forums
Note that I count as one “course” the sum of all parts of a multi-part course. For instance, CS169 SaaS at Berkeley is only one course (it doesn't matter that students get two certificates), but self-contained courses that are part of a series count as distinct courses. There's an element of subjectivity here, but broadly, I count CS169 and Stat2X at Berkeley, or BIOC372x at Rice, as multi-part courses, while I count ANU's Astrophysics courses and Harvard's MCB80x as multiple courses in a sequence. (A sketch of how one such course record might be encoded follows below.)
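To make the criteria concrete, here's a rough sketch of how a single course record could be encoded. The field names, the enum values, and the example at the bottom are my own hypothetical shorthand rather than an actual dataset; the CS169 values are rough recollections, not checked facts.

    from dataclasses import dataclass, field
    from enum import Enum

    class LectureType(Enum):
        CLASSROOM = "classroom videos"
        TALKING_HEAD = "talking head"
        TALKING_HEAD_GRAPHICS = "talking head with infographics/animations"
        SLIDES_VOICEOVER = "slides with voice-over"
        SLATE = "slate / tablet-style"

    @dataclass
    class MoocRecord:
        name: str
        real_course: str                  # "No" / "Partly" / "Fully" / "Concurrent"
        multi_part: bool
        part_of_sequence: bool
        length_weeks: int                 # total across parts for multi-part courses
        lecture_type: LectureType         # most common type if mixed
        native_speaker: bool
        fast_delivery: bool
        tone: str                         # "Conversational" / "Dry" / "Passionate"
        printable_resources: list = field(default_factory=list)
        links_to_extra_material: bool = False
        avg_segment_minutes: str = "5-15"     # bucketed, as in the list above
        max_segment_minutes: str = "15-25"
        assignment_difficulty: int = 3        # 1 (immediate) to 5 (hours), subjective
        quick_questions: bool = False
        practice_problems: bool = False
        hw_quizzes: bool = False
        hw_numeric_input: bool = False
        hw_essays: bool = False
        hw_programming: bool = False
        hw_interactive_tools: bool = False
        hw_multiple_tries: bool = False
        midterms: bool = False
        final_exam: bool = False
        collaborative_assignments: bool = False
        guided_forum_discussion: bool = False

    # Illustrative record (values are rough, from memory):
    cs169 = MoocRecord(
        name="Berkeley CS169x (SaaS)",
        real_course="Fully",
        multi_part=True,
        part_of_sequence=False,
        length_weeks=12,                  # illustrative, not checked
        lecture_type=LectureType.CLASSROOM,
        native_speaker=True,
        fast_delivery=True,
        tone="Passionate",
        hw_quizzes=True,
        hw_programming=True,
        hw_multiple_tries=True,
        collaborative_assignments=True,
    )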
My MOOCs
So that's a good handful of criteria! Let's see how (some of) my MOOCs fare. I've actually tabulated it all but Blogger doesn't seem to support uploading of random file types, so until I find someplace to host it, here's an unscientific look at the data:
The top-scoring courses (MIT's 7.00x, Berkeley's CS169x, Caltech's Ec1011x, and ANU's ASTRO1x[2]) are a mixed bunch. They all have fast speakers with either a passionate or conversational tone. Apart from the Astrophysics class, they are based on actual courses (Caltech's was even run concurrently with the on-campus class). The first two had classroom videos, Caltech's was mostly slate / tablet, and ANU's is about two-thirds lecturers in front of infographics and one-third slate.
As far as homework is concerned, they all allowed multiple tries for most of the problems. The nature of the homework is varied: 7.00 shone through its use of custom interactive tools, CS169 was heavily about programming (well, that's the point of the course, innit?), and the other two are focused on numeric or formula input. Apart from ASTRO1, the difficulty and homework duration were very much on the high side; Ec1011 was clearly the most difficult course I took, and I did spend many hours on the other two.
All courses put a focus on collaboration between students, by providing links to the relevant forum sections on the appropriate pages.
Individually, the courses shine in different ways:
- 7.00's excellent lectures, additional videos, and immense wealth of tools make it by far the best course, MOOC or actual class, that I've ever taken. To say I'm anticipating this summer's 7.QWB with some trepidation would be an understatement; I wish MIT's Biology department would put up an XSeries.
- CS169 was good because of the subject matter, the passion the lecturers put into the course, and the programming assignments. A couple of the quizzes were, broadly speaking, a let-down. I also appreciate that there is no exam: the homework is sufficient. The forums were pretty good too.
- Ec1011's big selling point is the homework's difficulty. You get to spend many hours on it every week (roughly speaking, I spent every Saturday morning doing Ec1011 homework for the duration of the course, sometimes overflowing well into the afternoon). Prof Rangel's philosophy of “mastery teaching” is great: you get a large number of tries (10) for each problem and you're encouraged to discuss the problems on the forums, as long as you don't actually post the solutions. Overall, it means you get intimately familiar with the (albeit simple) models economists use.
- ASTRO1 is not so much about grading, as I wrote earlier, so the homework tends to be ridiculously easy. But the lectures are great (thanks to the lecturers' enthusiasm), the accompanying material (reference notes, worked examples, etc.) is very good, and the most brilliant idea is the weekly mystery that I've mentioned before; along with the accompanying forums it means we do some intriguing collaborative problem-solving that integrates everything we've learned in the class.
Rice's BIOC372.1x narrowly misses a top rating because of the quiz-based homework with only one try. That doesn't mean it's impossible to get a good grade (I did), but it makes doing the quizzes a chore rather than a learning opportunity in its own right. I appreciate that the nature of immunology means it's more about memorizing things than acquiring problem-solving skills, but I'm sure there are ways to make the homework less annoying.
Harvard's SPU27x gets an honourable mention too, although I dropped the homework (wasn't interested enough) and downgraded to auditing the course rather than passing it. Basically, the course itself (teaching about science through the medium of cookery) is a great idea, and the demonstrations by guest chefs were great. Some of the labs were kind of interesting (molten chocolate cake has become a household classic) though I skipped through most (I wasn't particularly interested in measuring the elastic modulus of steak as it cooks, for instance).
The worst classes are the ones I dropped, i.e. Mount Sinai's Introduction to Systems Biology, Toronto's Bioinformatics Methods, and Harvard's PH525 Data Analysis for Genomics. I won't go into too much detail about Toronto's; basically, the course wasn't what I expected or needed (it's better seen as a hands-on companion to a bioinformatics course; I guess I could try it again now that I've finished Peking U's introduction to bioinformatics).
Mount Sinai's Introduction to Systems Biology suffered from long, purely slide-based lectures with a voice-over delivered in a sing-song voice, explaining (badly) ideas that are very complex in nature, making me feel completely out of my depth; I had to watch each segment two or three times to gain an understanding. The poor quality of the recordings (you could hear background noise such as police sirens driving by…) didn't help a bit. There wasn't really any homework besides a weekly quiz and a couple of peer-graded questions. I clung on for two weeks, then decided my sanity was worth more than that. It's a shame, as (in the absence of an MIT Biology XSeries…) Mount Sinai's 5-course Systems Biology Specialization looks like the best match for where I'd like to take my career (somewhere at the intersection of computer science, big data, and biology/life science).
Harvard's PH525 is a different kettle of fish. I just didn't have the mental bandwidth to commit the required effort to the course. The lack of actual homework for the first two weeks (just “understanding checks” in the form of quizzes) didn't help me get involved. Also, Prof Irizarry had a tendency to sway from side to side in the lectures; since I mostly watch lectures on the bus or train, it made me seasick[3].
Conclusions and reflections
Based on that skewed analysis of the MOOCs I did, can I draw conclusive, er, conclusions about what makes a good MOOC? Not really, but I can put forward a few points:
- Classroom videos are better, as they make me connect to the course more. Failing that, “slate” (tablet-style) or lecturers-with-infographics (ANU's ASTRO1 is a good example of that) does the trick.
- Shortish (about 10 minutes) video segments interspersed with quick questions, please.
- The speaking qualities of the lecturers are obviously of great importance.
- Downloadable or printable resources are very welcome. Pointers to additional materials are too, but somewhat less so.
- Homework should be seen as a learning opportunity in its own right: rather than focusing on checking that students have learned the lesson, it should lean towards a problem-solving format. Homework should be hard (or at least long), but multiple tries should be allowed and collaboration between students on the homework should be encouraged.
- I'm frankly doubtful about the overall utility of final exams in the grand scheme of things. I think something like ASTRO1's “mystery” is the gold standard: a recurring problem with additional hints every week, that allows students to integrate every lesson's knowledge in order to bring about final understanding. Note that this approach is very well-suited to programming classes, too.
- I am broadly indifferent to whether a course is offered all in one go or split into two or three parts. I suppose splitting means people are more likely to register (it doesn't feel like committing to 10 or 15 weeks' worth of work), but if anything I'd prefer everything in one go (no need to register twice, and no risk of seeing the second half of a course rescheduled to the other half of the year).
Now to take this further… Does anybody know of good public datasets about MOOCs? Have any studies measured students' opinions of MOOCs several months after the courses have ended?
You can find links to most of the courses I mention in the Completed courses page. As for the others:
Footnotes
[1] This is totally unsubstantiated, but I'd think there are two initial waves of drop-outs: one in the very first week, and possibly even before that, when students realize the course isn't for them (wrong difficulty level, wrong appreciation of the subject, etc.); and one closely afterwards, when students basically throw their hands in the air and decide that although they are interested in the subject, the MOOC itself doesn't fit their requirements (it is a “bad MOOC” from their perspective). Here, I am concerned with this “second wave” − what makes people give up on a MOOC on a topic they are interested in?
Of course there are other drop-out causes: lack of time, interference from the real world or indeed other MOOCs, a late realization that the subject isn't so interesting after all, etc. But these would tend, I believe, to be more or less evenly spread out throughout the course program.
[2] Sometimes I'll just drop the systematic “x” suffix in edX course labels.
[3] Hey, I didn't say I had only good reasons to drop a course!