What can we learn from Stanford University’s free online computer science courses?
At the end of July 2011, Stanford University announced that three introductory one-term undergraduate courses would be available free as online distance learning courses during the October to December 2011 term. Each course is taught by people who are leading figures in their fields, and in some cases more or less the leading figures.
Here are links to the descriptions of each of the courses:
- Machine Learning, taught by Professor Andrew Ng, Director of the Stanford Artificial Intelligence Lab, which is the main AI research organisation at Stanford University;
- Database Design, taught by Jennifer Widom, Professor and Chair of the Computer Science Department at Stanford University;
- Artificial Intelligence (AI), taught by Sebastian Thrun, Research Professor of Computer Science at Stanford University, and Peter Norvig, Director of Research at Google (who was a keynote speaker at the 2007 ALT Conference).
The official descriptions show that the first two courses are run fully as initiatives of the Stanford University School of Engineering, whereas the third is run as a partnership between a start-up company called Know Labs and the Stanford School of Engineering.
Helped by plenty of media coverage (including from the New York Times, The Daily Telegraph, and The Independent), tens of thousands signed up for each of the three courses: in the case of the AI course, around 90,000 had enrolled within two weeks of launch, rising eventually to over 130,000 from over 190 countries, with a median age of ~30. Around 50,000 signed up for each of Machine Learning and Database Design.
What follows is an “interim report from the field” from a committed participant on the AI course, which commenced in October. It draws in part on the weekly reports I am writing about the subjective experience of being a learner on the course.
I think there are several overlapping reasons why people in FE and HE should take the Stanford initiative particularly seriously:
1. A “world class” institution with “world class” teachers seems to be in the process of biting the bullet, by making some of its undergraduate courses freely available globally, on a really big scale, whether or not with a view to charging students for such courses in the future. Two insouciant tweets by Sebastian Thrun just before the course started may be indicative of what is to come: “Fun meeting with Stanford’s president today. Perhaps more courses will come online in the near future.” “Who here would love to get a CS Master’s degree online, if it is of Stanford quality and only costs $2000 in tuition? Please reply.”
2. Knowhow is being gained about:
- the processes that need to underpin successful very large-scale delivery;
- the actual costs of production and of delivery;
- how to make the content hosted on cloud services such as YouTube work seamlessly with assessment systems hosted by a course provider (see the sketch below).
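To make the last of those points a little more concrete, here is a minimal sketch (my own, not anything used by Stanford) of one way a course provider might knit a YouTube-hosted lecture clip together with its own assessment system: the YouTube IFrame Player API is real, but the element id, video id, quiz timing and the /api/quiz-response endpoint are all hypothetical placeholders.

```typescript
// Minimal sketch, not the Stanford implementation. Assumes the YouTube IFrame
// Player API script (https://www.youtube.com/iframe_api) is included on the
// page, a <div id="lecture-video"> placeholder exists, and the course provider
// exposes a hypothetical endpoint /api/quiz-response for recording answers.

declare const YT: any; // global object provided by the IFrame API

const QUIZ_TIME_SECONDS = 95; // hypothetical point in the clip where a quiz appears
let player: any;
let quizShown = false;

// The IFrame API calls this global function once it has finished loading.
(window as any).onYouTubeIframeAPIReady = () => {
  player = new YT.Player('lecture-video', {
    videoId: 'VIDEO_ID', // placeholder for the lecture clip's YouTube id
    events: {
      onStateChange: (event: any) => {
        if (event.data === YT.PlayerState.PLAYING) {
          pollForQuizPoint();
        }
      },
    },
  });
};

// Poll the playback position; at the quiz point, pause the video and ask.
function pollForQuizPoint(): void {
  if (quizShown) return; // don't start another poll once the quiz has run
  const timer = window.setInterval(() => {
    if (!quizShown && player.getCurrentTime() >= QUIZ_TIME_SECONDS) {
      quizShown = true;
      window.clearInterval(timer);
      player.pauseVideo();
      askQuestion();
    }
  }, 500);
}

// Record the learner's answer with the (hypothetical) assessment service,
// then resume the lecture.
async function askQuestion(): Promise<void> {
  const answer = window.prompt('Quick check: what does P(A|B) equal?'); // stand-in for a real quiz widget
  await fetch('/api/quiz-response', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ videoId: 'VIDEO_ID', answer }),
  });
  player.playVideo();
}
```

In practice the prompt would be replaced by a proper quiz widget and the request would be authenticated, but the division of labour (video served from YouTube, answers recorded by the course provider) is the integration problem the bullet above refers to.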
As an aside, the AI course is characterised by what I would describe as quirky production values, which are anything but slick – students seem not to be put off by this – and which create for the learner an uncanny sense of receiving personal tuition. Here, for example, is the AI course’s 210-second explanation of Bayes’ Rule.
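For readers who have not met it, Bayes’ Rule is the standard formula for updating a probability in the light of new evidence; the general statement (not a transcript of the course video) is:

\[ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} \]

In words: the probability of a hypothesis A given evidence B can be computed from the probability of the evidence given the hypothesis, weighted by the prior probability of the hypothesis and normalised by the overall probability of the evidence.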
3. An understanding can be gained about how the effort of volunteers can be marshalled in tasks like translation. In the case of the AI course, over 2,000 people volunteered as translators, and the course videos are now captioned in 18 languages. Volunteering might need incentivising by fee remission if a course is not free. Alongside this there will be scope to gain a better understanding of how informal peer support structures and processes can develop outside the formal organisation of a traditionally structured course of this kind, and the extent, if any, to which a course provider needs to take steps to “seed” peer support. In the case of the AI course there is a burgeoning substructure of informal discussion about, and peer support for, the course “out there” on the Internet. Some of this peer support is of very high quality, and it can be found on several different channels – including Reddit, Google Groups, and Google Plus – none of which was organised by the course providers.
4. A mass of data is being collected about, for example:
- how learners on really large scale courses interact with content;
- the behaviours of learners as homework and other deadlines approach (yes, they do leave things to the last minute…);
- the effectiveness or otherwise of this kind of content for learning a mathematics-based curriculum;
- the interplay between the difficulty of different parts of a course and drop-out rates. (Note that 46,000 students – about half the students enrolled on the advanced track – submitted homework by the week one deadline; by the week two deadline the number submitting homework had dropped to 37,000: there is evidently a substantial attrition rate, at least initially.)
Such data, especially when analysed by the kinds of people whose technical specialisms are machine learning and artificial intelligence, who’ve built the world’s first reliable driverless car (Sebastian Thrun – see this 10-minute 2011 TED talk), or who are experts in machine translation (Peter Norvig – see this one-hour 2007 talk Theorizing from Data: Avoiding the Capital Mistake), should support R&D into automating the provision of personalised formative feedback, and into personalising the provision of the instructional content itself. In 2008 the American National Academy of Engineering chose Advance Personalised Learning (alongside, for example, Provide Energy from Fusion) as one of 14 “grand engineering challenges” for the next 10-15 years. This is an indication that, contrary to the glib and unpersuasive way the term personalisation has been used in British discourse about technology in learning, personalisation is going to be hard. Perhaps solving it will involve the kind of big data that the Stanford experiment will generate, and the minds of those involved in it.
I have been intensely interested in online distance learning for nearly half my working life, ever since (in pre-Web days) I had the luck to be responsible for the UK end of a Danish-led project involving the TUC and its Danish and Swedish counterparts to devise and run online courses for trade union representatives. That experience taught me enough to get deeply involved in online distance learning courses such as The South Yorkshire FE Consortium’s Learning to Teach On-Line course (LeTTOL), and The Sheffield College’s online GCSE English Course. Central to these courses (and to many other similar online distance courses) is the active and costly involvement of teachers, in day-to-day interaction with students.
The three mass courses offered by Stanford are, by definition, far too large in scale for there to be anything other than superficial interaction between students and teachers. But, if the AI course is anything to go by, what Stanford University has worked out, with its short, quirky, quiz-laden videos, is a way to give learners the feeling that they are receiving personal tuition, with plenty of scope alongside this for peer interaction.
There are many ways in which the AI course could be improved, but the underlying model feels right; what is more, it feels replicable for different academic levels and for different disciplines. But whereas Stanford can use its reputation (and its first mover advantage?) to bring together large enough numbers of learners to make courses such as this feasible, for all but a few providers the only way to get the numbers will be to collaborate. In the current climate of competition how will such collaboration be achieved? Without it there is every likelihood that individual providers will be too small to succeed.
Seb Schmoller – [email protected] – is chief executive of the Association for Learning Technology (ALT), an independent membership charity whose mission is to ensure that use of learning technology is effective and efficient, informed by research and practice, and grounded in an understanding of the underlying technologies and their capabilities, and the situations into which they are placed. Seb is also Vice-Chair of the Governing Body of The Sheffield College.
Read other FE News articles by Seb Schmoller:
What did the ALT conference have to say for FE and Skills?
Urgent need to equip teachers with 21st Century technology skills