Tuesday, April 13, 2010

Moodle for electronic assignments

Ron Sharma from USQ is at ANU talking on "Using Moodle, electronic assignment and other tools to support Engineering students" (after a general workshop yesterday). USQ changed from WebCT to the Moodle Learning Management System several years ago; ANU is still making that transition. One difference is that USQ has a central set of standards, whereas at ANU this is devolved to the colleges (Oxbridge style). At USQ assignments are electronically set, submitted, marked and returned to the students. This is necessary as external student numbers are increasing rapidly, making it infeasible to have students submit assignments on paper. USQ also has to respond to the needs of industry, such as the energy sector, for engineering education.

At USQ the same Moodle template is used for all courses at the university, with the same standard information resources, such as contacts for the course and assessment. This is provided at the top of the main screen for each Moodle course, with the specific course content following below. This looks to me to be a very good approach, as students know where to find the information for each course. Having standard conditions for assignment submission also saves confusion.

Professor Sharma reported that using the LMS resulted in a reduction in lecture and tutorial attendance from 70% to 30%. This seems in line with my experience. He suggested that just putting lecture notes and audio recordings online was not sufficient: it was necessary to also provide online tutorial materials. This makes sense to me, and I was assuming that about 25% of students would still want to attend in person. The issue then, from a business point of view, is whether the university can afford to support those students. My suggestion for ANU was to replace the large lecture theatres (which typically seat 100 or more) with small ones seating about 24 students. This would be enough students to make up a class and provide an audience for the lecturer to perform to for audio and video recording.

Professor Sharma teaches energy auditing to the engineering students. This is similar to the green ICT energy audit I teach in Green Technology Strategies.


Wednesday, November 25, 2009

Examiner's meetings for E-learning courses

My students have completed the first COMP7310 Green ICT course at ANU and their marks have been allocated. At this point the university requires that the person running the course and a "second examiner" agree on the marks. The results are then presented to an examiner's meeting where others scrutinise them. This was an experience I did not look forward to: in effect, now that I had finished assessing the students, my peers were going to assess me and see if the course and the results were credible.

The experience turned out to be not as bad as I was expecting. There were about six people present for the School of Computer Science (SOCS) masters meeting. A senior academic chaired the meeting and an assistant operated a laptop connected to a projection screen. This displayed course details stored in a bespoke system created by Bob Edwards at SOCS, called the "FAculty Information System" (FAIS).

Each course was considered in turn. The results for all students in a course are displayed on screen, sorted in descending order by mark. The system shows the raw mark of each student, any scaling applied and the resulting grade, along with the average mark, standard deviation and frequency of each grade. Group members can then ask to see details of particular students, usually those on the boundary between one grade and the next. The grades for all Computer Science courses taken by that student can be displayed, to see if the mark for the course in question is consistent.
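
As a rough sketch of the kind of calculation behind such a display (not FAIS's actual code; the grade boundaries below are illustrative assumptions, not ANU policy):

    <?php
    // Sketch of the statistics an examiner's meeting display could show.
    // Grade boundaries here are illustrative assumptions.
    $marks = [82, 74, 68, 65, 59, 47]; // final marks out of 100

    function grade(float $mark): string {
        if ($mark >= 80) return 'HD'; // High Distinction
        if ($mark >= 70) return 'D';  // Distinction
        if ($mark >= 60) return 'CR'; // Credit
        if ($mark >= 50) return 'P';  // Pass
        return 'N';                   // Fail
    }

    $n = count($marks);
    $mean = array_sum($marks) / $n;
    $variance = array_sum(array_map(fn($m) => ($m - $mean) ** 2, $marks)) / $n;
    $frequency = array_count_values(array_map('grade', $marks));

    printf("Average %.1f, standard deviation %.1f\n", $mean, sqrt($variance));
    print_r($frequency); // e.g. Array ( [HD] => 1 [D] => 1 [CR] => 2 [P] => 1 [N] => 1 )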

In the case of my own course I was worried that I had been too generous with the marks, so I scaled them to the notional ANU average (65/100). My colleagues reassured me that it was up to me to decide if the marking reflected the correct result, not some statistical measure, and the scaling was adjusted to make it less harsh. This is done by entering a PHP function into the system, at which point all of the marks are rescaled and the statistics recalculated. The group found the result acceptable and, along with the second examiner, I was able to sign off on the results.
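
I have not seen inside FAIS, so the following is only a guess at the shape of such a function: a linear rescale which leaves the pass mark fixed and compresses the spread above it (the 0.9 factor and the clamping are my assumptions):

    <?php
    // Sketch of a mark-scaling function of the kind an examiner might
    // enter into FAIS. The linear form and the 0.9 factor are assumptions;
    // the real function is whatever the examiner chooses to write.
    function scale(float $raw): float {
        $scaled = 50 + ($raw - 50) * 0.9; // keep 50 fixed, compress above it
        return max(0.0, min(100.0, round($scaled, 1))); // clamp to 0..100
    }

    echo scale(80.0), "\n"; // a raw 80 becomes 77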

This process worked well and I have suggested that this functionality be added to ANU's Moodle system (called "Wattle"). There may well be a Moodle add-on which already does this (some of this is covered in Daniel Servos' Google Summer of Code project "Student projects/Animated grade statistics report"). Dr Eric McCreath has also produced a Marker program which could be added to Moodle. In addition, the system could be used for analysis of overall student progress trends and of specific topics.

It should be noted that the information from the examiner's board is then entered into the student administration system, so some way to transfer marks from Moodle to the ANU Student Administration System would be useful. ANU uses PeopleSoft Enterprise Student Administration software, and PeopleSoft claim to be able to do some Moodle integration. It would also be useful to be able to extract information from the PeopleSoft system for the examiner's board (as Moodle will only have information on recent courses).
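
Moodle can already export grades as a CSV file, so in the absence of direct integration a small conversion script could bridge the gap. A sketch, assuming hypothetical column names and a simple CSV upload format (the real Moodle export columns and PeopleSoft import format would need to be confirmed):

    <?php
    // Sketch: convert a Moodle grade-export CSV into a simple upload file.
    // Column names and output format are assumptions to be checked against
    // the actual Moodle export and PeopleSoft import definitions.
    $in  = fopen('moodle_grades.csv', 'r');
    $out = fopen('peoplesoft_upload.csv', 'w');

    $header = fgetcsv($in); // e.g. ["ID number", ..., "Course total"]
    $id    = array_search('ID number', $header);
    $total = array_search('Course total', $header);

    while (($row = fgetcsv($in)) !== false) {
        fputcsv($out, [$row[$id], $row[$total]]); // student ID, final mark
    }
    fclose($in);
    fclose($out);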

The administrative processes will also need to be adjusted slightly to allow for flexible learning. The current process assumes that all staff can attend a meeting in person, but where courses are designed and delivered flexibly, the staff involved may not be on campus. The obvious solution would be a meeting by video conference, which the web-based marking system would support well. However, a real-time meeting would still be inconvenient for people in different time zones, and an alternative form-based approach should be feasible.


Monday, August 24, 2009

Evaluation of Online Learning

The report "Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies" from the US Department of Education suggests that online learning is more effective than face to face classroom learning, that blended learning is no more effective than purely online learning and that video and online quizzes do not improve online learning.

It should be noted that the US study has limitations: it is a meta-analysis, that is, an analysis of previous results, not new data collection. Also, few of the studies analysed covered K-12 students, so the results may only be applicable to vocational, university or adult learning.

That said, the report supports the approach I have been using in Green ICT Strategies and will be discussing in Mentored and Collaborative e-Learning for Postgraduate Professional Education, for the Computer Science Seminar, The Australian National University, Computer Science and Information Technology Building, Canberra, 4pm, 27 August 2009.

The full US report is 93 pages (819 kbytes) of PDF. Here is the executive summary, minus footnotes:
Executive Summary

Online learning—for students and for teachers—is one of the fastest growing trends in educational uses of technology. The National Center for Education Statistics (2008) estimated that the number of K-12 public school students enrolling in a technology-based distance education course grew by 65 percent in the two years from 2002-03 to 2004-05. On the basis of a more recent district survey, Picciano and Seaman (2009) estimated that more than a million K–12 students took online courses in school year 2007–08.

Online learning overlaps with the broader category of distance learning, which encompasses earlier technologies such as correspondence courses, educational television and videoconferencing. Earlier studies of distance learning concluded that these technologies were not significantly different from regular classroom learning in terms of effectiveness. Policy-makers reasoned that if online instruction is no worse than traditional instruction in terms of student outcomes, then online education initiatives could be justified on the basis of cost efficiency or need to provide access to learners in settings where face-to-face instruction is not feasible. The question of the relative efficacy of online and face-to-face instruction needs to be revisited, however, in light of today’s online learning applications, which can take advantage of a wide range of Web resources, including not only multimedia but also Web-based applications and new collaboration technologies. These forms of online learning are a far cry from the televised broadcasts and videoconferencing that characterized earlier generations of distance education. Moreover, interest in hybrid approaches that blend in-class and online activities is increasing. Policy-makers and practitioners want to know about the effectiveness of Internet-based, interactive online learning approaches and need information about the conditions under which online learning is effective.

The findings presented here are derived from (a) a systematic search for empirical studies of the effectiveness of online learning and (b) a meta-analysis of those studies from which effect sizes that contrasted online and face-to-face instruction could be extracted or estimated. A narrative summary of studies comparing different forms of online learning is also provided.

These activities were undertaken to address four research questions:
  1. How does the effectiveness of online learning compare with that of face-to-face instruction?
  2. Does supplementing face-to-face instruction with online instruction enhance learning?
  3. What practices are associated with more effective online learning?
  4. What conditions influence the effectiveness of online learning?
This meta-analysis and review of empirical online learning research are part of a broader study of practices in online learning being conducted by SRI International for the Policy and Program Studies Service of the U.S. Department of Education. The goal of the study as a whole is to provide policy-makers, administrators and educators with research-based guidance about how to implement online learning for K–12 education and teacher preparation. An unexpected finding of the literature search, however, was the small number of published studies contrasting online and face-to-face learning conditions for K–12 students. Because the search encompassed the research literature not only on K–12 education but also on career technology, medical and higher education, as well as corporate and military training, it yielded enough studies with older learners to justify a quantitative meta-analysis. Thus, analytic findings with implications for K–12 learning are reported here, but caution is required in generalizing to the K–12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).

This literature review and meta-analysis differ from recent meta-analyses of distance learning in that they
  • Limit the search to studies of Web-based instruction (i.e., eliminating studies of video- and audio-based telecourses or stand-alone, computer-based instruction);
  • Include only studies with random-assignment or controlled quasi-experimental designs; and
  • Examine effects only for objective measures of student learning (e.g., discarding effects for student or teacher perceptions of learning or course quality, student affect, etc.).
This analysis and review distinguish between instruction that is offered entirely online and instruction that combines online and face-to-face elements. The first of the alternatives to classroom-based instruction, entirely online instruction, is attractive on the basis of cost and convenience as long as it is as effective as classroom instruction. The second alternative, which the online learning field generally refers to as blended or hybrid learning, needs to be more effective than conventional face-to-face instruction to justify the additional time and costs it entails. Because the evaluation criteria for the two types of learning differ, this meta-analysis presents separate estimates of mean effect size for the two subsets of studies.

Literature Search

The most unexpected finding was that an extensive initial search of the published literature from 1996 through 2006 found no experimental or controlled quasi-experimental studies that both compared the learning effectiveness of online and face-to-face instruction for K–12 students and provided sufficient data for inclusion in a meta-analysis. A subsequent search extended the time frame for studies through July 2008.

The computerized searches of online databases and citations in prior meta-analyses of distance learning as well as a manual search of the last three years of key journals returned 1,132 abstracts. In two stages of screening of the abstracts and full texts of the articles, 176 online learning research studies published between 1996 and 2008 were identified that used an experimental or quasi-experimental design and objectively measured student learning outcomes. Of these 176 studies, 99 had at least one contrast between an included online or blended learning condition and face-to-face (offline) instruction that potentially could be used in the quantitative meta-analysis. Just nine of these 99 involved K–12 learners. The remaining 77 studies compared different variations of online learning (without a face-to-face control condition) and were set aside for narrative synthesis.

Meta-Analysis

Meta-analysis is a technique for combining the results of multiple experiments or quasi-experiments to obtain a composite estimate of the size of the effect. The result of each experiment is expressed as an effect size, which is the difference between the mean for the treatment group and the mean for the control group, divided by the pooled standard deviation. Of the 99 studies comparing online and face-to-face conditions, 46 provided sufficient data to compute or estimate 51 independent effect sizes (some studies included more than one effect). Four of the nine studies involving K–12 learners were excluded from the meta-analysis: Two were quasi-experiments without statistical control for preexisting group differences; the other two failed to provide sufficient information to support computation of an effect size.
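
In symbols (an editorial gloss, not part of the quoted report), this is the standardised mean difference, with one common definition of the pooled standard deviation:

    d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
    \qquad
    s_{\text{pooled}} = \sqrt{\frac{(n_t - 1)\, s_t^2 + (n_c - 1)\, s_c^2}{n_t + n_c - 2}}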

Most of the articles containing the 51 effects in the meta-analysis were published in 2004 or more recently. The split between studies of purely online learning and those contrasting blended online/face-to-face conditions against face-to-face instruction was fairly even, with 28 effects in the first category and 23 in the second. The 51 estimated effect sizes included seven contrasts from five studies conducted with K–12 learners—two from eighth-grade students in social studies classes, one for eighth- and ninth-grade students taking Algebra I, two from a study of middle school students taking Spanish, one for fifth-grade students in science classes in Taiwan, and one from elementary-age students in special education classes. The types of learners in the remaining studies were about evenly split between college or community college students and graduate students or adults receiving professional training. All but two of the studies involved formal instruction.

The most common subject matter was medicine or health care. Other content types were computer science, teacher education, mathematics, languages, science, social science, and business. Among the 49 contrasts from studies that indicated the time period over which instruction occurred, 19 involved instructional time frames of less than a month, and the remainder involved longer periods. In terms of instructional features, the online learning conditions in these studies were less likely to be instructor-directed (8 contrasts) than they were to be student-directed, independent learning (17 contrasts) or interactive and collaborative in nature (23 contrasts).

Effect sizes were computed or estimated for this final set of 51 contrasts. Among the 51 individual study effects, 11 were significantly positive, favoring the online or blended learning condition. Two contrasts found a statistically significant effect favoring the traditional face-to-face condition.

Narrative Synthesis

In addition to the meta-analysis comparing online learning conditions with face-to-face instruction, analysts reviewed and summarized experimental and quasi-experimental studies contrasting different versions of online learning. Some of these studies contrasted purely online learning conditions with classes that combined online and face-to-face interactions. Others explored online learning with and without elements such as video, online quizzes, assigned groups, or guidance for online activities. Five of these studies involved K–12 learners.

Key Findings

The main finding from the literature review was that
  • Few rigorous research studies of the effectiveness of online learning for K–12 students have been published. A systematic search of the research literature from 1994 through 2006 found no experimental or controlled quasi-experimental studies comparing the learning effects of online versus face-to-face instruction for K–12 students that provide sufficient data to compute an effect size. A subsequent search that expanded the time frame through July 2008 identified just five published studies meeting meta-analysis criteria.
  • Students who took all or part of their class online performed better, on average, than those taking the same course through traditional face-to-face instruction. Learning outcomes for students who engaged in online learning exceeded those of students receiving face-to-face instruction, with an average effect size of +0.24 favoring online conditions. The mean difference between online and face-to-face conditions across the 51 contrasts is statistically significant at the p < .01 level. Interpretations of this result, however, should take into consideration the fact that online and face-to-face conditions generally differed on multiple dimensions, including the amount of time that learners spent on task. The advantages observed for online learning conditions therefore may be the product of aspects of those treatment conditions other than the instructional delivery medium per se.
  • Instruction combining online and face-to-face elements had a larger advantage relative to purely face-to-face instruction than did purely online instruction. The mean effect size in studies comparing blended with face-to-face instruction was +0.35, p < .001. This effect size is larger than that for studies comparing purely online and purely face-to-face conditions, which had an average effect size of +0.14, p < .05. An important issue to keep in mind in reviewing these findings is that many studies did not attempt to equate (a) all the curriculum materials, (b) aspects of pedagogy and (c) learning time in the treatment and control conditions. Indeed, some authors asserted that it would be impossible to have done so. Hence, the observed advantage for online learning in general, and blended learning conditions in particular, is not necessarily rooted in the media used per se and may reflect differences in content, pedagogy and learning time.
  • Studies in which learners in the online condition spent more time on task than students in the face-to-face condition found a greater benefit for online learning. The mean effect size for studies with more time spent by online learners was +0.46 compared with +0.19 for studies in which the learners in the face-to-face condition spent as much time or more on task (Q = 3.88, p < .05).
  • Most of the variations in the way in which different studies implemented online learning did not affect student learning outcomes significantly. Analysts examined 13 online learning practices as potential sources of variation in the effectiveness of online learning compared with face-to-face instruction. Of those variables, (a) the use of a blended rather than a purely online approach and (b) the expansion of time on task for online learners were the only statistically significant influences on effectiveness. The other 11 online learning practice variables that were analyzed did not affect student learning significantly. However, the relatively small number of studies contrasting learning outcomes for online and face-to-face instruction that included information about any specific aspect of implementation impeded efforts to identify online instructional practices that affect learning outcomes.
  • The effectiveness of online learning approaches appears quite broad across different content and learner types. Online learning appeared to be an effective option for both undergraduates (mean effect of +0.35, p < .001) and for graduate students and professionals (+0.17, p < .05) in a wide range of academic and professional studies. Though positive, the mean effect size is not significant for the seven contrasts involving K–12 students, but the number of K–12 studies is too small to warrant much confidence in the mean effect estimate for this learner group. Three of the K–12 studies had significant effects favoring a blended learning condition, one had a significant negative effect favoring face-to-face instruction, and three contrasts did not attain statistical significance. The test for learner type as a moderator variable was nonsignificant.
  • Effect sizes were larger for studies in which the online and face-to-face conditions varied in terms of curriculum materials and aspects of instructional approach in addition to the medium of instruction. Analysts examined the characteristics of the studies in the meta-analysis to ascertain whether features of the studies’ methodologies could account for obtained effects. Six methodological variables were tested as potential moderators: (a) sample size, (b) type of knowledge tested, (c) strength of study design, (d) unit of assignment to condition, (e) instructor equivalence across conditions, and (f) equivalence of curriculum and instructional approach across conditions. Only equivalence of curriculum and instruction emerged as a significant moderator variable (Q = 5.40, p < .05). Studies in which analysts judged the curriculum and instruction to be identical or almost identical in online and face-to-face conditions had smaller effects than those studies where the two conditions varied in terms of multiple aspects of instruction (+0.20 compared with +0.42, respectively). Instruction could differ in terms of the way activities were organized (for example as group work in one condition and independent work in another) or in the inclusion of instructional resources (such as a simulation or instructor lectures) in one condition but not the other.
The narrative review of experimental and quasi-experimental studies contrasting different online learning practices found that the majority of available studies suggest the following:
  • Blended and purely online learning conditions implemented within a single study generally result in similar student learning outcomes. When a study contrasts blended and purely online conditions, student learning is usually comparable across the two conditions.
  • Elements such as video or online quizzes do not appear to influence the amount that students learn in online classes. The research does not support the use of some frequently recommended online learning practices. Inclusion of more media in an online application does not appear to enhance learning. The practice of providing online quizzes does not seem to be more effective than other tactics such as assigning homework.
  • Online learning can be enhanced by giving learners control of their interactions with media and prompting learner reflection. Studies indicate that manipulations that trigger learner activity or learner reflection and self-monitoring of understanding are effective when students pursue online learning as individuals.
  • Providing guidance for learning for groups of students appears less successful than does using such mechanisms with individual learners. When groups of students are learning together online, support mechanisms such as guiding questions generally influence the way students interact, but not the amount they learn.
Conclusions

In recent experimental and quasi-experimental studies contrasting blends of online and face-to-face instruction with conventional face-to-face classes, blended instruction has been more effective, providing a rationale for the effort required to design and implement blended approaches. Even when used by itself, online learning appears to offer a modest advantage over conventional classroom instruction.

However, several caveats are in order: Despite what appears to be strong support for online learning applications, the studies in this meta-analysis do not demonstrate that online learning is superior as a medium. In many of the studies showing an advantage for online learning, the online and classroom conditions differed in terms of time spent, curriculum and pedagogy. It was the combination of elements in the treatment conditions (which was likely to have included additional learning time and materials as well as additional opportunities for collaboration) that produced the observed learning advantages. At the same time, one should note that online learning is much more conducive to the expansion of learning time than is face-to-face instruction.

In addition, although the types of research designs used by the studies in the meta-analysis were strong (i.e., experimental or controlled quasi-experimental), many of the studies suffered from weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and, in many cases, potential bias stemming from the authors’ dual roles as experimenters and instructors.

Finally, the great majority of estimated effect sizes in the meta-analysis are for undergraduate and older students, not elementary or secondary learners. Although this meta-analysis did not find a significant effect by learner type, when learners’ age groups are considered separately, the mean effect size is significantly positive for undergraduate and other older learners but not for K–12 students.

Another consideration is that various online learning implementation practices may have differing effectiveness for K–12 learners than they do for older students. It is certainly possible that younger students could benefit more from a different degree of teacher or computer-based guidance than would college students and older learners. Without new random assignment or controlled quasi-experimental studies of the effects of online learning options for K–12 students, policy-makers will lack scientific evidence of the effectiveness of these emerging alternatives to face-to-face instruction. ...

From: "Evaluation of Evidence-Based Practices in Online Learning
A Meta-Analysis and Review of Online Learning Studies
" by Barbara Means, Yukie Toyama, Robert Murphy, Marianne Bakia, Karla Jones, Center for Technology in Learning, published by US Department of Education, Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service, May 2009 (footnotes have been omitted from this excerpt).


Wednesday, April 15, 2009

Is Blended Learning the Future of Higher Education?

Professor Mike Keppell's afternoon workshop at ANU was "Blended Learning: The Future of Higher Education". The material was mostly familiar to me from having prepared blended learning courses. Some tricky issues include the expectations of students and staff: "am I getting my money's worth?", "is this a real course?", "all the answers are on the web".

Having used blended learning, some of the issues raised in the workshop seemed to me to have been settled. As an example, one surprising issue was whether on-campus students should have access to the materials provided for distance education students. To me it seemed obvious that you would provide the materials developed for distance education to the local students: distance education materials are expensive to develop and will likely be of higher quality than the usual ad hoc lecture notes. However, there may be an equity issue in that the local students will then have an advantage. This seemed to me a silly argument, but still one current at some places.

Another issue was sufficient access to online material: if video is used, then a high speed Internet connection is needed. To me this seemed a non-issue. For disabled access the material will need to be provided in different formats, and as a by-product this will include low bandwidth access. In addition, students should be informed what Internet access they need before they enrol.

Mike also mentioned affordances, that is, making a tool's intended use obvious from its design. There are new designs of teaching spaces to allow for both lectures and discussion. I have looked at this extensively with flexible learning centres.


Learning orientated assessment

Professor Mike Keppell, Director of the Flexible Learning Institute at Charles Sturt University, is in Canberra to deliver some workshops on learning at ANU. This morning it is "Transforming Higher Education through Learning-Orientated Assessment"; in the afternoon there is "Blended Learning: The Future of Higher Education".

In the assessment workshop Mike's emphasis was that assessment could be used as part of the learning (formative assessment), not just at the end to evaluate the student (summative assessment). He also argued for starting with the assessment and making sure it fitted with what was in the course. In retrospect these ideas look obvious, but when deep in the details of designing a course under pressure, there can be a tendency to just tack the assessment on at the end.

Some of the terminology used I found a bit jarring. As an example, Mike used "feed-forward" to indicate that assessment should provide the student with useful feedback to help them later in the course, not just a final mark. But technically speaking, the term "feedback" already covers this, making "feed-forward feedback" a tautology.

One issue relevant to ANU not discussed so far is the role of research. The ANU is a research university and has emphasised research in learning. This is very different to CSU and other teaching-orientated universities. The other issue is undergraduate versus postgraduate education: this is not so much about the formal education the students have as about their assumed maturity.

The workshop was very useful for me in coming to terms with some of the educational theory which I had found frustrating. One example is the use of a "rubric" (a subjective marking tool); I find these tools wordy and vague.
