Tomorrow (14th January 2017) I’ve been asked to speak at one of the regular seminars we provide for doctoral students – I have an interest in educational technology, but I am also, in general, something of a sceptic with regard to technology. Don’t get me wrong. I am not saying that we shouldn’t make use of it, or that it doesn’t have potential. I’m just less than convinced by the claims of those who are trying to sell us the products we need to make it work.
In my view a technology-mediated learning experience cannot possibly be the same as a face-to-face experience. So I thought I would try a little experiment. I only have 20 minutes to speak in the seminar, so I thought I would try to record what I am going to say, and use the PowerPoint slides to make an enhanced audio recording. On the face of it, those who are there will get the same experience as those of you who are reading this and decide to click the link and watch the video. But I bet you don’t. If you are planning to be (or were) present at the seminar, I would be interested in getting feedback on which method you preferred and why. Of course, there is much more to technology-mediated learning than this, but I’m hoping this might be the start of a discussion on what is realistic.
Here’s the (rough) transcript of the video. Doctoral Seminar 14th January 2017_script (It’s a Microsoft Word version. Please let me know via the comments if you would like a plain text format)
Just as a matter of interest, I kept a record of the time I spent on developing this – in terms of the video production it took me about three hours to record it, make the slides and edit it in Adobe Premiere. I used Premiere so I could export an MP4 file to reduce the size – yes, it would probably have been quicker to narrate it directly into PowerPoint, but that, or at least the version I am using, tends to generate very large .WAV files which can’t be uploaded to YouTube. At least you should be able to watch this on a mobile device.
One of the problems with any research endeavour is that you collect a lot of data. Not just the primary data you get from your interviews and so forth (though, if you’re doing it properly, there’ll be a lot of that). Rather, I am referring to the ideas that you generate as you read the literature.
I think students struggle with this. I know I do.
If you just make notes at random, you will eventually have to organise them, and to do that you need an organising principle. All the textbooks suggest that you should have a “conceptual framework” in advance and try to relate your reading to that. “Conceptual framework” is one of those phrases that researchers use, a bit like “ontology, epistemology and axiology”, to frighten new research students.
I’ll try to explain. I’m currently interested in the way information is managed inside Virtual Learning Environments. The reason for my interest is that students are often heard complaining that academic staff use Blackboard, or Moodle, or whatever it might be, “inconsistently”. So the concept of “inconsistency” is one element of my conceptual framework. When I come across something I’m reading that talks about this, I can make a note of the author’s argument and whether I agree with it or not, and why. I might even help myself to a particularly pithy quotation (keeping a record of where I got it from, of course).
That’s simple enough, except that one concept does not make a framework. The point is that you have to have multiple concepts, and they have to be related to each other. Firstly, in creating my framework I should probably define (to my own satisfaction) what I mean by “inconsistency”. It might be a rather hit-and-miss approach to the types of learning material provided (e.g. on one topic there’s a PowerPoint, on another there are two digitised journal articles, on another a PowerPoint and a half-finished Xerte object). It might be that one member of the teaching team organises their material in a complex nest of folders, while another just presents a single item which goes on for pages and pages. Or it might be that one of a student’s modules is organised into folders labelled by week (When did we study Monmouth’s Rebellion – was it week 19, or week 22?) while another is organised by the format in which it was taught (Now, where did she present those calculations – was it in the “lecture”, or the “seminar”?). So for the purposes of organising a conceptual framework it’s not so much defining inconsistency as labelling types of inconsistency. You might say they’re dimensions of inconsistency.
Also, as researchers we try to explain things, so it’s likely that much of the literature will offer explanations. That’s another part of our framework then – explanations, or perhaps we’ll label it “responsibility”. This inconsistency might be the teacher’s fault, for being technologically illiterate, not understanding the importance of information structures, or just being too idle to sort it out properly. Another researcher will argue that it’s the students’ own fault, because that’s the nature of knowledge, and if they spent more time applying themselves and less time on their iPhones… I’m being a bit flippant to make the point that there are always many dimensions to any conceptual framework. You do have to make some decisions about what you’re interested in.
Even if you do, your framework will get quite complicated quite quickly, but it is a useful way of organising your notes, and ultimately it will form the structure of your thesis, or article, or whatever it is you are preparing. Nor will you need all of it: you have to be quite ruthless about excluding data. But I’m getting ahead of myself. I should say why we need a conceptual framework for note making.
One of the problems of making notes is that it tends to be a bit hit and miss. If you’re working at your computer, you probably have lots of files (though you may not be able to find them, or remember what’s in them), but if an idea hits you on the train, or in the kitchen, or in someone else’s office, you might enter it in a note app on your phone, scrawl it on a Post-it, say something into a digital recorder, take a photo of it, or you might, as I do, rather enjoy writing in a proper old-fashioned notebook. The result is that, conceptual framework or not, you have a chaotic mess of notes.

To bring some order to this I recommend the excellent (and free) Evernote, which is available for virtually every conceivable mobile device and synchronises across all of them. Though I do like fountain pens and paper, Evernote is my main note-making tool. (Incidentally, this blog post started life as an Evernote note, as I was thinking about my own conceptual framework – I thought it would be helpful to my students to share this.) As with any digital tool, it is only as good as the way you use it, which takes me back to the conceptual framework. Evernote allows you to create as many “notebooks” as you like, and keep these in “stacks”. Think of a filing cabinet full of manila folders as a stack of notebooks. But you can also add tags to all your notes, which is a way of indexing your folders. (E.g. if you had a filing cabinet full of folders on inconsistency in VLEs, a red paper clip attached to a folder might indicate the presence of a document arguing for teacher responsibility, and a green clip the presence of documents arguing about student responsibilities.) Obviously, with verbal tags you can have as many “coloured clips” as you like.
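For those who think in code, here’s a minimal sketch (Python, purely illustrative – this is not Evernote’s API, and the note titles are invented placeholders) of the filing-cabinet analogy: stacks hold notebooks, notebooks hold notes, and tags cut across all of them like the coloured paper clips.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str
    tags: set = field(default_factory=set)  # the "coloured clips"

@dataclass
class Notebook:
    name: str
    notes: list = field(default_factory=list)

# A "stack" is the filing cabinet; each notebook is a manila folder.
# The references below are invented, purely for illustration.
stack = [
    Notebook("VLE inconsistency", [
        Note("Author A, 2014", "Argues staff lack training...",
             {"teacher-responsibility"}),
        Note("Author B, 2015", "Blames student study habits...",
             {"student-responsibility"}),
    ]),
]

def notes_with_tag(stack, tag):
    """Find every note carrying a tag, whichever notebook it lives in."""
    return [n for nb in stack for n in nb.notes if tag in n.tags]

print([n.title for n in notes_with_tag(stack, "teacher-responsibility")])
```

The point of the sketch is simply that tags, unlike notebooks, let a single note belong to several parts of your conceptual framework at once.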
You do of course have to tag your folders consistently, and you have to bring all your notes together. No matter how good your digital note management app is, it can’t really do anything about the folded Post-it note in your back pocket. So good practice for a research student is, once a week, to bring all your notes together and think about your categories and your tags. (If you do use Evernote as I’ve suggested, you will also be able to print a list of tags, which will help you develop a much more sophisticated conceptual framework.)
The other day I blogged about the gap between the theory of providing material via an institutional VLE, from the perspective of an educational developer, and the reality of doing so as I experienced it as an academic. My feeling was that most of us (academics, that is, though I reiterate, by no means all academics) tend to see it as a content repository, and many students tend to regard it in the same light. Now, as it happens, there has been a recent and very interesting debate about the purpose of VLEs on the ALT Jiscmail list. One of the points made there was that the VLE tends to shape our way of thinking about technology, and I think there is something in that. Of course there are many other tools out there besides VLEs, and I was quite impressed with this attempt to incorporate some of them into Blackboard, posted by a contributor to that debate.
However, for better or worse, Lincoln and many other institutions are likely to continue with some form of VLE for the foreseeable future, and as I said in the last post, I actually deconstructed a VLE site (Blackboard in this case) which had accumulated about five years’ worth of material. One of the first challenges in any kind of research (and I maintain that this is a form of research) is analysis. So, bearing in mind Bourdieu’s warnings about the malleability of classes, and the way the field in which they operate tends to define them, here is a list of the classes of material I found. At first sight it reminded me a little of Borges’s Celestial Emporium of Benevolent Knowledge, insofar as it has very little in common with recognised practice in the field of education.
• PowerPoint slide sets used in lectures, as substitutes for lecture content
• PowerPoint slides designed for use in class discussions
• Word/PDF documents designed as handouts
• Word/PDF documents which are drafts of articles
• Word/PDF documents which contain downloaded articles
• Web links to open-access journal articles
• Web links to journal articles on publishers’ sites that have copyright clearance
• Web links to journal articles on publishers’ sites that do not have copyright clearance
While it looks as though I have emphasised form and function over content here, that’s partly to make the point that form and function tend to dominate technological discourse. I did also give each item up to three subject-based keywords, and the new site is in fact organised by topic, because I thought that would be of more interest to the students. But I thought the listing by form was interesting too, because inherent in it are quite a lot of assumptions about what is helpful for student learning. Yes, there’s a variety of forms, but is the same content available in each form? (No, of course not. Though it should be, if only to promote accessibility.)
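To make that concrete, here is a minimal sketch (Python; the file names and keywords are invented examples, not items from the actual site) of the kind of record I mean: each item carries a form class from the list above plus up to three subject keywords, and it is the keywords that let the new site be organised by topic rather than by form.

```python
from collections import defaultdict

# Invented examples: (item, form class, up to three subject keywords)
items = [
    ("week3_slides.pptx", "lecture PowerPoint", ["epistemology"]),
    ("methods_handout.pdf", "handout", ["interviews", "ethics"]),
    ("draft_article.docx", "draft article", ["epistemology", "ontology"]),
]

by_topic = defaultdict(list)
for name, form, keywords in items:
    assert len(keywords) <= 3, f"{name}: too many keywords"
    for kw in keywords:
        by_topic[kw].append((name, form))

# The same item can now surface under every topic it touches.
for topic in sorted(by_topic):
    print(topic, "->", by_topic[topic])
```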
Form is important in technology. Not everyone can access Word 2010 documents, for example, and certainly not everyone has access to broadband sufficient to download video clips. What does the existence of broken, out-of-date and copyright-infringing material (which, let me reiterate, has all now been removed) tell us about our attitude to providing this material? This is one site in one department in one university, but I’ll bet it’s not atypical. What I would really like to do is a set of multiple case studies of sites in different institutions and different disciplines. The purpose of doing so, as with any case study, is not to generalise, but to learn from what other people are doing and improve practice. Yes, sometimes that will involve being critical (constructively) of practice, but case studies can just as easily uncover excellent and otherwise hidden practice. While the last couple of posts may sound as though I’m very critical of the site as it was previously conceived, I do think it made a lot of good and useful material available to students. (They just couldn’t find it!)
As I say, I think it would be really useful to do some research into this on a wider basis, but there’s an obvious methodological challenge. Since I don’t have access to sites anywhere other than Lincoln, if I ask for participants there’s an obvious risk of only being given access to sites that participants want me to see. On the other hand, extreme cases can be very informative in qualitative research. That’s a discussion for the research proposal, though. On the basis of this case, I think there is an argument to be made that it is too easy for function to follow form, and for both of them to overshadow content, in VLEs and perhaps in e-learning generally.
It occurred to me the other day that I have been working with VLEs (Virtual Learning Environments) in one form or another for getting on for two decades now, and during those twenty years endless articles and books have been churned out on e-learning. I had been going to write something about how technology has transformed educational practice, but actually I don’t think it has – or not, so far, by very much. My role has historically been one of “supporting” academic colleagues with this technology, but it was only recently, when I became a programme leader (with responsibility for my very own module), that I began to think about what kind of support would be useful to me. I’d be the first to admit that I am probably something of a special case: I know our VLE inside out and am very comfortable with the technology. I realise that not everyone shares that knowledge or comfort, so this is inevitably something of a personal take.
Nevertheless, it didn’t take many interactions with actual students to make me realise that the approach to e-learning we had been taking on the doctoral programme I studied on, taught on, and am now leading wasn’t really meeting their needs. (Come to think of it, as a student I hardly ever used the VLE myself.) Let me say now that this is not going to be a normative piece laying down the law about how VLE sites should be structured. I’m sure Lincoln’s doctoral students have their own unique set of needs, and these will be very different from, say, the needs of undergraduate students in other disciplines and at other universities.

That said, to go back to the issue of support: I started out with the idea that people needed to get a hold on how the technology works. I suppose they do, and in fairness that was often the focus of requests for support. (Still is!) And that is what we, as educational developers, have by and large provided, relying on the creativity of colleagues to do something clever with it. I suppose where we have fallen short is that we haven’t really built on that foundation. Having swapped my educational developer hat for an academic hat, I can see why. It’s really challenging to completely redesign a VLE site to match what the students say their needs are. At a programme board last year I reacted to student criticisms of what was provided for them on the VLE by blithely announcing that I would completely redesign it, thinking it would take a few weeks at most. It took six months, and detracted from quite a lot of things I was supposed to be doing – like, er, research. Even now, though the redesign has been launched and seems to have been well received, I’m acutely conscious that I’ve hardly begun to scratch the surface as far as things like learning activities for the students are concerned. Most of the work I have done so far is simply about providing a structure for the various teaching materials that I and other colleagues have provided, along with a little bit of cosmetic work on the menu and home page.
While I said I haven’t been doing research, I do think this exercise has given me the foundations of a theoretical framework for thinking about the contribution VLEs can make to a course. Clearly, if a VLE is to meet the needs of students, there has to be quite significant engagement with both the students and the colleagues who are teaching on the programme. That’s not particularly original: Sharpe and Oliver (2007) make much the same point. Secondly, I think there is a need to think about what sort of contribution the VLE can make to students’ learning. Clearly, the best VLE in the world is no substitute for the university library. Yet in the exercise I have just completed I counted around 400 “learning items” which had been generated over the last five years. These included PowerPoint slides, Prezis, handouts from teaching sessions and guest lectures, podcasts, videos, and quite a few journal articles that (ahem) didn’t appear to have appropriate copyright clearance. (Those have all been removed now.) On top of those there was a whole range of what might be called regulatory documents, such as programme handbooks, ethical approval forms and assignment submission sheets. Clearly that’s a significant and useful resource, but on its own it’s nothing like adequate for doctoral, or even, some would argue, undergraduate study. Even having imposed some sort of structure on all this material, which is really all I have done in the redesign, I’m still not sure where to go next. What learning activities are appropriate? Why? How do I design them? Do I limit myself to what the technology offers? (A fairly obvious danger in simply “training” colleagues to use the technology.)
So this raises the question: what exactly is a VLE for? Maybe that’s better phrased as “what is it not for?” Students, at least in surveys at Lincoln, have often said that they want “consistency” in the way staff use the VLE. Well, yes, but I think there has to be a general agreement about what we can reasonably expect of a VLE. There is clearly a tension between this desire to meet students’ legitimate expectations and the kind of academic freedom that these technologies allow. It doesn’t seem reasonable to me to expect e-learning to take the same form in, for example, modern dance as you would find in chemical engineering. Equally, it could be argued that providing students with material through the VLE detracts from the important skill of literature searching, whether that’s done in a library or through a Google search. Even more importantly, providing them with “all the resources they need”, even if it were possible, is unlikely to encourage them to develop a critical engagement with the literature.
Where does that leave us then? After nearly 20 years of using VLEs have we just ended up with an expensive, badly organised repository of content of dubious value? In some cases undoubtedly, though it would be quite wrong to think that all VLE sites fell into that category. There is some excellent work out there. I’ve been to plenty of conferences where I’ve seen good, innovative and creative practice, and I know from my support role that many colleagues at Lincoln are pushing the boundaries in quite imaginative ways. The challenge is to spread this kind of practice, bearing in mind that such innovation is risky even if the major risk is that academic staff devote more time to their students than to their research. (After all you might not get that grant bid in, or that journal article submitted, and since the teaching grant disappeared in the humanities and social sciences that is by no means a small risk). I do think though that there is a case for more detailed research into what academics actually do in terms of course design with a VLE. But that’s for another post.
Reference
Sharpe, R. & Oliver, M. (2007) ‘Designing courses for e-learning’, in Beetham, H. & Sharpe, R. (eds) Rethinking Pedagogy for a Digital Age: Designing and Delivering E-learning. London: Routledge, pp. 41–51.
This is by way of a bit of self development. The university library has introduced a new piece of software called Talis Aspire, designed to make reading lists a little easier, and I was wondering how best to introduce it to colleagues who are less than enthusiastic users of technology. I’ve also been thinking for a while that it ought to be possible to make reasonable quality videos using simple tools – in this case PowerPoint 2010.
This is very much a first attempt. I realise the text is very small, and there’s no sound at this stage because I wanted to keep the file size low and, anyway, I didn’t have a lot of time. Depending on how well received this is, I may well develop a more accessible version later on. (Any volunteer voice actors out there with a few minutes’ spare time? I envisage a male/female conversation, but it’s not essential.) Anyway, here’s the “proof of concept”.
A colleague has drawn my attention to this excellent post over at Posthegemony. The author argues, quite reasonably, that a lot of online learning models draw from the sciences and are thus unsuitable for use in the humanities.
Now, I’d be the first to accept that I’m a bit of a sceptic with regard to technology. I use it a lot, and I couldn’t really imagine life without Google (even if they are telling the CIA what I’m up to every day) or the myriad apps I use (I’m rather taken with the neat to-do list app from Todoist.com at the moment, and I love playing with Mapmyride), but I don’t see tech as a panacea. I’d go further than Posthegemony, though. I’m not sure that the multiple-choice assessment model is all that helpful in the sciences either, at least not in terms of increasing understanding. I’ve been taking part in a MOOC (well, in a pilot of one), and one of the tasks was to evaluate another course. I chose the Khan Academy maths course.
While I do quite like the presentation style of the Khan videos I’ve seen, the tests it provides seem to do little more than test arithmetical accuracy. Of course accuracy is important in science, but it’s no use without understanding the underlying conceptual structure, and there’s no way (that I can see) of acknowledging that a student has understood a concept, or even partially understood it. The real skill in teaching, in my view, is getting students from that state of partial understanding to complete understanding (and yes, “bigging up” the student’s own role in that journey).
I think computers are a long way from that yet, and what we’re seeing is a “bubble” along the lines of the South Sea Bubble or the tulip fever of centuries past. I do not doubt that when someone develops software that can display the sort of empathy that a human teacher has – that is, the ability to analyse what is wrong and why, as opposed to merely seeing that something IS wrong – we will indeed see considerable profits from online education. But I don’t think we’re anywhere near that yet.
In recent weeks I’ve been involved in the pilot of an online course delivered via the University’s VLE. The course is called “Teaching and Learning in the Digital Age” and it’s time for our first output. We’re to upload a poster outlining the advantages and disadvantages of digital learning. But I sometimes like to think outside the box. So here’s one I thought of that I’m NOT going to submit for assessment.
You may wonder what that has to do with the exercise. Well, in my view, Donne, writing long before anyone had ever heard of a computer, nailed a fundamental problem of digital learning. He was (as poets are wont to do) talking about love, and its associated joys and pains, and how by versifying he could simplify and “fetter” those emotions.
If we substitute “knowledge” for “love”, it strikes me that there’s a parallel. It is possible to reduce knowledge to “numbers”, to fetter it in some kind of digital chain. That can be convenient, certainly; sometimes it might even be necessary. But the act of doing so means that it loses something essential to itself and, as Donne found to his annoyance, it needs to be, and inevitably will be, set free again if it is to “delight many”, if “whole joys” are to be experienced. (That last phrase is pinched from another Donne poem, wholly unsuitable for a respectable blog like this.) So my poster can be summarised thus:
The advantage of digital learning is that it can render knowledge into a convenient package.
The disadvantage of digital learning is that knowledge does not fit into a convenient package.
I plead guilty to exactly the kind of oversimplification that I’m criticising. I know that there is much more to digital learning than just “packaging knowledge”, but I don’t think it is really possible to talk about “digital learning”, or any sort of learning outside the context of the disciplines and without some sort of commitment to a critical pedagogy. The sort of knowledge we deal with in universities doesn’t lend itself to being reduced to numbers, or for that matter to lists of advantages and disadvantages. Instead it invites questions of itself about what it is for, and about the dialectic between the knower, the knowledge, and the discipline. Or it should.
Anyway I like the rhythm of the poem, and if this post gets one more person enjoying Donne I’ll think writing it an hour well spent. And yes, he’s just as entertaining online as he is in print.
I’ve been wondering about the phrase “digital scholarship” recently and what it might mean. I think we have to start by asking why “digital scholarship” is different from “scholarship”. (If it isn’t different, why are we worrying about it?) For me, scholarship is the attempt to conceptualise and theorise about the real world. That’s not quite the same as “learning”, although that is an important subdivision. My definition presents a problem, though, since, as far as science has been able to discover, the only place conceptualisation and theorisation can occur is inside the human mind. So what then is “digital scholarship”? These are very much preliminary thoughts, but three elements leap out at me: metadata, critique, and accessibility.
Scholarship has always been dependent on inputs and outputs, whether oral stories, printed or written texts, plays, experiences, or video productions. It is these that computers are excellent at managing, and I think that is where the notion of “digital scholarship” has arisen – but that’s not quite the same as my definition of scholarship. Yes, computers offer tools that scholars can use to work together and combine their work, but a blog entry, a wiki page, or even a tweet is still a snapshot of the state of a human mind at a given point, and that mind is, even as it is writing, simultaneously considering, accepting, rejecting or developing concepts.
So what is it that makes “digital scholarship” scholarship? (We can all agree that computers make scholarship “digital”, so I’m going to take their role as read in this post, although there’s clearly a lot of innovative work that can be done with them, and there might well be a scholarship emerging in this area.) In the end, I think it comes down to the outputs and inputs (although any artefact, digital or not, can be both input and output depending on how it is used). But an important quality of these artefacts is that they are inanimate. That means that they can only be accessed through metadata. Yes, they can be changed, but unlike a mental concept they are not changed by immediate experience, nor can they question themselves. That means digital scholarship must concern itself with metadata, since digital artefacts are less visible: the sort of intellectual serendipity you get from wandering around a library is hard to replicate in a digital environment. There are some metadata tools (e.g. RSS, and some of the work going on around games, virtual worlds and simulators) that are qualitatively different from analogue tools and can alert users to relevant concepts, but they’re still reliant on an accurate description of the data in the object itself, which leads into the second of my three aspects: critique.
There is a second quality of digital artefacts, and one that is easy to miss. Much as we like to convince ourselves that they, especially “open” ones, are free, they are, and will always remain, commodities, and as such still subject to the exchange relations inherent in a capitalist society. No one would seriously dispute that a book has an exchange value, however small. Digital artefacts too are the product of human labour and thus contain an element of the surplus value that that creates, albeit arguably a much smaller one. While money might not be being exchanged for that value, that doesn’t negate the argument. The things being exchanged are, among other things, the author’s reputation, further labour potential to improve the product and, of course, a small element of the labour inherent in the production of the hardware, software and server time necessary to use the artefact. Essentially an open digital artefact is an unfinished product. Which means the second area of concern for digital scholarship is critique. Critique has always been an essential aspect of scholarship, but the absence of peer review in digital production makes it much more important.
Finally, there is the question of accessibility. By “accessibility” I mean the ways in which users use a technology. Clearly, there are some physical, social and economic barriers to using a technology, and any serious scholarship has to concern itself with the potential for exclusion that those barriers present. But digital technologies also offer multiple ways of doing things, and human beings, being what they are, inevitably find ways of using new tools to do things in unexpected ways – ways which may appear “wrong” to the original provider. (It is, of course, true that sometimes these ways of doing things are actually wrong, in which case the user has to either seek assistance or give up.) Since learning often involves making mistakes, I’m not comfortable with phrases like “digital literacy” if that means “doing things in the way we, as content producers, think they should be done”. So I’d argue that digital scholarship has to concern itself with the issue of accessibility in its broadest sense, much as traditional scholarship has concerned itself with issues of publication.
So, for me digital scholarship shares many values and ideas with traditional scholarship but places a greater emphasis on how ideas are described and accessed, and needs to be even more sceptical of the validity of expressed ideas, and much more involved with how people use technology. The next question to ask is what that would look like in practice. But that’s for another day.
First up was Neil Ringan, from the host university, talking about their JISC-funded TRAFFIC project. (More details can be found at http://lrt.mmu.ac.uk/traffic/ ) This project isn’t specifically about e-submission, but is more concerned with enhancing the quality of assessment and feedback generally across the institution. To this end they have developed a generic end-to-end eight-stage assignment lifecycle, starting with the specification of an assessment, which is relatively unproblematic, since there is a centralised quality system describing learning outcomes, module descriptions, and appropriate deadlines. From that point on, though, practice is by no means consistent. In stages 2–5 – setting assignments, supporting students in doing them, methods of submission, and marking and production of feedback – different practices abound. Only at stage 6, the actual recording of grades, which is done in a centralised student record system, does consistency return. We return to a fairly chaotic range of practices at stage 7, the way grades and feedback are returned to students. The TRAFFIC project team describe stage 8 as “ongoing student reflection on feedback and grades”. In the light of debating whether to adopt e-submission, I’m not sure that this really is part of the assessment process from the institution’s perspective; obviously, it is from the students’ perspective. I can’t speak for other institutions, but this cycle doesn’t sound a million miles away from the situation at Lincoln.
For me, there’s a ninth stage too, which doesn’t seem to be present in Manchester’s model: what you might call the “quality box” stage. (Perhaps it’s not present because it doesn’t fit the idea of an “assessment cycle”!) I suppose it is easy enough to leave everything in the VLE’s database, but selections for external moderation and quality evaluation will have to be made at some point. External examiners are unlikely to regard being asked to make the selections themselves with equanimity, although I suppose it is possible some might want to see everything that the students had written. Also, of course, how accessible are records in a VLE five years after a student has left? How easy is it ten years after they have left? At what point are universities free to delete a student’s work from their record? I did raise this in the questions, but nobody really seemed to have an answer.
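To fix the model in my own head, here’s a sketch of the lifecycle (Python; the stage names are my paraphrases, not the TRAFFIC project’s official labels, and stage 9 is my own addition, not part of Manchester’s model):

```python
from enum import Enum

class AssignmentStage(Enum):
    # Stages 1-8 paraphrase the TRAFFIC lifecycle as I understood it;
    # stage 9 ("quality box") is my own suggestion.
    SPECIFICATION = 1        # centralised quality system: consistent
    SETTING = 2              # practice varies from here...
    STUDENT_SUPPORT = 3
    SUBMISSION = 4
    MARKING_AND_FEEDBACK = 5
    GRADE_RECORDING = 6      # centralised student record: consistent
    RETURN_TO_STUDENT = 7    # ...and chaos returns
    STUDENT_REFLECTION = 8
    QUALITY_BOX = 9          # external moderation, retention, deletion

CONSISTENT = {AssignmentStage.SPECIFICATION, AssignmentStage.GRADE_RECORDING}

for stage in AssignmentStage:
    label = "consistent" if stage in CONSISTENT else "varies"
    print(f"{stage.value}. {stage.name.lower():22} {label}")
```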
Anyway, I’m drifting away from what was actually said. Neil made a fairly obvious point (which hadn’t occurred to me up to that point) that the form of feedback you want to give determines the form of submission. It follows that e-submission may be inappropriate in some circumstances, such as the practice of “crits” used in architecture schools. At the very least you have to make allowances for different, but entirely valid, practices. This gets us back to the administrators, managers and students versus academics debate I referred to in the last post. There is little doubt that providing e-feedback does much to promote transparency to students and highlights different academic practices across an institution. You can see how that might cause tensions between students who are getting e-feedback and those who are not, and thus have both negative and positive influences on an institution’s National Student Survey results.
Neil also noted that the importance of business intelligence about assessments is often underestimated. We often record marks and performance, but we don’t evaluate when assessments are set, how long students are given to complete them, or when deadlines occur. (After all, if they cluster around Easter and Christmas, aren’t we making a rod for our own backs?) If we did evaluate this sort of thing, we might have a much better picture of the whole range of assessment practices.
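As a trivial illustration of the kind of analysis Neil had in mind, here’s a sketch (Python, with invented deadline dates) that counts deadlines per teaching week – exactly the Easter-and-Christmas clustering question:

```python
from collections import Counter
from datetime import date

# Invented deadlines, for illustration only.
deadlines = [
    date(2013, 12, 13), date(2013, 12, 16), date(2013, 12, 16),
    date(2014, 4, 11), date(2014, 4, 14), date(2014, 5, 2),
]

# Bucket by ISO week number to spot clustering.
per_week = Counter(d.isocalendar()[1] for d in deadlines)

for week, n in sorted(per_week.items()):
    print(f"week {week}: {n} deadline(s)" + (" <- cluster?" if n > 1 else ""))
```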
Matt’s main point was that staff at Exeter were strongly wedded to paper-based marking, arguing that it offered them more flexibility, so the system needed to be attractive to a lot of people. To be honest, I wasn’t sure that the tool offered much more than the Blackboard Gradebook already offers, but as I have little experience of Moodle, I’m not really in a position to know what the basic offering in Moodle is like.
Some of the features Matt mentioned were offline marking and support for second moderators, which, while a little basic, are already there in Blackboard. One feature that did sound helpful was that personal tutors could access the tool and pull up all of a student’s past work and the feedback and grades they had received for it. Again, that’s something you could, theoretically anyway, do in Blackboard if the personal tutors were enrolled on their sites. (Note to self – should we consider enrolling personal tutors on all their tutees’ Blackboard sites?)
Exeter had also built a way of providing generic feedback into their tool, although I have my doubts about the value of what could be rather impersonal feedback. I stress this is a personal view, but I don’t think sticking what is effectively the electronic equivalent of a rubber stamp on a student’s work is terribly constructive or helpful to the student, although I can see that it might save time. I’ve never used the Turnitin rubrics, for example, for that reason. Matt did note that they had used the Turnitin API to simplify e-marking, although he admitted it had been a lot of work to get it to work.
Oh dear. That all sounds a bit negative about Exeter’s project. I don’t mean to be critical at all. It’s just that it is a little outside my experience. There were some very useful and interesting insights in the presentation. I particularly liked the notion of filming the marking process which they did in order to evaluate the process. (I wonder how academics reacted to that!)
All in all, a very worthwhile day, even if it did mean braving the Mancunian rain (yes, I did get wet!). A few other points were made that I thought worth recording, though I haven’t worked them into the posts yet.
• What do academics do with the assignment feedback they give to their current cohort? Do they pass the information on to the colleagues teaching those students next? Does anybody ever ask academics what they do with the feedback they write? We’re always asking students!
• “e-submission the most complex project you can embark on” (Gulps nervously)
• It’s quite likely that the HEA SIG (Special Interest Group) is going to be reinvigorated soon. We should join it if it is.
• If there is any consistent message from today so far, it is “Students absolutely love e-assessment”
Finally, as always, I welcome comments (if anyone reads this!), and while I don’t normally put personal information on my blog, I have to go into hospital for a couple of days next week, so please don’t worry if your comments don’t appear immediately. I’ll get round to moderating them as soon as I can.
On Friday I returned to my roots, in that I attended a workshop on e-submission of assignments at Manchester Metropolitan University, the institution where my professional career in academia started (although it was Manchester Polytechnic back then). The day was a relatively short one, consisting of four presentations, followed by a plenary session. That said, this is a rather long blog post because it is an interesting topic, which raises a lot of issues so I’m splitting it into two in order to do it full justice. I’m indebted to the presenters, and the many colleagues present who used their Twitter accounts for the following notes (if you wish to see the data yourself search Twitter for the #heahelf hashtag).
The reason I went along is that there is a great deal of interest in the digital management of assessment. One person described it as a “huge institutional wave about to break in the UK”, and I think there is probably something in that. How far the wave is driven by administrative and financial requirements, and how far by any pedagogical advantages it confers, was a debate that developed as the day progressed.
The first presenter, Barbara Newland, reporting on a Heads of E-learning commissioned research project offered some useful definitions.
E-submission – online submission of an assignment
E-marking – marking online (i.e. not on paper)
E-feedback – producing feedback in audio, video or online text
While the discussions touched on all of these, the first, e-submission, was by far the dominant topic. The research showed a snapshot of current HE institutional policy, which indicated that e-submission was much more common than the other elements, although it has to be said that very few UK institutions have any sort of policy on any aspect of digital assignment management. Most of the work is being done at the level of departments, or by individual academic staff working alone.
Developing an institutional policy does require some thought, as the digital management of assessment can affect nearly everyone in an institution, and many ‘building blocks’ need to be in place. Who decides whether e-submission should be used alone, or whether hard copies should be handed in as well? Who writes, or more accurately re-writes, the university regulations? Who trains colleagues in using the software? Who decides which software is acceptable? (Some departments and institutions use Turnitin, some use an institutional VLE like Blackboard or Moodle, some are developing stand-alone software, and some use combinations of one or more of these tools.)
A very interesting slide on who is driving e-submission adoption in institutions raised the rather sensitive question of whether the move to e-assessment is being driven by administrative issues rather than pedagogy. The suggestion was that the principal drivers are senior management, learning technologists and students, rather than academic staff, and this theme emerged in the next presentation, by Alice Bird, from Liverpool John Moores University, which seems to be one of the few (possibly the only) UK HEIs to have adopted an institution-wide policy. Their policy seems to be that e-submission is compulsory if the assignment is a single file, in Word or PDF format, and is greater than 2,000 words in length. Alice suggested that for most academic staff, confidence rather than competence had proved to be the main barrier to adoption. There was little doubt that students had been an important driver of e-submission, along with senior management at Liverpool. One result of this was a sense that academics felt disempowered, in that they had less control over their work. She also claimed that there had been a notable decline in the trade union power base relative to the student union. Of course, that’s a claim that needs unpicking: it would depend very much on how you define “power” within an institution, and the claim wasn’t really backed up with evidence. Still, it is an issue that might be worth considering for any institution that is planning to introduce e-submission.
Although there were certainly some negative perceptions around e-submission at Liverpool, particularly over whether there were any genuine educational benefits, Alice’s advice was to “just do it”, since it isn’t technically difficult. As a colleague at the meeting tweeted, the “just do it” approach has merits in that previously negative academics can come on board, but it may also further alienate some. I think that’s probably true, and that alienation may be increased if the policy is perceived as having predominantly administrative, as opposed to educational, benefits.
She did point out that no single technological solution had met all their needs, and they’d had to adapt, some people using the VLE (Blackboard, in their case), some using Turnitin. What had been crucial to their success was communication with all their stakeholders. Certainly e-submission is popular with administrators, but there are educational benefits too. Firstly, feedback is always available, so students can access it when they start their next piece of work. Secondly, electronically provided feedback is always legible. That may sound a little facetious, but it really isn’t: no matter how much care a marker takes with their handwriting, if the student can’t read it, it’s useless. Thirdly, students are more likely to access their previous work and develop it if it’s easily available.
There are tensions between anonymous marking and “feedback as dialogue”, some tutors arguing that a lack of anonymity is actually better for the student. Another difficulty, in spite of the earlier remarks about confidence, was some confusion over file formats, something we’ve experienced at Lincoln with confusion between different versions of Word. As another colleague suggested, this is a bit of a “threshold concept” for e-submission: we can’t really do it seamlessly until everyone has a basic understanding of the technology. I suppose you could say the same about using a virtual learning environment like Blackboard. Assessment tends to be higher stakes, though, as far as students are concerned. They might be annoyed if lecture slides don’t appear, but they’ll be furious if they believe their assignments have been lost, even if they’ve been “lost” because they themselves have not correctly followed the instructions.
There was also a bit of a discussion about the capacity of shared e-submission services like Turnitin to cope if there were a UK-wide rush to use them. (Presumably the rush wouldn’t just come from the UK, either.) There have certainly been problems with Turnitin recently, which distressed one or two institutions who were piloting e-submission with it more than somewhat!
The afternoon sessions, which I’ll summarise in the next post, focussed on the experience of e-submission projects at two institutions, Manchester Metropolitan University and Exeter University.