Writing Behat scripts!

Here at the OU’s learning systems development team we’ve been watching others work with Moodle and Behat for some time.  I first saw it demoed at the Perth Hackfest about 18 months ago.  We’ve experimented with automated testing with Selenium and Ranorex in the past, but with little success, so we’ve been a little cautious about jumping on the Behat bandwagon.

But the stars have aligned to change that.  We have recently updated our codebase to Moodle 2.7.x, which heralds the usual spree of regression testing to make sure that nothing broke.  That, along with the fact that Moodle HQ are now looking for Behat scripts as part of integration testing, prompted Sam Marshall to suggest that we should use this opportunity to write Behat tests for our plugins.

It seems like a good idea to write the scripts to perform the regression testing that we’d otherwise be doing by hand.  Hopefully that will make this process quicker for 2.8.x!  So we decided to dedicate the entire team to a fortnight’s Behat sprint.  People will be planning, writing and/or running automated tests all at the same time, so we can learn from each other as we go along.

Our aims for the fortnight are:

  1. to generate automated Behat tests for a significant proportion of OU plugins that we can use in future;
  2. to build the know-how within the team to create and run those tests; and
  3. to establish writing tests like this as a standard part of any development.
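For anyone who hasn’t seen one, a Behat test is a plain-English Gherkin scenario backed by step definitions that Moodle supplies.  A minimal sketch of the kind of scenario we’ll be writing – the exact step wording varies between Moodle versions, so treat this as illustrative rather than copy-paste ready:

```gherkin
@ou @javascript
Feature: Regression-test an OU plugin
  In order to be sure the 2.7 upgrade broke nothing
  As a teacher
  I need to see the course working as before

  Scenario: Teacher can open the course
    Given the following "users" exist:
      | username | firstname | lastname |
      | teacher1 | Terry     | Teacher  |
    And the following "courses" exist:
      | fullname | shortname |
      | Course 1 | C1        |
    And the following "course enrolments" exist:
      | user     | course | role           |
      | teacher1 | C1     | editingteacher |
    When I log in as "teacher1"
    And I follow "Course 1"
    Then I should see "Topic 1"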

How will we do?  Check back in a fortnight to find out!


As I mentioned in a previous post, I see software as a creative process, an art form.  So this is a post about the other creative art form that I enjoy, bobbin lace making … like this.

A small part of a Bedfordshire lace edging, pattern by Christine Springett from an 18th C collar, completed 2014

One of the reasons my brain starts to fry when I’m asked about suitable metrics to measure the performance of a development team is that I’m struggling to answer the question “how do you measure art?”  So, how do I measure my lace?

Well sometimes with a tape measure, sure … wedding garters tend to need to be about 40 inches to get the ruffles right.

But there are other ways too.  You could count the stitches, but that would be a bit pointless.  We often count the number of bobbins used, and whether the threads “run” or have to be sewn together in separate sections; these are signs of the complexity of the piece.  And yes, I can count how long it takes.  A garter normally takes about 1.5 hours per inch; the piece above was worked slowly over about 2 years.  Of course, I often have an idea of when I want something finished by – I may give my lace away, so the deadline is Christmas or a birthday – so hitting the target date (actual vs estimate) is sometimes meaningful.  And sometimes not – I did several other pieces in the 2 years I spent on the piece above, because they had a higher priority.  Does that mean the edging is “bad”?  No.

You can also count the number of times I rework parts of a pattern to get it right.  Or the number of defects left in the finished article.  And you can turn it over to look at the back, and see how neatly it is finished (like doing a code review).

The things that are most important to me are more subjective … Does it have “wow” factor?  Does it do what I (or the person receiving it) want it to do?  What do my lace-maker friends think of it? These fall more into the realm of user & peer feedback.

And finally, did I learn something?  My first attempt at Bedfordshire lace was not very good, but this piece is the culmination of 20+ years of practice.  I got better (neater, quicker, more complex, more wow) along the way.  Some learning pieces come off the pillow and languish in a cupboard because we are not proud of them.  But they were still worth it, because we learned.

When I start a piece, I know what’s going to be important.  Right now, I’m doing a nativity set for Christmas, a bit like this.  So, this is a simple lace that I’ve done many times before.  Nothing to learn.  Neatness and wow factor are important because they’ll be on display every year, as is durability.  Time is important if I want them on display this year.  The next piece on my backlog though is a first go at Flanders.  For that, time will be meaningless, neatness and wow are unlikely unless existing skills transfer readily, but learning is critical.

Does this teach us something about software metrics?  We can judge the complexity of a request, and we can do code & peer review and count defects after release.  These are all worthwhile in my view.  But I’d like to see us judge what is important at the start of the piece, and measure against that at the end.

So “add the permission control for roll-out reports to faculty” is something we’ve done many times before: nothing to learn, neatness and durability important, time critical.  But “make annotation work on PDFs and images” will be an innovation piece – a work of art.  It has no time deadline, will definitely provide wow and requires significant learning.  Does it really make sense to estimate how long creation takes and then judge ourselves against it?  I think not.

Last night I went to the valedictory lecture of Simon Buckingham Shum as he leaves the OU for the other side of the world.  Simon gave an interesting lecture about his research over the years at the Knowledge Media Institute, focussing on the knowledge mapping tools that we integrated into the first iterations of OpenLearn LabSpace.

During the presentation Simon referred to KMi researchers and developers as inventors.  That word got me thinking.  He talked about software as a creative process, which is something I firmly believe in – for me it is an art form.  (Look out, if I remember, for a follow-up blog post on metrics and art, but that’s a tangent from what I want to say today.)

I have been honoured over the years to work on many innovation projects … the first generations of intranets, accessibility in Macromedia Director, OpenLearn and web annotation.  The part I’ve enjoyed most is the invention, the creative process of working out what you want to do and then bending the computer to your will.  For me, the technical challenge is enough, provided there’s a customer, usually an enthusiastic academic, who can tell me the value of the work.

OK, so most of my projects have been greeted by “that’s a colossal waste of time”, “why would we want to do that”, “who’s going to use that” etc.  But that’s the point with innovation / invention, isn’t it?  Sometimes it fails, or lasts a while then gets overtaken by the next new thing.  So my early intranets are long gone, Macromedia finally put out their own accessibility libraries, and the jury is still out on whether students are ready for web annotation yet.  For me, all these projects get boring when the invention part is over.  As they enter the mainstream, they tend to lose their inventiveness and become more risk-averse.  At that point, there’s less creativity, less art and less fun.  And a massive backlog of little things to keep you busy.

There’s been a lot of discussion here recently about innovation and risk.  I think it was an inevitable response to the recent funding changes that we tightened our belts and became much more careful where we spent our money.  It’s no bad thing that there are always people checking whether our project is a colossal waste of time or not … because it keeps us honest.

But I’d like to find a way to bring the invention, the innovation back into my life and the developers around me.  We should all have something creative and fun to do.  Even if we are reinventing the wheel, ours should be the best darn wheel out there.

I look forward to working out how we do this.  Maybe it’s 20% time, maybe a “dating agency” for academics with ideas and developers with enthusiasm, maybe rotation through an R&D team?  Maybe you have an idea you can share?





I went to the CETIS annual conference this week.  One of the sessions that I didn’t get to was entitled “ebooks: the learning platform of the future”.  I needed to clone myself that morning – all of the sessions looked interesting.  So I had a quick chat over coffee with someone who did go to that session.

Warning – when it comes to e-books, I know not of what I speak…

… and neither did the person I was chatting to.  The key message he took away was that e-books do other stuff too – media, discussion, assessment.  Isn’t that cool for a book!

My immediate reaction was “VLEs do that stuff too”.  So my naive question – why have VLEs at all, why not just have ePubs?  I guess that explains the title of the presentation!

One obvious answer may well be that the discussion and assessment tools in e-books may not be as “good” as those in VLEs (if you disagree – remember the warning please!)  I’ve lost count of the times we’ve been told about these wonderful things where the assessment turns out to be just multiple choice with very little flexibility for feedback.

So I tweeted my initial thoughts and got back a couple of good responses:

from @tjhunt You can have a discussion waiting for the lift. You still need meeting rooms.

from @bmuramatsu still need a repo for assets & assessments–you don’t want it all in epub or vle–& a plat to collect, analyze & display results

Later in the day, reflecting on those and @audreywatters’ comment during her keynote about VLEs – or LMS as they’re known across the pond – being Management (that’s what the M stands for) and admin systems, I started to think about the increased demand we’re seeing for student dashboards: everything all in one place, summarised, neatly packaged, easy to find.  And at the conference it became clear that it isn’t just OU students that have that requirement.

But what does that tell us about our VLEs?  Aren’t they meant to be the spine that everything hangs off?  The one place students find their content, assessment, grades, discussions?  If they want another place for that stuff, are they really saying that the VLE isn’t good enough?

How should that change the way we build the next generation of (better integrated, personalised, flexible …) systems for our students?

… when it’s a definition!

OK, so I’m not very good at jokes.  More accurately, this is a question of what sorts of types or classes of annotation exist.

In OU Annotate, we use “annotation” as a generic term.  Annotations are either bookmarks, i.e. about a whole page, or highlights, i.e. about a specific section of a page.

Looking into the commonly used terms for comments and tags in our system, we can see that our students are using some shared taxonomies for their annotations.  Some of this may be dictated by their courses, such as Describe, Understand, Enact in DD206.  However, we can draw out a number of terms which can be seen as more generic types of annotation.  Such as…

  • Advantage / Disadvantage
  • Concept / Key word
  • Conclusion / Summary
  • Definition / Key Word
  • EMA __ / TMA __ / iCMA __
  • Example / Evidence
  • Important / Useful
  • Key quote/ reference/ word/ point
  • Qualitative/ Quantitative
  • Question / Answer
  • Theory
  • To do
  • Yes / No

We’ve been thinking about supporting these more effectively within the system.  Imagine you could say that your annotation raises a Question; a big ? icon would then appear beside it everywhere, and you could search to see only your question annotations.

This isn’t very difficult from a programming point of view, but the tricky part is deciding on the types of annotation.  We’ll probably do something to simplify the above list.  I’m not sure “To do” really deserves a place, and “EMA” etc should be replaced with something that doesn’t include OU acronyms for assessment methods.
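To give a flavour of how little programming this needs, here is a minimal sketch – in Python rather than the PHP our system is actually written in, and with invented type names and field names – of tagging annotations with a type and filtering on it:

```python
# Sketch: each annotation carries an optional "type" drawn from a fixed
# list, each type has an icon, and users can filter to a single type.
# All names here are invented for illustration, not OU Annotate's real schema.

ANNOTATION_TYPES = {
    "question": "?",       # the big ? icon from the example above
    "definition": "Def",
    "key-quote": "\u201c\u201d",
    "to-do": "\u2713",
}

def filter_by_type(annotations, wanted):
    """Return only the annotations of the requested type."""
    return [a for a in annotations if a.get("type") == wanted]

annotations = [
    {"text": "What does this mean?", "type": "question"},
    {"text": "A VLE is a virtual learning environment.", "type": "definition"},
    {"text": "Check the reading list.", "type": "to-do"},
]

# Show only the question annotations.
questions = filter_by_type(annotations, "question")
print([a["text"] for a in questions])
```

The hard part, as above, is not this code but agreeing the list of types in the first place.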

My question to you, dear reader, is what do you think would be useful as a “type” of annotation?  Is this list a good start (after all we can modify it later if people don’t use them)?  Is there some academic protocol for this sort of thing that I don’t yet know about?

Every now and then I take a look at other web annotation tools and see how OU Annotate stacks up.  One of the new features I’ve noticed creeping into some tools is support for creating references, bibliographies and other such study tools.

Seems like this would be a good feature for OU Annotate and I’d like to take a closer look.

But the thing is, I’m not “academic”.  OK, I have a computer science degree, but it involved lots of writing of code and not many essays, which means I’m not really that clear on what students would want from their notes to support them in this area.  I did find this page from the Warwick University library on the difference between citations, references and bibliographies very helpful.

I can see that it would be great to gather a set of annotations and export them as references / bibliography (is there any real difference in this context?).

And how useful would it be to get a structured citation for a single annotation, or set of annotations?  After all, the system knows the fragment of text the annotation refers to, so it could generate “Gray (2014) said ….”.  But perhaps people would want to do that for themselves?
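Since an annotation already records the page and the selected fragment, generating a rough citation string like that is mechanical.  A minimal sketch, with invented field names and a deliberately simplistic format (real referencing styles such as Harvard or APA have many more rules):

```python
def rough_citation(annotation):
    """Build a crude 'Author (Year) said "..."' string from an annotation.
    The field names here are invented for illustration; a real system would
    pull author/year from page metadata, which is itself an open problem."""
    return '{author} ({year}) said "{fragment}" ({url})'.format(**annotation)

note = {
    "author": "Gray",
    "year": 2014,
    "fragment": "annotation is a study skill",
    "url": "http://example.com/page",
}
print(rough_citation(note))
# Gray (2014) said "annotation is a study skill" (http://example.com/page)
```

The export-format question is then just a matter of applying this over a selected set of annotations.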

And what about referencing an annotation?  This could get like looking in a mirror in a mirror… would you want to cite that Fred said x in his annotation about Jane saying y on webpage z?

Would you need anything “extra” on an annotation to remind you that you wanted it as a reference? Or am I over-thinking this and it’s just an export format that applies to any annotation?

So I was hoping that my readers could help me.  Perhaps you could point me at a good explanation of how students gather references etc for essays, or other stuff they’re recommended to do with written notes?

All ideas on the subject welcome!

… that seems to sum up the past 8 years of my life!  Perhaps I ought to go.


But I have a few reservations (before even considering whether my employers would fund it and whether my husband will let me trot off across the globe again without him), so I thought I’d write about them and see if I could get some wider input.

One of the reasons this workshop tempts me is that the organizers are very interesting.  Lumen Learning, co-founded by the inspirational David Wiley, aims to facilitate broad adoption of OER.  Hypothes.is plans to be an open platform for annotation, currently in closed beta.  I know that I could learn a lot from these people to influence the direction of our annotation project here at the Open University; we’re particularly interested in extending academic tools such as bibliography and reference extraction into a future release, and in classifications of annotation (question, quote, rebuttal …).  I’d like to see whether Hypothes.is could be an alternative foundation for OU Annotate at some point.  OU Annotate is currently a PHP and jQuery tool based on the SilverStripe framework, using Rangy for locating annotations on an HTML page.  We have aspirations to annotate PDFs, ePubs, images … at some point too.

But this workshop is to be a very small group with specific goals, and I’m not confident that I can be as helpful as perhaps others could be – attendance needs to be a two-way process, give and receive.  My caution about putting myself forward comes from two places.

First, I don’t have much (any) influence over OER at the OU any more.  OK, I still know the people involved and the technologies the systems are based on, and I can still get my hands on some of the codebase for prototyping, but other parts of the codebase are beyond my reach and I couldn’t guarantee to get any prototypes into production.

Second, OU Annotate is a closed-source, closed annotation system at the moment.  We did once plan to open-source it, but things have gone quiet on that front.  And although the technology supports sharing annotations publicly, at the moment “public” means the OU community rather than the whole world.

So, what do you think?  Could I be helpful or would I just be taking up a space?

