… that seems to sum up the past 8 years of my life!  Perhaps I ought to go.

http://oerannotation.wordpress.com/oer-annotation-summit-2014/

But I have a few reservations (before even considering whether my employers would fund it and whether my husband will let me trot off across the globe again without him), so I thought I’d write about them and see if I could get some wider input.

One of the reasons this workshop tempts me is that the organizers are very interesting.  Lumen Learning, co-founded by the inspirational David Wiley, aims to facilitate broad adoption of OER.  Hypothes.is plans to be an open platform for annotation, and is currently in closed beta.  I know that I could learn a lot from these people to influence the direction of our annotation project here at the Open University: we’re particularly interested in extending academic tools such as bibliography and reference extraction into a future release, along with classifications of annotation (question, quote, rebuttal…).  I’d like to see whether Hypothes.is could be an alternative foundation for OU Annotate at some point.  OU Annotate is currently a PHP and jQuery tool based on the SilverStripe framework, using Rangy to locate annotations on an HTML page.  We have aspirations to annotate PDFs, ePubs, images and more at some point too.

But this workshop is to be a very small group with specific goals, and I’m not confident that I can be as helpful as others might be – attendance needs to be a two-way process, give and receive.  My hesitation about putting myself forward comes from two places.

First, I don’t have much (any) influence over OER at the OU any more.  OK, I still know the people involved and the technologies the systems are based on, and I can still get my hands on some of the codebase for prototyping, but other parts of the codebase are beyond my reach and I couldn’t guarantee to get any prototypes into production.

Second, OU Annotate is a closed-source, closed annotation system at the moment.  We did once plan to open-source it, but things have gone quiet on that front.  And although the technology supports sharing annotations publicly, at the moment “public” means the OU community rather than the whole world.

So, what do you think?  Could I be helpful or would I just be taking up a space?

… or the coding equivalent to wearing your best undies in case you get run over and a doctor sees them (at least that’s what my gran always used to say!)

Back in the dim and distant past we thought we might open source OU Annotate.  We’ve never done it, for a number of reasons – some political, some resource-related, and to a limited extent some technical.  I still hold out the hope that our work might be useful to someone else one day, but this isn’t something we’re really discussing internally at the moment.

So, when I make a coding change that would benefit an open-source community around OU Annotate, it could be seen as the coding equivalent of wearing your best undies.  In the main, no-one but you is going to see them, and anyway it makes you feel better that everything is lovely.  I’ve probably stretched this metaphor about as far as it’ll go … the other, more practical, reason for doing this sort of thing is that it will make the code easier for you to maintain in future.

Recently I added language pack support to the bookmarklet.  I posted ages ago some thoughts about how to do this, so I thought I’d share what I eventually did.

1.  I declared a global default language, and set it to en_GB.  This is done in the defaults definition script that also holds things like the base URL.

2.  Where all the JavaScript libraries are included in the bootstrap, also include a lang/xxx.js file, where xxx is the globally defined default language.

3.  When the user’s details are pulled across from the Annotate database, include the language defined in their profile.  Just before the annotations are loaded onto the screen, if that language differs from the system default, include the appropriate lang/xxx.js file for it.  By doing this before anything is displayed, the first things the user sees will be in the correct language, and we’re reusing an existing AJAX loading() deferred period.
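Steps 1–3 might look something like this sketch.  To be clear, every name here (ANNOTATE_DEFAULTS, langPackUrl and so on) and the URL are invented for illustration – this isn’t the real OU Annotate code:

```javascript
// Step 1: global defaults, alongside things like the base URL.
// All names and the URL here are hypothetical.
var ANNOTATE_DEFAULTS = {
    baseUrl: 'https://example.org/annotate',
    lang: 'en_GB'
};

// Decide which extra lang pack (if any) needs fetching for a user.
// Returns null when the pack loaded at bootstrap (step 2) already matches.
function langPackUrl(userLang, defaults) {
    if (!userLang || userLang === defaults.lang) {
        return null;
    }
    return defaults.baseUrl + '/lang/' + userLang + '.js';
}

// Step 3: in the browser, append a <script> for the user's pack just
// before the first render, e.g. during the existing loading() deferred.
function loadUserLangPack(userLang, defaults, doc) {
    var url = langPackUrl(userLang, defaults);
    if (url) {
        var s = doc.createElement('script');
        s.src = url;
        doc.head.appendChild(s);
    }
    return url;
}
```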

4.  I defined a new global function _t which takes a language string identifier as its parameter, e.g. _t("TOOLBAR_SPLASH").  This function looks up and returns the appropriate string from the language pack file.  A second parameter can pass through an array of variables to be substituted into the body of the string.

5.  The language pack file is a simple JavaScript associative array (an object keyed by string identifier).  If a variable is needed, the placeholder {a[0]} is used, where 0 is the index of the item to take from the _t function’s variables parameter.

6.  Painstakingly identify and convert all the strings littered through the bookmarklet code and views (it’s MVC) to use the new function, and put the strings in the language pack.
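Steps 4–6 can be sketched as follows – again, the string identifiers and wording are made up, not the real pack contents:

```javascript
// Step 5: the language pack is a plain object keyed by string identifier.
// {a[0]}, {a[1]} ... are placeholders for substituted variables.
var LANG_STRINGS = {
    TOOLBAR_SPLASH: 'Loading annotations...',
    NOTE_COUNT: 'This page has {a[0]} notes by {a[1]}.'
};

// Step 4: global lookup function.  `vars` is an optional array whose
// items replace the matching {a[n]} placeholders in the string body.
function _t(id, vars) {
    var str = LANG_STRINGS[id];
    // No fall-back to English: a string missing from an alternative
    // pack simply stays undefined, matching the main system.
    if (str && vars) {
        for (var i = 0; i < vars.length; i++) {
            str = str.replace('{a[' + i + ']}', vars[i]);
        }
    }
    return str;
}
```

So a hard-coded string in a view becomes a call such as _t('NOTE_COUNT', [3, 'Sam']) (step 6).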

Obviously much of this is shamelessly copied from the way Moodle’s, or indeed Silverstripe’s, internationalisation works.  One thing I prefer about Moodle is that it has a fall-back-to-English option if a specific string is missing from the alternative language pack.  I didn’t bother coding this into my bookmarklet JavaScript i18n support because Silverstripe doesn’t have it for the main OU Annotate system.

One extension could be to control all the language strings for both the main OU Annotate system and the bookmarklet from one language pack.  I did consider doing this.  Basically you’d add:

7.  Load the content of the bookmarklet JavaScript file through an API call back to the main OU Annotate system.

I decided against this because:

a) it’s another AJAX call between the user and the server, which takes time;

b) if something goes wrong with the AJAX calls, I can’t give errors in the appropriate language;

c) there’s actually very little language-string sharing between the two parts of the system.

This is probably the only change that will be included in our June release, but from then OU Annotate will be out there with her new undies on and hopefully you’ll never even notice!

… boring, but worth doing in the end.  Maybe.

This week Moodle 2.6.2 was released, so one of my colleagues is merging it into the OU Moodle codebase.  Last week Silverstripe 3.1.3 was released, so I’ve been merging it into the OU Annotate codebase.

It’s not thrilling work, but it is probably a good idea, and you feel a sense of smug fulfilment when it’s done (no? – maybe that’s just me then!).

You throw away things that you definitely don’t need or shouldn’t keep.  Shockwave file uploader or five-year-old custard powder, anyone?

But there’s always the risk that you’ll break something by accident or throw something away that turns out to have been useful.  Oops – where did that unit test memory limit increase / bottle of Worcester sauce go?

It’s the risk that worries me and makes me wonder how I should judge whether to take an upgrade or not.

Confession time – the main reason I feel like this is that I introduced a bug in Annotate in December as a result of the 3.1.2 Silverstripe update that locked some students out from their notes.  And I feel bad about that.  If I hadn’t taken the update, then they wouldn’t have had the bug.  I had spent ages merging code and running tests, supported by our test team.  Just think what new shiny thing we could have built if we hadn’t bothered!

Similarly, this month I had to back out of an upgrade to the text-range library that Annotate uses to detect where a user’s highlight goes because it had introduced an IE8-only bug in its latest update. 

We held a retrospective about the first issue, and in the end the decision was that my judgement was sufficient on whether to take an upgrade or not, but that we could beef up our test script to catch the specific error that had occurred should something like that happen again. Of course it never does – next time the bug will be in some other esoteric area that we don’t often test.  Luckily we caught the second one during testing and it has now been resolved by the library’s maintainer so I can take the next release of that one.

Good test scripts and various forms of automated testing can definitely increase confidence when upgrading the libraries your system depends on, as can good processes for marking customised code, but how do you judge whether an update is worth it?

For me there are a number of factors:

  • keep up to date with security patches, even in parts of the system you don’t use
  • take an upgrade every now and then (every 6–12 months) just to make sure you stay on a maintained path
  • take an upgrade when there’s a new feature that we really want

It isn’t common for there to be a new feature in Silverstripe that I really want, because we really only use it as a code framework.  I suppose that’s the heart of my dilemma, and it is very different to our attitude to Moodle. 

So should I just stick with 3.1.3, apply any security patches and leave the rest?  Or should I keep updating “just in case” and work on my processes for checking the incoming changes and ensuring good test coverage? 

What would you do?


I just spent an entire day worrying away at a complex bug, when a single word in a comment sparked a thought, sent me off in a different direction and solved the problem in under an hour.

It’s been said before, and it’ll be forgotten again, so for the record …

Comments are awesome

The bug was in OU Annotate, but it could have been anywhere.  The problem was that when you have multiple annotations in the same place and you click the settings button, the settings panel for the last annotation in the pile opens, not the panel for the annotation you wanted to change.

Each panel has its own object, so code like myComment.setState('Active') should be very specific to that object.  How could it possibly be affecting the wrong one?

It was a mess. I was seriously considering rewriting the whole state management part of the system.  Yuck.

So I started stepping through line by line.  This was fairly new to me, so as an aside: if you don’t use them already, it is definitely worthwhile learning how to use the Chrome dev tools and the JavaScript debugger.

Along the way I noticed the comment “replace myComment with one for the current annotation”.

Replace.

That one word clinched it.  I quickly saw that myComment was scoped at too high a level, so there wasn’t one per comment but one per location.  I could easily alter the scope by moving the definition inside the loop for each comment, and after a good amount of testing everything seems to be back to normal.
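The effect is easy to reproduce in miniature.  In the real system each panel has its own object; this hypothetical snippet (none of these names are the actual code) shows the same shared-variable problem with plain closures:

```javascript
// Buggy shape: one myComment per location, declared outside the loop,
// so every handler ends up bound to the LAST comment in the pile.
function wireUpBuggy(comments) {
    var handlers = [];
    var myComment;                          // shared by the whole loop
    for (var i = 0; i < comments.length; i++) {
        myComment = comments[i];            // "replace myComment" - overwrites it
        handlers.push(function () { return myComment.id; });
    }
    return handlers;
}

// Fix: give each iteration its own myComment so each handler closes
// over its own comment (an IIFE here; let/const also give block scope).
function wireUpFixed(comments) {
    var handlers = [];
    for (var i = 0; i < comments.length; i++) {
        (function (myComment) {             // one myComment per comment
            handlers.push(function () { return myComment.id; });
        })(comments[i]);
    }
    return handlers;
}
```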

So the moral of this tale is to remember that comments can help you, and those who come after you, to clarify what is meant to be happening.  Whenever you do something a bit fiddly, don’t forget to comment it – especially if it relies on code elsewhere in the file or system, so you can’t see the two in one place.

Oh I know code should be self-documenting.  Yes that’s important too.  There shouldn’t be as many comments as lines of code, and comments like  $i++; // increment $i are useless.

But to those developers reading this (and to me, to force me to remember): anything a bit complex needs a comment, and the words need to be useful.  “Replace”, not “Set up” or “Declare”, made the world of difference in this instance.

They say it is good to walk a mile in someone else’s shoes.  Recently I’ve been involved in a requirements gathering exercise to provide new services for developers in our office.  It has been an interesting and challenging experience which I thought would be worth reflecting upon.

One of the bits of my job which I enjoy the most is to gather user requirements and draw them together into a functional specification.  Part of that process which I feel is important is to challenge where requirements or ways of working seem less than ideal.  The introduction of a new, or improved, service is a perfect opportunity to improve business practices.

Funny how that doesn’t feel quite so nice when you’re on the receiving end.

So I wanted to reflect upon why I feel so tense about this, and how I can try to ensure that other projects where I’m gathering requirements don’t make my client feel bad.

Part of this comes down to good interpersonal skills.  It is important to really think about how you say things to get your message across, and to actively listen to what’s being said.  I did have a few other ideas, though I’m not sure any of this is rocket science.

  • Time:  The project I’m working on is under considerable and increasing time pressure.  It is however important to ensure there is enough time for the requirements analysis phase so all stakeholders feel consulted.  Otherwise you risk the wrong solution.  And rushed people tend to respond with hostility.
  • Scope:  Sometimes those odd practices are working around problems that exist elsewhere.  If they can’t be addressed by the current project, then they simply have to be acknowledged and a best-fit solution proposed.  This isn’t anyone’s fault, but it tends to make people feel defensive, which isn’t a great working environment.
  • Terminology:  Sometimes there are phrases that you’d think both sides would understand in the same way so you assume clarity.  It is worth sanity checking that, because it isn’t always true.  Getting a clear understanding of what people mean can save problems in the long run.
  • Meetings:  Written communication tends to increase confusion.  Nothing beats getting face-to-face to thrash out contentious areas quickly.
  • Willingness to compromise:  Like all clients, I think I’ve come to this feeling that I don’t want to change and that what we do is fine.  That’s human nature, and it’s something that those challenging working practices and requirements need to bear in mind.

What do you think?  What else makes for a good, collaborative working environment for project scoping and shaping that leaves everyone feeling valued and respected but makes changes for the common good?

It is good to be reminded about how learners see our systems sometimes.  Honestly!

For the last week, my husband has been working through a wide selection of e-learning materials for his company.  His boss sees his progress and grades so apparently it is important that he passes every module.  Here are some of the comments I’ve heard him mutter about:

  • I can’t log in
  • Why does the pop-up blocker keep kicking in?
  • I want to go back and start the next module but I have to click so many times – it’s annoying.
  • That answer was right, why did it say I was wrong?
  • That feedback doesn’t go with the answer I gave.
  • Where did my grades go, why can’t I see them?
  • Why didn’t my progress track?
  • Why did it tell me it tracked on one section and not on the next one?
  • “I’ve been here for 7 years, fixing bikes for 30+ and that’s just plain wrong!”

The thing is, when he started this rant, we both made comments about how they should use a decent learning system – and you don’t even have to pay for one of those if you don’t want to.  After all, Moodle has again been voted best LMS in the Top 100 Tools for Learning survey.

So you can imagine my horror when on Saturday morning he finally showed me this travesty of a system.  Oh dear.  Moodle.

Actually, I think it turns out to be Totara, because I found a few links with that in the name.  There’s certainly some custom code in there that I’ve never seen before. 

I decided that I had a vested interest in this system and I wanted to work out where the problems really were.   After all, if there are bugs they should be reported.  My husband has already complained about a lot of these issues, but I could perhaps help pass on some more constructive criticism.

  • I can’t work out what his log in problems were, but he seems to have that dialed now.  To be honest, he’s pretty bad with passwords anyway, so it could just have been user error.
  • All the modules are set up as Moodle courses with SCORM packages in them.  The SCORM packages are set to open in a separate window, which is causing the pop-up blocker to fire, and then the multiple clicks at the end to go to the next package.  This is poor set-up in my view, and could easily be rectified.
  • The progress and grading issues I’m not sure about.  These could be to do with the way the SCORM packages are set up, or the way they’ve been programmed.  I’m no SCORM expert.  It could be a bug.  The one that does look like a bug is the way the overall progress bar works.  As you can see from this screenshot I took, he has progress in individual courses, but the overall bar at the top hasn’t moved. 

[Screenshot: progress shown in individual courses, but the overall progress bar at the top unmoved]

  • Most of his concerns though are about the content of the learning material.  I think that’s a good thing because it means that most of the time he is focusing on his learning and not on the system.  But, whoever built the assessments needs a slap for the way the questions/answers/feedback are poorly wired up. 

My husband is no different from any other e-learning student, so what he experiences and how he responds are presumably pretty typical. Here’s what I learned from all this. 

A student has no trust in the system if he answers a question with the obviously correct answer and is told (and presumably graded) that he got it wrong, or if he does things and the progress bars don’t move. 

A system can give a bad impression even when the system itself is not at fault.  Content is king, and where students cannot separate content issues from system ones, the system gets the blame.  I wonder how often that is true when people complain – and there is little we developers can do about it.

I’m surprised by how worried students can get about this stuff.  To my eye, they’re not major issues, but he’s really unnerved by them.  He knows that he’s being judged by his scores, and he’s paranoid that he’s wasting his time.  That’s bound to be true of OU students too. 

We have a duty to make things as easy as possible for them to use – through the features we build and the promptness with which we fix issues – and to make sure everything is logical, sensible and understandable.  It is notoriously hard to remove features from systems, but I wonder what we could do to discourage ones (like pop-up windows) that have poor usability and to encourage people into better practice.

Now, how do I report a potential bug in Totara?  Maybe I can’t since I don’t even know what version they’re running :(.


A few days ago we rolled out some new features.  So far things don’t seem to have broken catastrophically, so I thought now might be a good time to publicize them a bit!

  • Quickly update your page to see annotations other people have added while you’re reading
  • See a list of all annotations on the page from the toolbar

[Screenshots: the refresh option and the toolbar’s list of all annotations on the page]

  • The toolbar will now work on a range of modern Apple and Android mobile devices
  • Rich text formatting for your comments

[Screenshot: rich text formatting in the comment editor]

  • Best-guess approach to fixing pins which have got lost (broken) because the page has changed since the annotation was made

Plus a lot of updating of underlying libraries (including to Silverstripe 3 and jQuery 1.10), which will make things easier for us to maintain.

A fairly short list, but quite a lot of changes. I hope people like them!
