Last week I wrote a post about big numbers from the OU’s Moodle codebase.  One resulting big number was the number of hits to my blog!  Another was the amount of Twitter conversation it generated.

Some of the reaction was positive; some less so. Willem in particular asked, fairly enough, why we need so many plug-ins, and I promised to answer his question in a blog post. So here it is.

(By the way, Tim blogged on a similar topic a year or so ago, explaining how we merge a new Moodle version into our code, back when we had 212 custom plug-ins.)

Using the way Moodle classifies plug-ins, here’s what we’ve got:

  • 45 local extensions
  • 27 reports
  • 26 activities
    • including 38 activity sub-plug-ins
  • 36 blocks
  • 14 question types
  • 10 themes
  • 10 admin tools
  • 7 filters
  • 4 question behaviours
  • 4 enrolment methods
  • 3 authentication methods
  • 3 course formats
  • 2 HTML editor extensions
  • 2 messaging methods
  • 2 repositories
  • 1 profile field
  • 1 portfolio extension
  • 1 cache store

Phew!  That’s a lot.  So, some of the clues to the complexity are in “10 themes”, “3 course formats”, and “3 authentication methods”.

We’re not just running one Moodle instance with this lot: there are actually 9.  This code base includes 43 plug-ins for sites which aren’t our core VLE.  Like this:

  • 27 plug-ins for look & feel and additional functionality (workflow, admin and user activities) in OpenLearn, OpenLearnWorks and OpenLearn Wales.
  • 8 plug-ins for different look & feel and activities in the Open Science Lab
  • 3 plug-ins for look & feel and activities in our Qualifications home
  • 3 plug-ins for security and look & feel in our secure online exams environment
  • 2 plug-ins for authentication and additional security in the Prisoners’ walled garden

But I can hear Willem still … that’s still just under 200 – why so many?  OK, so take off the 38 sub-plug-ins as well.  These are ways for us to manage the code better rather than additional features: they let us switch things on and off, or allow extra features to be built into an activity in a structured way.  For example, most of them are in our forum activity, allowing printing, merging, deleting and so on of posts.
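For anyone unfamiliar with sub-plug-ins: a parent plug-in declares its sub-plug-in types in a small db/subplugins.php file, and each optional feature then lives in its own directory under the parent.  Here’s a minimal sketch, using hypothetical names rather than our actual code:

    <?php
    // db/subplugins.php in a hypothetical forum activity module.
    // Declares a sub-plug-in type and the directory its plug-ins live in,
    // so optional features (print, merge, delete posts, ...) can each be
    // built and switched on or off independently.
    defined('MOODLE_INTERNAL') || die();

    $subplugins = array(
        'forumfeature' => 'mod/myforum/feature',
    );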

The OU’s Moodle is pretty big in other ways too: lots of courses, lots of students.  Around 30 of the plug-ins help us manage that scale through automation of course creation and integration with other systems containing student records, grades etc.  A further 15 help us to monitor the system, e.g. by connecting to monitoring tools or generating load-testing scripts.

Now there’s about 100 left.  Still a lot.  Some are deprecated.  We’d like to merge half the themes together when we do a full responsive/adaptive UI revamp.  We have two different synchronous collaboration plug-ins because we’ve changed suppliers recently, but students still need access to tutorial recordings in the older platform.  And on that note, about 10 allow integration with other systems such as Google Apps or student messaging.  Other plug-ins we wouldn’t need if we started now, but it will take us time to move across to newer core features like course formats that offer multiple pages.  And some of it is frankly bonkers – why we need 4 or 5 plug-ins for search, I’m not entirely sure!

One thing that struck me was the number of reports.  There are constant calls for more analytics, more reporting.  We occasionally talk about how this should be done outside Moodle – looks like that’s just talk.  I’d really like to see us work on this soon; maybe we could lose a few plug-ins by passing the work to a system better suited to the job.

Even then we’re left with a pretty big number – about 40 if you’re still counting!  You can take this as a positive sign – look how well you can configure Moodle to do whatever you want.  Or as a negative one – look how much effort it takes to get anything useful.  I won’t comment either way on that!  What I will say is that we’ve used plug-ins rather than customising the core product, so some of the “local” extensions exist for that reason.  We feel it is better to have a custom plug-in than a hacked core.  And we’ve got a range of activities that provide a rich study experience – including audio recording, a virtual design studio, complex assessment engines, collaborative document editing, forums that can withstand large numbers of students, and rich support for maths and chemical notation.

Have we got too many plug-ins?  Maybe.  But then again we keep finding reasons to add more, so maybe not.  And actually, how many is too many?  If they’re small (and some are), then an additional plug-in isn’t a significant overhead, especially if it is suitably provided with unit and behavioural automated tests.


I’ve said before that I think software metrics should be qualitative rather than quantitative, but sometimes you just have to give in to the demand for big impressive numbers and generate them.

This week I have been asked to provide a number of management metrics for each of our systems: lines of code, maintainability, automated test coverage.

Now we could argue all day about whether lines of code is a helpful measure or not, but they want it, so here we go.

I started off with a unix command found on Stack Exchange: find . -name "*" | xargs cat | wc -l.  That roughly does the job, but includes images and other cruddy files.  So then I found a little tool called cloc (http://cloc.sourceforge.net/#Overview) which produces a nice table of results with blank lines, comment lines and real lines split out for each of the different sorts of code in your project (PHP, JavaScript, CSS, Less …).  I like this much better.
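As an aside, the three-way blank/comment/code split is conceptually simple.  Here’s a toy PHP sketch of the idea for a single file, just to illustrate what’s being counted – cloc itself is far more careful about block comments, strings and per-language rules:

    <?php
    // Toy line classifier: counts blank, comment and "real" code lines
    // in one PHP file. Illustrative only; nothing like as robust as cloc.
    function classify_lines($path) {
        $counts = array('blank' => 0, 'comment' => 0, 'code' => 0);
        foreach (file($path) as $line) {
            $trimmed = trim($line);
            if ($trimmed === '') {
                $counts['blank']++;
            } else if (preg_match('~^(//|#|/\*|\*)~', $trimmed)) {
                $counts['comment']++;
            } else {
                $counts['code']++;
            }
        }
        return $counts;
    }

    print_r(classify_lines('version.php')); // any file path you like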

BUT – cloc isn’t reading .feature files, so it isn’t counting any of our Behat tests.  That’s a shame.  I could ignore tests entirely in this section and only report them in the test coverage section of my report.  Alternatively, I can teach cloc about .feature files.  That’s probably better, and turns out to be really easy: put the following lines in a definitions file and pass it to cloc with --read-lang-def:

Behat
    filter remove_matches ^\s*#
    extension feature
    3rd_gen_scale 1.51

For OU-Moodle it came up with just over 2.6 million lines of “real” code, 1.1 million lines of comments and just under 0.5 million blank lines. That’s 4.2 million lines total.

For my “little” Annotate project, I had 0.6 million lines of “real” code and 0.1 million lines of comments, out of a total of 0.8 million lines of code.

Of course, those totals include a lot of framework code, libraries etc, so it isn’t all code that someone here has personally written.  For Annotate, I can run cloc over specific folders to count the OU-generated code.  But for Moodle we have over 200 custom plug-ins, so that’s not really an option!  Instead I ran cloc over the master branch of core Moodle.  That’s not really comparing like with like, because OU Moodle is on 2.7, but I wanted a broad idea of how much we had added.  I came up with 1.5 million “real” lines of code, 0.8 million lines of comments and 0.3 million blank lines: a total of 2.7 million lines.  Which suggests that the OU has added about 0.9 million “real” lines of code.

The next step is mess detection, rather than a developer gut instinct for “maintainability”.  I’ve found PHP Mess Detector (http://phpmd.org/download/index.html) which might be worth a try – something like phpmd local/mymodule text codesize,unusedcode should give a first impression.  There doesn’t seem to be much written about Moodle and mess detection.  If you’ve tried it, please let me know!

The 5Ws are a management technique for getting to the bottom of problems, initially designed for industry.  My husband still cringes at the phrase from his days in the automotive trade in Luton.  They’re now used for all sorts of things, from police interviews to project management.

So I was mulling over the different roles in our agile development teams, thinking about which roles have responsibility for each word and in particular about which ones I should be engaging with in my new role.

Who – who develops which thing?  The scrum teams should self-organise this.  Because we have several scrum teams working on the same project, there’s some work to decide which things go to which team.  At the moment we do this based on “who knows most about that thing at the moment”, but we need to move past that so that we broaden knowledge across the teams.  Sometimes the things we’re being asked to develop don’t match perfectly to individuals’ skill sets, and we need to manage work better so that no-one is overloaded while others are doing low-priority things.

What – what are we developing?  The product owner should define this.  That’s a slightly simplistic view, as lead developers have a great deal of experience in the best ways to solve certain problems, which can help shape what we develop into a better experience for teaching and learning.  So while the product owner has the final say, the scrum team has a lot of input.

Where – where are we developing it?  No, I don’t mean “at home” or “in Milton Keynes”, but whether it should be in Moodle, or in one of the other frameworks or in-house systems that we have available to us.  This is the first W where I think my new role comes in.  Working with lead developers and architects, we make rational decisions (I hope) about the best tool for the job.

When – when are we developing a specific thing?  The product owner tells us the priority of the things they’ve asked for, then the scrum teams work down the list in order.  Other “things”, like support requests and bugs to fix, come along and get in the way.  We need a better way than we have at the moment to decide when we do those.

Why – that’s a product owner one again (wow, they have a lot to do!).  I don’t think we place enough emphasis on this at the moment, though developers often question why a thing is being requested in a certain way.  That is an important part of the developer role too: keeping an eye on what’s going on elsewhere in the world, and on good practice in technology enhanced learning, and reflecting that back into our day-to-day work.  Encouraging developers to do this, and providing them with opportunities to do so, is something the whole leadership team should be doing.

How – again, I think this is where my new role fits in.  How we develop things covers code quality and best practice in the processes we adopt.  This is where the bulk of my work is focused at the moment, as agile is new to everyone, but I hope things will settle down sometime!

Another way of looking at this is to use a RACI grid (Responsible, Accountable, Consulted, Informed).  These seem to be all the rage at the OU at the moment.  It helps tease out the way that developers and product owners interact around, for example, what and why we develop.  There’s plenty written about this for scrum teams in general, but I’d like to delve a bit deeper within our own team.  One of the things I’d like to do with our scrum masters is look at the developers in their teams as individuals, with their varying skills in architecture, analysis (business and system), user experience, testing, planning, etc., and use the RACI grid to work out who is playing which roles within the scrum, where we have gaps, and what training/support/best practice we can draw out of that.  More on that another day!


The Learning Systems development team at the OU has been planning a bit of a restructure for a while, and finally everything is in place and people are transitioning into new roles.

I have been given the honour of becoming the Development Architect in the new Learning, Media and Collaboration development team.  Don’t google it; it’s one of those odd OU job titles that doesn’t seem to match industry-known titles.  Maybe you can help me find a better name for it!

I’m still working out exactly what this means in practice, but it seems to involve oversight of the architectural designs for our systems and the way they link together, taking the lead on best practice in development and on how we engage with the open source community, and delivering quality products.  I guess this blog will need a re-brand!

I have a lot to learn about our media production and collaboration systems.  And I certainly feel that I’m standing on the shoulders of giants in the Moodle development world.  The idea of leading the people in this team is almost nonsensical.  The role is set up as a sort of “first among equals”: it’s more about generating consensus than preaching from a pulpit.  In some ways this fits with a “servant leader” role, acting as a facilitator for the team so they can continue to do great development.

Although I’ll be doing a whole lot less actual development in future, I hope to keep my hand in so that I don’t become one of those managers who doesn’t understand what their team actually does.  That’s not working so well so far – meetings 1 : coding 0.

The next big change for the development teams will be to make our processes even more agile.  At the moment, each lead developer runs a team of developers (OK, sometimes a team of themselves) focusing on a particular aspect of our learning systems.  We are now merging teams into fewer agile scrums, so that everyone has a chance to work on more of our development strands and so we can flex resource to better meet our priorities.  We’ve already started to merge our “to do” lists into a single prioritised portfolio backlog and to estimate in story points, so in a few months we’ll have a better handle on team velocity.

I’m sure I’ll blog more about this later as we work out how this looks for us in practice.

I went to the #Design4Learning conference last week – yes, OK, I’m a bit late with the write-up; it’s been a hectic week or two!

The conference was held at the OU, which had the advantage that I could get home in the evening for my son’s birthday, but the disadvantage that I kept popping back into the office to keep on top of things.  It reminded me that while I hate the coffee, lunch and conference-dinner opportunities for informal contact (being an introvert by nature), there is something lacking if you don’t participate in them at all.

It was great to spend a couple of days thinking about what we’re developing, and why, rather than how.  I really value these opportunities to talk to academics.  The focus of the conference was around learning design and learning analytics.  I won’t write up all the sessions that I went to, but here are some of the highlights for me…

Sharon Slade presented on the OU’s ethics policy for learning analytics and the questions that students raised.  The OU is considered to be the first university with an ethics policy for learning analytics.  There is a real challenge in working out how we communicate to students what we’re doing.  On one hand students want a personalised experience, but they don’t want us “snooping”.  They’re happier with feedback based on trends rather than on personal activity.  What is clear is that the results of analytics should focus us on what questions to raise with students, rather than drawing conclusions about what’s going to happen to them.

You can never really know why a student has a pattern of activity, and whether or not they’re likely to fail as a result.  Elsewhere in the conference (I forget when) I heard it said that students who post often in the forums and turn in their assignments on time are most likely to pass.  Back in the 90s when I was an OU student, I certainly handed in my assignments on time, but I never posted in the forums.  I was too shy, too lacking in confidence that I had anything of value to add, too unwilling to expose my own stupidity…  I wonder what the analytics would have said about me… but I passed with flying colours.  Similarly my son (who has learning difficulties) always fails tests, rarely hands in his homework, never raises his hand in class… but you only have to talk to him to know that he is learning.  Obviously “learning” isn’t good enough for a university, which needs students to pass qualifications to prove success, but we live by different rules in my house!  Learning analytics done badly might suggest that a university should give up on my son.  Done well, it should suggest that despite his innate ability he needs a lot of extra support to pass.

Simon Cross talked about his Open Education MOOC, which formed a block of a formal OU course where students and the public learned together in the open.  Apart from being pleased to see my old project, OpenLearn, being used for this, the things that most interested me were that students had concerns over what they are paying for with their OU study (worrying, perhaps, that they didn’t value the assessment, tutor support etc), and that they wanted the badge as well as the TMA grade – and I thought badges were a gimmick!

There was a useful learning design tool from Imperial College London called BLENDT.  You plug in your learning outcomes and it helps you work out what sorts of (online) activities will help meet them.  It is based on Bloom’s taxonomy, in which objectives are classified as psychomotor, affective or cognitive skills: users pick words in their objectives such as “explain”, “list” or “discuss”, the system works out which of the skill sets these map to, and it then presents example activities best suited to meeting those objectives.  It is customisable based on factors such as group size, to cater for the fact that some activities don’t work in some situations.  The tool aims to provide a discussion point for teachers making final decisions on the activity mix.  It looked like something that could be very helpful in supporting work to embed learning design in our everyday practice.  I wonder whether we could write a similar Moodle module?  Or maybe someone already did?
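If we did write one, the core of it looks prototypable in an afternoon: scan the outcome text for characteristic verbs and map them to skill domains.  A rough sketch of that idea in PHP – the word lists here are my guesses, not BLENDT’s:

    <?php
    // Map verbs found in a learning outcome to Bloom-style skill domains.
    // Domain word lists are illustrative guesses, not BLENDT's actual data.
    function classify_outcome($outcome, $domains) {
        $hits = array();
        foreach ($domains as $domain => $verbs) {
            foreach ($verbs as $verb) {
                if (stripos($outcome, $verb) !== false) {
                    $hits[] = $domain;
                    break;
                }
            }
        }
        return $hits;
    }

    $domains = array(
        'cognitive'   => array('explain', 'list', 'compare', 'evaluate'),
        'affective'   => array('discuss', 'share', 'justify'),
        'psychomotor' => array('demonstrate', 'assemble', 'perform'),
    );
    print_r(classify_outcome('Explain and discuss the causes of X', $domains));
    // -> cognitive, affective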

Finally, one fascinating piece of software from Denise Whitelock at the OU’s Institute of Educational Technology was Open Essayist, which lets students upload draft essays to get feedback on structure.  The tool shows you where your key words and phrases are in the essay, helping you ensure you have a good start/middle/end, a spread of keywords across the essay, and clear connections between concepts.  Because it provides feedback on structure only, it should apply across disciplines.  It has been shown to improve grades, although obviously structure is only one component of a grade, so you still have to say something useful and relevant, and understand the assessment criteria for your subject, to get top marks.  Denise has also found the tool useful for analysing MOOC comment threads and creating paper abstracts.  There was clear demand in the room from OU tutors for us to provide this to all students, especially at level 1.  I would have loved something like this when I first learned how to write essays.

This blog post feels a bit rambling.  I wonder what Open Essayist would make of it!  Anyway, I hope I’ve given you a taste of the conference and some of the ways that analytics may be changing the services we offer to students and the way we design learning in the future.  I hope my team get a chance to be involved in some of these developments.

The lead developers have been experimenting with story points recently. I wrote a few weeks back about how I thought we might decide on the meaning of a point.

We decided that our scale would range from 1 to 377, so that the upper end catches the complex, major projects that we sometimes work on.

Then we decided that we’d start with 1 story point being the sort of thing we’d traditionally have expected to take a quick, experienced developer about an hour.  Working that all the way up the scale, a 377-point story is the sort of thing we’d traditionally have estimated at 6 months or so.

Then we dug around our recent work and our future plans for things that fitted along the scale.  We put them all in a big pot and chose two or three for each level on the scale as the most useful examples for sizing future work.  We also came up with a few useful rules about when you might want to move up a level, because these things make work more complicated:

  • adding a lot of behat and/or unit testing;
  • lots of javascript and/or css;
  • working in an area that is already very complex; and
  • working with a third party community / system integration.

We’ve recently been given a single prioritised backlog from our Learning & Teaching colleagues, so we decided to try out our shiny new story points on their list.  This morning we played planning poker for the very first time.

We set up a spreadsheet with the business description, a supplementary techie-speak description, and a column for each person.  Each person then had a couple of days to use the story point exemplars and enter their estimates.  Each person hid their column on completion so that no-one was biased by anyone else’s estimate.

To be fair, I found I mostly thought in days still and worked back to story points.  It’s going to take some time to be able to look at a description and think in points instead.

Today we worked out the median of everyone’s scores, picked the nearest point on the scale, and decided what we wanted to announce as our final estimate.  I was pleasantly surprised by the amount of consensus we achieved.  There were some items with wildly differing opinions, but that usually just highlighted that the request was still poorly defined and we had made differing assumptions about what we would do.
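The mechanical part of that step is trivial to express in code – something like this sketch, using the scale from our exemplar exercise:

    <?php
    // Median of everyone's estimates, snapped to the nearest point on the scale.
    function final_estimate(array $estimates, array $scale) {
        sort($estimates);
        $n = count($estimates);
        $median = ($n % 2)
            ? $estimates[($n - 1) / 2]
            : ($estimates[$n / 2 - 1] + $estimates[$n / 2]) / 2;
        // Pick the scale point closest to the median.
        $best = $scale[0];
        foreach ($scale as $point) {
            if (abs($point - $median) < abs($best - $median)) {
                $best = $point;
            }
        }
        return $best;
    }

    $scale = array(1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377);
    echo final_estimate(array(5, 8, 8, 13, 21), $scale); // 8

The interesting part, of course, is the discussion around the outliers, not the arithmetic.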

We decided upon one or two more “rules”:

  • If the range is very diverse because the item is ill-defined, we refuse to estimate until we have more detail.
  • If the median is the same as the estimate from the person who is the subject matter expert for the area of work, we go with that as our final estimate.
  • If the median is one step either side of the estimate from the subject matter expert, that person has final say to overrule or accept the median.
  • If the median is more than one step either side of the estimate from the subject matter expert, further discussion is required to clarify and agree a final estimate.  There were actually very few stories which fell into this category.

Using this approach we managed to get 6 people to estimate just under 100 stories in just under an hour.  That’s a lot quicker than any of us expected.  We ended up with an estimate of just over 2,500 story points for 3 months’ work by 8.4 people.  Gut instinct suggests that we might manage about 75% of that.  But since there were about a dozen things we couldn’t estimate, maybe it’ll be more like half in reality?

So with my project manager hat on, I now have to work out whether we should have estimated the “top slice” tasks: support, bug fixing, advice, technical debt stories…  We probably should have, but I’m not sure the same approach works, especially for “keep the wheels on” server monitoring activity.

And now that I have a set of estimates for most of the things our business partners want, we have to communicate that back to them.  There will, presumably, be some iteration while we refine the requirements for the things we refused to estimate, attach names to tasks that no-one thought they were doing but which had high priorities, and drop some stuff off the bottom of the list.

I’m really interested to see what we actually deliver at the end of January, so that for the first time we can get a feel for what our development teams’ velocity might be.  Remind me to tell you about that when the time comes!

Well, not prizes in this case!  No, I’m referring to story points.  One of the next things on my to-do list is to look into exemplar stories with points, to give us a baseline to start from as we move to estimating development in points rather than hours.

In some ways this seems pretty easy:

1) define a 1-point story
2) define your scale
3) think of some things that fit at the higher point values.

One is OK. The smallest piece of Moodle work that I can think of is updating a language string, or changing a style on a discrete screen element. These are the sorts of thing that take a few minutes to do, plus a bit of testing. Maybe an hour in total. So that’s what I’m starting from as a single point.

Two is OK. We’ll use the common Fibonacci approach, so our scale will be 1, 2, 3, 5, 8, 13, 21, 34. And that’s probably far enough. If one point starts off looking a bit like 1 hour, then 34 points is a little over a week’s work. And the theory says that if you’re estimating something as more than a week’s work, then it’s an epic, not a story. We might quote for epics in bigger numbers – let’s say 55, 89 and 144. At 144 story points we’re talking about 4-5 weeks’ work, and our ability to estimate accurately is probably quite poor. Then there would be 233, 377 and 610. Now we’re in the region of 5 months’ work, which we would probably describe as a theme, with even less accuracy. And there’s not a lot of point in counting points after this, so the next step is infinity, or “nobody knows”.
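If you want to sanity-check that scale against the 1 point ≈ 1 hour starting assumption, the sums are easy enough to script – the working-day length here is my assumption, nothing official:

    <?php
    // Build the Fibonacci story-point scale and convert points to rough
    // working days, assuming 1 point ~ 1 hour and a 7.5-hour day.
    $points = array(1, 2);
    while (end($points) < 610) {
        $n = count($points);
        $points[] = $points[$n - 1] + $points[$n - 2];
    }
    foreach ($points as $p) {
        printf("%3d points ~ %5.1f days\n", $p, $p / 7.5);
    }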

But for my current focus, I want to categorise up to 34. And it is at step 3 that it starts to get a bit more challenging. Here are the kinds of things that I’ve been thinking about:

  • add a configuration setting to control an existing feature
  • write a block to display a message (see the sketch after this list)
  • add a capability to control permission over an existing feature
  • add a field to a form and save the result in the database
  • add a log event on a specific page
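To give a feel for the size of things in the middle, “write a block to display a message” is only a dozen or so lines of PHP, plus the usual version.php and language file – hypothetical names again:

    <?php
    // blocks/message/block_message.php - a minimal Moodle block (illustrative).
    class block_message extends block_base {
        public function init() {
            // Block title, from the plug-in's language file.
            $this->title = get_string('pluginname', 'block_message');
        }

        public function get_content() {
            if ($this->content !== null) {
                return $this->content; // Already built on this page load.
            }
            $this->content = new stdClass();
            $this->content->text = 'Hello from a two-or-three point story!';
            $this->content->footer = '';
            return $this->content;
        }
    }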

I seem to be able to think of lots of really little things, and lots of really big things, but I’m rather struggling with the things in the middle.

So, do you use story points to estimate Moodle development work? If so, are you willing to share what yours look like?
