Year: 2015

The Lean Educator

A couple of weeks ago I was on a road trip of 1,000 Ks or so, to teach at some of our remoter locations. Along the way I listened to an audio book of James Womack and Daniel Jones' "Lean Thinking: Banish Waste and Create Wealth in Your Corporation." It's been a while since I read their (and Daniel Roos') excellent book on the Japanese car industry, "The Machine That Changed the World." Back then I was interested in the ways that many of these ideas had been brought into agile and lean software development. These days I'm more interested in how ideas like lean thinking might apply to education. One thing is for sure, it's not an easy road. Womack and Jones' book is largely a series of case studies about the long, slow and endless process of lean transformation in various industries. Several of their stories make it clear that if you're looking for a simple solution that doesn't involve a lot of work, lean isn't it. Also, their examples are overwhelmingly in product, rather than service, industries. What, then, can educators learn from a book that focuses so much on reducing inventory and shortening the supply chain, when our 'product' is such stuff as intellectual property and graduating students? The challenge is to look at our systems through new eyes, to truly understand what we mean by the value of education. As Womack and Jones put it:

Why is it so hard to start at the right place, to correctly define value? Partly because most producers want to make what they are already making and partly because many customers only know how to ask for some variant of what they are already getting.

(I should probably point out here that Womack himself had a go at Lean Thinking for Education back in 2006, but his take is a little industrial for my taste.)

Perhaps the core value that we need to take away from lean thinking is ensuring that all the steps in our educational supply chain deliver value to the customer. In other words, look hard at our value stream. We need to consistently ask: does this part of the curriculum deliver value to the customer? Does this form of assessment deliver value to the customer? Does this step in the enrollment process deliver value to the customer? Note that we don't ask whether any of these things deliver value to the educational institution, though this often seems to be the major priority for some providers. Not that lean thinking ignores the institution; the point is that by focusing on delivering value, you will reap the benefits, because customers will reward you for delivering a quality product at low cost.

What reduces value for the customer? Primarily muda (waste). Importantly, there are two types of muda. Type 1 muda is waste required by fixed components in the way the current system operates (e.g. a student management system) and can't be removed just yet, while type 2 muda can be eliminated immediately. What kinds of muda do educators encounter? Quite a lot of type 1, probably. Arcane administrative systems supported by poorly integrated software systems, organisational structures that are based on siloed interests and internal competition, turf wars and endless purposeless restructurings, buildings and classrooms from random historical periods and educational fashions, snail-like curriculum change processes, remote, conservative accreditation bodies, politically manipulated funding processes and an industrial-era dominant ideology that endlessly harks back to an illusory golden age of education.

One of the obvious characteristics of education, as it is delivered by most institutions, is that it operates in batch-and-queue mode. Every semester a set of classes is delivered, over a fixed period of weeks that may have no relationship to how long it takes someone to learn something. Every year a batch of graduates is produced, who may or may not be prepared to take advantage of the opportunities currently offered by the world around them. Lean thinking would suggest this batch mode should be replaced by flow, where learning flows seamlessly by right-sizing what is offered to the learner. Instead of broadcasting batches of content in mass production lecture halls, the lean educator would be engaged in the whole learning value stream, working closely with their colleagues across the total process, not just a batch-block of material.

Another concept related to flow is pull. How might we move from a model that pushes educational content towards the learner, over a timescale dictated by the institution, to one that allows the learner to pull from the system what they need, when they need it?

Further complicating the search for lean thinking is the concept of the lean enterprise. You may be able to squeeze the muda out of your school, department, faculty etc, but the lean enterprise spans the whole value chain, and may involve many organisations. How can you convince all the stakeholders in the value chain to cooperate in becoming a single lean enterprise, setting their own agendas aside?

If you could, what would the lean educator deliver if there was no muda in our systems and education was a lean enterprise based on value, flow and pull? Maybe something like the following:

  1. A curriculum delivered by multiple organisations, tailored to suit the learner
  2. No restrictions on the hours per week, or the total time span of a learning journey, or which learning components may be combined with others
  3. Walk-in / lifelong enrollment. Come in the door, start a class. Come back in 10 years, take another one
  4. Any combination of blended learning delivery modes
  5. Instant assessment feedback
  6. Add your own…

Is that even possible? Well, there are two ways of improving: kaikaku (radical improvement) and kaizen (continuous incremental improvement). Both have a role, and neither has an end point, but the starting point can be today. We need to be constantly on the lookout for muda that we can remove from the value stream. Womack and Jones distill lean thinking into five principles. Surely it wouldn't do us any harm to apply these to education?

  1. Precisely specify value by specific product
  2. Identify the value stream for each product
  3. Make value flow without interruptions
  4. Let the customer pull value from the producer
  5. Pursue perfection.

 

OECD Study Validates The Mind Lab by Unitec’s Postgrad Programme

Recently the OECD published a study called Students, Computers and Learning: Making the Connection. Unfortunately the media did the usual thing the media does and made a superficial (mis)reading of the document to come up with the headline Computers 'do not improve' pupil results, says OECD, and that was the BBC for heaven's sake! Of course the report is narrow in scope, in that it focuses on PISA results, and concerns itself with only a handful of external references. It also only grudgingly acknowledges the success of digital teaching and learning in Australia, preferring to focus on its apparent mission to talk up the deadly drill-and-practice traditional schooling of the Asian territories that play the PISA game so well and make their kids so miserable in the process. No-one, surely, wants to see anti-suicide fences put up round our examination halls?

Nevertheless a deeper reading of the document gives more interesting insights that validate our postgraduate programme in digital and collaborative learning at The Mind Lab by Unitec. As the OECD report says, 'technology can support new pedagogies that focus on learners as active participants with tools for inquiry-based pedagogies and collaborative workspaces', a philosophy very much in tune with our own. Of most interest, however, is Chapter 8, Implications of Digital Technology for Education Policy and Practice. In a previous study, Pathways to a Better World: Assessing Mobile Learning Policy against UNESCO Guidelines in a New Zealand Case Study, I looked at New Zealand policy in the context of the UNESCO Policy Guidelines for Mobile Learning. One of the conclusions from that piece of research was that it reaffirmed the importance of some core policy recommendations, such as the need to introduce the use of mobile devices into teacher education. The OECD's much broader study also acknowledges the critical importance of teacher education in making the most of technology in schools: 'Technology can amplify great teaching but great technology cannot replace poor teaching.' It also acknowledges that there are many benefits that PISA cannot measure, including the way that 'technology provides great platforms for collaboration among teachers and for their participation in continued professional development, thus empowering them as knowledge professionals and change leaders.'

These three themes of digital tools, collaboration and leadership lie at the heart of our programme. We would wholeheartedly echo the final words of the OECD report: 'The key elements for success are the teachers, school leaders and other decision makers who have the vision, and the ability, to make the connection between students, computers and learning.' We share that vision, and are busy giving teachers the same vision, and the ability, to transform education for the better.

(dis)connectivism: a learning theory for the ghost in the machine

One of the most recent attempts at a learning theory is connectivism, which addresses the relationship between knowledge and technology. At the same time there is an increasing disconnect between our physical bodies and our digital souls. In a somewhat baffling and opaque paper from 2010 called 'Academetron, automaton, phantom: uncanny digital pedagogies', Siân Bayne of the University of Edinburgh addressed the concept of the 'uncanny' in online learning. Once the layers are peeled back, there are some useful ideas to consider. Bayne refers to 'the multiple synchronicities available to us when we work online…[the] blurring of being and not-being, presence and absence online.' Our online lives are schizophrenically littered across multiple contexts, each one demanding a slightly different type of e-presence: an avatar, a profile, a photograph. We spread ourselves thin over the personal, the professional, the store, the auction, the review; constructing at one moment a Facebook life of "success so huge and wholly farcical", the next, a LinkedIn profile designed to get that elusive new job to make that success less fictional. We lose the distinction between past and present. Chronology blurs. It is indeed uncanny when my dead mother's Facebook account sends me a message, or a Google search tells me that we will have nuclear fusion by…oh… 2011? Alarming news stories of teenage suicide cults, seemingly driven by a desire to achieve digital immortality through physical death, seem to take the disconnect between our real and virtual lives to extremes. Perhaps, notwithstanding Ryle's critique of mind-body dualism, we are all becoming ghosts in the machine. Can we ever call them back from heaven? This disconnectivism between a life lived and a fragmented digital artifact should perhaps raise some disquiet as to the role of pedagogy in the age of ghosts. Perhaps one question for educators is how we temper the tendency to make learning a process of digital publication. It sometimes feels as if the default assignment task these days is to 'broadcast yourself'. Perhaps a better mantra would be 'reflect on yourself, protect yourself', for 'the vision of one man lends not its wings to another man.' Some things are better left to the imagination, rather than the app.

Ragile software development – a longitudinal reflection on post-agile methodology

Ok, so there is no such thing as a ‘ragile’ software development method. Nevertheless, for a number of converging reasons, I have recently been given cause to reflect on the history of rapid, lightweight and agile approaches to software development, and the current dominant ideology of methods. I use the label ‘ragile’ as an indicator of where we might have been, or where we might still go, in developing software in the post-agile era.

There's a scene in Woody Allen's movie 'Sleeper', where two doctors from 200 years into the future discuss beliefs about diet. It goes like this:

“You mean there was no deep fat? No steak or cream pies or… hot fudge?”

“Those were thought to be unhealthy… precisely the opposite of what we now know to be true.”

Scientists tend to realize that this week’s theory is just that, and it may be replaced by a new theory at any time, based on our empirical observations. Diet promoters tend to take the opposite view. Everything in the past was wrong, but now we know the truth. I hope that software developers are more like scientists than fad dietitians, and will embrace change in their thinking.

Received wisdom has it, perhaps, that the problems of the waterfall approach to software development have been overcome by a legion of certified Scrum Masters leading their agile organisations to the continuous delivery of quality applications. If only we were so enlightened. Royce himself, in his famous 'waterfall' paper, stated "I believe in this concept, but the implementation described above is risky and invites failure." He was talking about Figure 2 in his paper, the oft-copied waterfall diagram. It seems that few bothered to read the rest of the paper and the rather more challenging figures within it, consigning the software industry to decades of misunderstanding. Assuming that software development was in a chronic pre-agile crisis is also a misreading of history. Participants at the 1968 NATO conference, which was the apocryphal source of terms like 'software engineering' and 'software crisis', acknowledged that many large data processing systems were working perfectly well, thank you. DeGrace and Stahl told us that software was a 'wicked problem' in 1990, but 'wicked' does not mean 'insoluble', though it does mean that we should not expect there to be one right answer.

The software industry has seen a series of new ideas about software development over the decades, many of which have been based on leveraging improvements in the hardware and software tools available to us. Kent Beck in the first XP book referred to fully utilising these new tools, turning up the dial on all the best practices at once, possibly to 11 (or was that just Spinal Tap?) Almost a decade earlier, James Martin had published his 'Rapid Application Development' book, stressing the value of, among other things, prototyping, code generation, metrics, visualisation tools, process support tools and shared, reusable domain models. Later, in 1996, Steve McConnell's 'Rapid Development' emphasised many of the same ideas, stressing productivity tools, risk management and best practices. Both authors prefigured many practices of the lightweight (later agile) methods that were emerging in the late 1990s: iterative, timeboxed development, customer engagement, small teams, adapting to changing requirements and quality assurance.

An underlying theme in rapid development is the concept of domain modelling and automated tools, including code generation. Similar themes appear in Agile Modelling, Model Driven Development and Domain Driven Design. Tools like the Eclipse Modeling Framework, and those based on the naked objects pattern, such as Apache Isis, put modelling at the heart of systems development.

The agile methods movement is at a point of change (hmmm, isn't that the definition of a crisis?) Recent efforts have revisited the lessons of the Japanese car industry, with Lean and Kanban, in a search for ever 'lighter' processes, while at the same time a vision of agile as a traditional fixed methodology process has become established (endemic, even). This has recently caused two signatories of the original agile manifesto to refer to 'the failure of agile' (Andrew Hunt) and state that 'agile is dead' (Dave Thomas). From another perspective, Vaughn Vernon lamented in his book on domain driven design in 2013 that in Scrum "a product backlog is thrust at developers as if it serves as a set of designs".

So, coming back to 'ragile', what is it? Well, no more than an acknowledgement that there may be a collection of practices from both rapid and agile (and model/domain driven development) that remain relevant to the future of software development. Such an approach would emphasise tools, leverage prototypes, include shared domain models, embrace code generation, automate as much as possible, including estimation and project management, and deliver continuously. Such an approach might be considered radical, in the particular sense of going back to the roots of a phenomenon. Some of the ideas of Martin and McConnell were much harder to do in the 1990s than they are now. Can there be any software developers who do not use code generators of one type or another? They generate object property methods, test stubs, service endpoints and a host of other components; they refactor code and create user interfaces and database schemas. Rails and Grails developers work with domain models as a matter of course, allowing frameworks to build whole architectures automatically. It's time to rethink how these strands might become a method for the 2020s that is able to cope with the distributed and parallel domain models of the cloud-based, in-database, Internet-of-things, plugin-driven, Web 3.0 applications of the future.
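
As a purely illustrative sketch of that last idea (a toy example of my own in Python, not the API of Rails, Isis or any other real framework), a shared domain model might be a single declarative class from which boilerplate such as a database schema is generated rather than hand-written:

# Toy example of code generation from a shared domain model.
# Not a real framework API; just a sketch of the principle.
from dataclasses import dataclass, fields

@dataclass
class Customer:          # the shared domain model
    id: int
    name: str
    email: str

def generate_schema(model) -> str:
    # Derive a SQL CREATE TABLE statement from the domain class.
    sql_types = {int: "INTEGER", str: "TEXT"}
    columns = ", ".join(f"{f.name} {sql_types[f.type]}" for f in fields(model))
    return f"CREATE TABLE {model.__name__.lower()} ({columns});"

print(generate_schema(Customer))
# CREATE TABLE customer (id INTEGER, name TEXT, email TEXT);

A real framework does far more than this, of course (user interfaces, endpoints, validation and so on), but the principle is the same: the domain model is the single shared artefact and the rest is generated from it.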

ICANN.sucks


There's been quite a bit of debate in the press about the new .sucks top level Internet domain, including this article in the New Zealand Herald. It does have its proponents, of course. The nic.sucks website claims that it can be used to 'foster debate' and 'share opinions'. They suggest that it is valuable for cause marketing, consumer advocacy, anti-bullying etc. I can't help wondering why other less infantile domains can't be used for these worthy causes. In fact, of course, it's just a free-for-all that forces individuals and organisations to run around paying stupid prices for these domains just to protect themselves from Internet trolls. Obviously no-one could have seen that coming, right?

The body responsible for allowing new domain names is ICANN, the Internet Corporation for Assigned Names and Numbers, which claims to be a ‘not-for-profit public-benefit corporation’. I do wonder about the public benefit aspects. The socially aware and compassionate people who suggested the SUCKS domain name were Top Level Spectrum, Inc., Vox Populi Registry Inc. and Dog Bloom, LLC. All concerned charities with our welfare at heart, I’m sure. Vox Populi also won the auction to have the right to extort money from everyone wanting to defend themselves from this domain name. The three SUCKS entries were some of the 1,930 suggestions received by ICANN for new top level domain names in 2012. You can see the full list at http://money.cnn.com/infographic/technology/new-gtld-list/. Most of them were reasonably sensible, if largely self-serving, with lots of corporations wanting their own domains. There were, however, several stupid and destructive suggestions that were clearly rejected out of hand. These included SEX, SEXY, WTF and SUCKS… oh, wait…

I suppose if you make more than half a million dollars from the faceless corporations who suggest a domain like SUCKS (that's just for making the suggestions – each one cost $185,000), you owe them something back, however much collateral damage you cause in the process. Not to mention the millions of dollars you can make from selling the rights to the domain itself, as this list of domain auctions shows. ICANN are now running around trying to close the stable door after the horse has bolted. Too little, too late.

It will be interesting to see who ends up as the owner of http://www.icann.sucks

Kinross Flat and the Amazon Jungle – An Indie Publishing Experience with CreateSpace and Kindle

I recently self-published my first novel, Kinross Flat, via Amazon CreateSpace and Kindle. This post is about my experience of the whole process, which was quite complex but well supported by Amazon’s various self-publishing tools. Amazon is not the only independent publishing platform, and I can’t speak for the relative merits of the alternative channels, so I’m not necessarily claiming that Amazon is the best. However, as the owner of a Kindle, it was the one that came to mind when I started thinking about indie publishing. I’d welcome others’ views on the alternatives.

CreateSpace is basically for print-on-demand, so if you only want to publish an eBook on Kindle then you don’t need it. However, the advantage of CreateSpace is that once you’ve set up your print-on-demand copy, the addition of a Kindle version is practically automatic, so you get both options at no cost. Yes, no cost – the whole thing is basically free (well, up to a point – I’ll get back to that later!)

So, what do you have to do? There are a number of ways of preparing your book for publication, but the best approach, I think, is to use the tools that are provided for you. You will need to register on the CreateSpace website, after which you will have access to an author dashboard that leads you through all the different steps required to publish your book. The easiest way to make sure your book is in the correct format is to download the Word template, which contains all the required styles and layouts. You can choose your book size, but I went with the recommended 6″ x 9″. For cover design, there is a free to use Cover Creator tool, which provides a relatively small number of basic layouts, but all of these can be customised in terms of font and background colour. You can also upload your own cover image and, of course, write all the cover text. The system will generate an ISBN for you, and add it to the cover with a bar code, automatically.

Once you submit your interior file (i.e. the book text), the system automatically checks it for compatibility, then generates a preview file, which you can check online or download as a PDF. You can also order a preview hard copy, which is probably the best way to proofread it, but you may have to wait several weeks to get it, and you have to pay for it.

Once you approve the preview, after a few more system checks, including a spell check, your book gets released to the Amazon sales channels, but only after a number of other things have been done. You have to fill in a U.S. tax declaration, which will specify how much U.S. withholding tax you will pay on any royalties, based on the country where you are a tax resident. For New Zealand that was 10%. On the subject of royalties, you also have to choose your royalty rate (35% or 70%) and the retail cost of your book (there’s a handy calculator for this, which shows you the royalties you would receive through each distribution channel based on a U.S. dollar price.) Incidentally there are several different distribution channels you can choose from, but since it costs you nothing to choose them all it seems a bit pointless to exclude any.

After your CreateSpace book is published, it's a simple step to choose to also distribute on Kindle. You get another chance to check the preview on a Kindle emulator, choose whether you want digital rights management, select a retail price, and away you go. The Kindle version appears a day or two later on the Amazon site, and eventually the print and Kindle versions get linked together. Not all Amazon sites will support the print-on-demand version, just Amazon.com, Amazon Europe and the CreateSpace store. The Australian site, for example, will only offer the Kindle version.

So, is it really free? Well, basically, yes. There are all kinds of options on the CreateSpace site to get help with design, formatting, marketing etc., and these can be quite expensive, but as long as you are reasonably computer literate, the tools don’t require much expertise to do everything yourself. You can, if you like, pay for a review on Kirkus, which may or may not be favourable and costs hundreds of dollars. It’s possible that might pay off in sales, but it’s an unknown quantity. You will, of course, have to pay for any preview copies, or any hard copies of the final book, but these are more or less at cost.

Overall, I found the whole process quite fascinating and well supported. I did occasionally get lost in the Amazon jungle, and ended up, for example, filling in the tax form twice, for reasons I still don't understand. Nevertheless, I'd recommend it to anyone else who, like me, regards themselves as an amateur author who just wants to share their work. If you're a 'real' writer I suspect that the more traditional publishing channels are still the best way to go, since 'indie publishing', although it sounds cooler, is still just what used to be called 'vanity publishing', which doesn't sound so cool!

Refactoring Coderetreats: In Search of Simple Design

A while ago I posted a blog about the Global Day of Coderetreat. Since then I’ve been gathering and analysing data about coderetreats to see what their benefits are, and how they might be made even more effective. I’ve just written up some of this work in an article for InfoQ (thanks for the opportunity Shane Hastie), which you can find at http://www.infoq.com/articles/refactoring-coderetreats.

The title has two meanings (sort of). In one respect it's about changing the design of coderetreats (i.e. refactoring the coderetreat itself) and in another respect it's about bringing more refactoring activities into a coderetreat in order to focus more directly on the four rules of simple design (for more detail on this in the context of a coderetreat you could try Corey Haines' book Understanding the 4 Rules of Simple Design).
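
For anyone unfamiliar with the format, coderetreat sessions are typically built around repeated, throwaway implementations of Conway's Game of Life, which gives plenty of opportunity to practise the four rules. Purely as an illustration (my own sketch, not taken from the article), the kind of small, intention-revealing code a pair might aim for looks something like this:

# The Game of Life survival/birth rule, written to reveal intent.
def alive_next_generation(alive_now, live_neighbours):
    if alive_now:
        return live_neighbours in (2, 3)  # a live cell survives with 2 or 3 neighbours
    return live_neighbours == 3           # a dead cell comes to life with exactly 3

# At a coderetreat a pair would typically drive this out test-first, e.g.:
assert alive_next_generation(True, 2)
assert not alive_next_generation(True, 4)
assert alive_next_generation(False, 3)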

I hope the article encourages more software developers to attend and run coderetreats.

Converting a Google Doc to a Kindle Format .mobi File

I recently had a document, written using Google Docs, that I wanted to make available in Kindle format (a .mobi file). The thing was, I didn't want to publish it through Amazon, I just wanted to provide a file that could be copied to a Kindle by anyone who wanted to access the material in that format. It turned out to be a little more complicated than I first thought, so if anyone else wants to do the same, I'll explain how I did it. The thing to remember is that Google and Amazon are competitors, so they're not going to make it easy to go from one to the other, are they? No indeed…

My first thought was to export my Google Doc to Microsoft Word format. The catch seems to be that the usual way of converting a Word document to Kindle format is to upload it using Amazon Kindle Direct Publishing. This wasn't really what I wanted to do, as I had no intention of publishing the document via Amazon. I just wanted a tool that would do a local file conversion on my machine. There are some third-party apps that claim to do that with a Word file but I picked one at random and it was pretty flaky.

My next approach was to use KindleGen, a command line tool provided by Amazon. This works on several input file formats, but not Microsoft Word. It does, however, convert HTML documents, which is one of the formats that you can export from Google Docs. The problem is that the default CSS styles of the HTML document that Google Docs gives you are not well suited to Kindle. The font sizes will be all over the place because Google Docs generates a style sheet that uses point values for font sizes that look really bad on a Kindle screen. I found when reading the document on my Kindle that only the largest font size setting was readable, and that was too big. The last thing you want is a Kindle doc that doesn’t look like the other books on the reader’s Kindle. For similar reasons I also chose to remove the font family settings, preferring to let the Kindle use its default fonts. However you can leave these alone if you want.

Another issue with the default HTML export is that a couple of useful meta tags are missing. Anyway, all this is easily fixed! What does make life a bit difficult is that Google Docs generates the names of its class styles inconsistently. Yes, that's right, every time it generates an HTML document, it randomly renames the class styles! This completely stuffs up any attempt you might make to set up a reusable style sheet. Thank you Google! (not).

Anyway, here’s the process I followed:

Google Doc to .mobi, step-by-step

1. Start with a Google Doc. Here’s a very simple one

[Screenshot: a simple Google Doc]

2. Create a new working folder. I called mine ‘kindlegen’

3. Download KindleGen from the Amazon KindleGen page. It is downloaded as a zipped archive

4. Unzip the archive into your folder

5. Export your Google Doc in HTML format: File -> Download as… -> Web Page (.html, zipped)

[Screenshot: the Google Docs download menu]

6. Unzip the HTML page into your working folder

7. Open the HTML page in a suitable HTML editor. If you don't have one, a text editor will do (though it makes it harder). Here's what it looks like in Notepad. Not very human readable as there are no line feeds. You can manually put them in if you find it easier to navigate that way. With a proper HTML editor with color syntax highlighting it's a lot easier.

[Screenshot: the exported HTML file open in Notepad]

8. You will see that near the beginning of the HTML source is a 'style' element containing a large number of internal CSS styles. I changed the font sizes of all of these as I couldn't be bothered working out which ones were actually being used in my document. You need to replace all the 'pt' values for the 'font-size' elements with 'em' values. I chose similar values, for example for 11pt, which is the standard paragraph font size, I used 1em. For 16pt headings I used 1.5em, and so on. Basically, it's more or less a divide by 10 exercise.

For example, here’s the generated entry for the paragraph (p) tag (unlike the various class styles the HTML element styles are at least consistent)

p{color:#000000;font-size:11pt;margin:0;font-family:"Arial"}

My updated version looks like this (I also removed the font-family):

p{color:#000000;font-size:1em;margin:0}

I didn’t find there was a need to replace any of the other parts of the styles. KindleGen will ignore any that don’t apply.

9. If you like, also remove all the 'font-family' entries (as above). The Kindle will be able to cope with the fonts used by the Google Doc if you leave them in.
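
Since Google Docs regenerates those class style names every time you export, it isn't worth hand-crafting a reusable style sheet; a small throwaway script is easier. Here's a rough sketch of my own in Python that automates this step and the previous one, dividing the pt font sizes by ten to get em values and stripping the font-family entries (the file name is just a placeholder for your own exported file):

# Rough clean-up of Google Docs CSS for KindleGen.
# 'myhtmlfile.html' is a placeholder; use your own exported file name.
import re

with open("myhtmlfile.html", encoding="utf-8") as f:
    html = f.read()

# Convert font sizes from points to ems (roughly a divide-by-ten exercise).
html = re.sub(r"font-size:\s*([\d.]+)pt",
              lambda m: "font-size:{0:g}em".format(float(m.group(1)) / 10),
              html)

# Drop the font-family entries so the Kindle falls back to its default fonts.
html = re.sub(r';?font-family:"[^"]*"', "", html)

with open("myhtmlfile.html", "w", encoding="utf-8") as f:
    f.write(html)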

10. By default, the 'title' element will contain the original file name, which may not actually be your preferred document title. If you need to, change the content of the 'title' element at the beginning of the file to the one you want

<title>My Book Title</title>

11. Near the top of the HTML source you should find the following 'meta' element, between the title element and the style element.

<meta content="text/html; charset=UTF-8" http-equiv="content-type">

Leave this alone, but add the following element above or beneath it:

<meta name="author" content="my name">

If you don’t do this, when the document appears in your Kindle book list, there will be no author name associated with it.

If you want your document to have a cover image (a JPEG), you will also need to add the following element

<meta name="cover" content="mycoverfile.jpg">

This assumes that your cover JPEG is going to be in the same folder as the HTML document when you convert it. If you have a cover image, add it to your working folder.

12. Open a command window in your working folder and run KindleGen against your HTML file:

kindlegen myhtmlfile.html

You may get some warnings, for example if you haven’t defined a cover image, or there are CSS styles that don’t apply. These won’t matter. In this example I didn’t provide a cover file, and the ‘max-width’ CSS property is being ignored.

[Screenshot: KindleGen running in the command window]

Assuming there are no fatal errors, the tool will create a .mobi file in the same folder.

13. Connect your Kindle using a USB cable. Navigate to the 'documents' folder on the Kindle and copy your .mobi file into it (if you want, you can put it in a subfolder; the Kindle will still pick it up).

14. Eject the Kindle and check the book list. You should find your document has been added and is readable.

Here’s my file on my elderly Kindle.


Sprinting through Lego city

I was recently asked to deliver a one-day Scrum workshop that was supposed to conclude with an agile project simulation activity, but there was no specific guidance as to which activity to use. I've used several different types of process miniature for agile project management. A few years ago I even wrote one, the Agile (Technique) Hour, with a colleague in the UK, and I use the XP Game with my students. I've also found the use of Lego to be a good way to make activities suitably tactile and creative; in particular I've used the latest version of Lego Mindstorms with my postgraduate students.

Pondering what to do in the workshop, I looked for something that used Lego but was also Scrum-specific. It didn't take long to find the Scrum Simulation with Lego by Alex Krivitsky. It's a pretty simple simulation, using the most basic Lego bricks, but works well. The team have to build a city, over three sprints, from a product backlog of story cards provided by the product owner. The best things that came out of our exercise were, I think, the following:

  1. I deliberately didn’t give the team story priorities, or business values. Like an unhelpful customer I told them all my requirements were equally important. All they had were effort points. As a consequence I ended up with a city with no schools or hospitals.
  2. In the first sprint I gave them only one of the three 'C's: the (story) card. I didn't give them either the conversation (clarifying requirements) or the confirmation (defining acceptance criteria). As a result the buildings at the end of sprint one were terrible and I rejected nearly all of them. Like a typical customer I didn't know what I wanted, but I knew that I didn't want what they had done. After the review and retrospective, quality improved hugely in the second sprint.
  3. In the second sprint the team knew much better what their task was, but their teamwork was dysfunctional. Some found themselves idle while others did their own thing. Again, following the review and retrospective, teamwork improved remarkably in sprint three.
  4. Team velocity was all over the place (the burndown chart looked like a difficult ski run), but in the end they could have done more stories in sprint three than they had scheduled. They asked if they should add in more stories from the product backlog. I told them no: if you find you finish a sprint early, go down the pub. I didn't get my schools or hospitals, but in real life, I would have a happier team.

Here’s my team’s Lego city. Note the stained glass window in the church and the wheels in the bicycle shop. Good work team!
