Refactoring Coderetreats: In Search of Simple Design

A while ago I posted a blog about the Global Day of Coderetreat. Since then I’ve been gathering and analysing data about coderetreats to see what their benefits are, and how they might be made even more effective. I’ve just written up some of this work in an article for InfoQ (thanks for the opportunity, Shane Hastie), which you can find at http://www.infoq.com/articles/refactoring-coderetreats.

The title has two meanings (sort of). In one respect it’s about changing the design of coderetreats (i.e. refactoring the coderetreat itself), and in another it’s about bringing more refactoring activities into a coderetreat in order to focus more directly on the four rules of simple design (for more detail on these in the context of a coderetreat, try Corey Haines’ book Understanding the 4 Rules of Simple Design).

I hope the article encourages more software developers to attend and run coderetreats.

Converting a Google Doc to a Kindle Format .mobi File

I recently had a document, written using Google Docs, that I wanted to make available in Kindle format (a .mobi file). The thing was, I didn’t want to publish it through Amazon; I just wanted to provide a file that could be copied to a Kindle by anyone who wanted the material in that format. It turned out to be a little more complicated than I first thought, so if anyone else wants to do the same, I’ll explain how I did it. The thing to remember is that Google and Amazon are competitors, so they’re not going to make it easy to go from one to the other, are they? No indeed…

My first thought was to export my Google Doc to Microsoft Word format. The catch is that the usual way of converting a Word document to Kindle format is to upload it using Amazon Kindle Direct Publishing. This wasn’t what I wanted to do, as I had no intention of publishing the document via Amazon. I just wanted a tool that would do a local file conversion on my machine. There are some third-party apps that claim to do that with a Word file, but the one I picked at random was pretty flaky.

My next approach was to use KindleGen, a command line tool provided by Amazon. This works on several input file formats, but not Microsoft Word. It does, however, convert HTML documents, which is one of the formats you can export from Google Docs. The problem is that the default CSS styles of the HTML document that Google Docs gives you are not well suited to Kindle. Google Docs generates a style sheet that specifies font sizes in point values, and these render badly on a Kindle screen. Reading the document on my Kindle, I found that only the largest font size setting was readable, and that was too big. The last thing you want is a Kindle doc that doesn’t look like the other books on the reader’s Kindle. For similar reasons I also chose to remove the font family settings, preferring to let the Kindle use its default fonts. However, you can leave these alone if you want.

Another issue is that a couple of useful meta tags are missing from the exported HTML. Anyway, all this is easily fixed! What does make life a bit difficult is that Google Docs generates the names of its class styles inconsistently. Yes, that’s right: every time it generates an HTML document, it randomly renames the class styles! This completely stuffs up any attempt you might make to set up a reusable style sheet. Thank you Google! (not).

Anyway, here’s the process I followed:

Google Doc to .mobi, step-by-step

1. Start with a Google Doc. Here’s a very simple one:

[Screenshot: the example Google Doc]

2. Create a new working folder. I called mine ‘kindlegen’.

3. Download KindleGen from the Amazon KindleGen page. It comes as a zipped archive.

4. Unzip the archive into your working folder.

5. Export your Google Doc in HTML format: File -> Download as… -> Web Page (.html, zipped).

[Screenshot: the Google Docs download menu]

6. Unzip the HTML page into your working folder.

7. Open the HTML page in a suitable HTML editor. If you don’t have one, a text editor will do, though it makes things harder. Here’s what it looks like in Notepad: not very readable, as there are no line breaks. You can add them manually if you find it easier to navigate that way. A proper HTML editor with syntax highlighting makes the job a lot easier.

[Screenshot: the exported HTML in Notepad]

8. Near the beginning of the HTML source you will see a ‘style’ element containing a large number of internal CSS styles. I changed the font sizes of all of these, as I couldn’t be bothered working out which ones were actually used in my document. You need to replace all the ‘pt’ values of the ‘font-size’ properties with ‘em’ values. I chose roughly equivalent values: for 11pt, the standard paragraph font size, I used 1em; for 16pt headings I used 1.5em, and so on. Basically, it’s more or less a divide-by-10 exercise.

For example, here’s the generated entry for the paragraph (p) tag (unlike the various class styles, the HTML element styles are at least consistent):

p{color:#000000;font-size:11pt;margin:0;font-family:"Arial"}

My updated version looks like this (I also removed the font-family):

p{color:#000000;font-size:1em;margin:0}

I didn’t find there was a need to replace any of the other parts of the styles. KindleGen will ignore any that don’t apply.

9. If you like, also remove all the ‘font-family’ entries (as above). The Kindle will be able to cope with the fonts used by the Google Doc if you leave them in.
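If you have Node.js installed, you could automate steps 8 and 9 with a few lines of script. This is just a rough sketch I’m offering for convenience, not a tested tool; the file name and the regular expressions are my own, so adjust them to suit your document:

// convert-styles.js: rewrite the exported HTML so font sizes use em
// instead of pt, and strip font-family rules (steps 8 and 9 above)
var fs = require('fs');

// usage: node convert-styles.js myhtmlfile.html
var file = process.argv[2];
var html = fs.readFileSync(file, 'utf8');

// font-size:11pt -> font-size:1.1em (the 'divide by 10' rule above)
html = html.replace(/font-size:\s*([\d.]+)pt/g, function (match, pts) {
    return 'font-size:' + parseFloat(pts) / 10 + 'em';
});

// remove font-family declarations so the Kindle uses its default fonts
html = html.replace(/font-family:[^;}]+;?/g, '');

fs.writeFileSync(file, html); // overwrites the original, so keep a backup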

10. By default, the ‘title’ element will contain the original file name, which may not be your preferred document title. If necessary, change the content of the ‘title’ element at the beginning of the file to the one you want:

<title>My Book Title</title>

11. Near the top of the HTML source you should find the following ‘meta’ element, between the title element and the style element:

<meta content="text/html; charset=UTF-8" http-equiv="content-type">

Leave this alone, but add the following element above or beneath it:

<meta name="author" content="my name">

If you don’t do this, when the document appears in your Kindle book list, there will be no author name associated with it.

If you want your document to have a cover image (a JPEG), you will also need to add the following element

<meta name="cover" content="mycoverfile.jpg">

This assumes that your cover JPEG is going to be in the same folder as the HTML document when you convert it. If you have a cover image, add it to your working folder.

12. Open a command window in your working folder and run KindleGen against your HTML file:

kindlegen myhtmlfile.html

You may get some warnings, for example if you haven’t defined a cover image, or there are CSS styles that don’t apply. These won’t matter. In this example I didn’t provide a cover file, and the ‘max-width’ CSS property is being ignored.

[Screenshot: KindleGen running in a command window]

Assuming there are no fatal errors, the tool will create a .mobi file in the same folder.

13. Connect your Kindle using a USB cable. Navigate to the ‘documents’ folder on the Kindle and copy your .mobi file into it (if you want, you can put it in a subfolder; the Kindle will still pick it up).

14. Eject the Kindle and check the book list. You should find your document has been added and is readable.

Here’s my file on my elderly Kindle.

[Photo: the document on my Kindle]

Sprinting through Lego city

I was recently asked to deliver a one day Scrum workshop that was supposed to conclude with an agile project simulation activity, but there was no specific guidance as to which activity to use. I’ve used several different types of process miniature for agile project management. A few years ago I even wrote one, the Agile (Technique) Hour, with a colleague in the UK, and I use the XP Game with my students. I’ve also found Lego to be a good way to make activities suitably tactile and creative; in particular I’ve used the latest version of Lego Mindstorms with my postgrad students.

Pondering what to do in the workshop, I looked for something that used Lego but was also Scrum-specific. It didn’t take long to find the Scrum Simulation with Lego Bricks by Alexey Krivitsky. It’s a pretty simple simulation, using the most basic Lego bricks, but it works well. The team have to build a city, over three sprints, from a product backlog of story cards provided by the product owner. The best things that came out of our exercise were, I think, the following:

  1. I deliberately didn’t give the team story priorities or business values. Like an unhelpful customer, I told them all my requirements were equally important. All they had were effort points. As a consequence I ended up with a city with no schools or hospitals.
  2. In the first sprint I gave them only one of the three ‘C’s: the (story) card. I didn’t give them either the conversation (clarifying requirements) or the confirmation (defining acceptance criteria). As a result the buildings at the end of sprint one were terrible and I rejected nearly all of them. Like a typical customer, I didn’t know what I wanted, but I knew that I didn’t want what they had done. After the review and retrospective, quality improved hugely in the second sprint.
  3. In the second sprint the team knew much better what their task was, but their teamwork was dysfunctional. Some found themselves idle while others did their own thing. Again, following the review and retrospective, teamwork improved remarkably in sprint three.
  4. Team velocity was all over the place (the burndown chart looked like a difficult ski run), but in the end they could have done more stories in sprint three than they had scheduled. They asked if they should add more stories from the product backlog. I told them no: if you finish a sprint early, go down the pub. I didn’t get my schools or hospitals, but in real life I would have a happier team.

Here’s my team’s Lego city. Note the stained glass window in the church and the wheels in the bicycle shop. Good work team!

[Photos: the team’s Lego city]

Mobile Citizen Science: nQuire-it Site Launched

One of the most important aspects of using a mobile device for learning is being able to use it to interact with your environment. A major part of that is the various sensors that enable you to gather data from your learning context. In the past that has not been easy: you needed to find and install various apps to access different combinations of sensors on your device.

Thankfully, the nQuire-it citizen inquiry site has now been launched to help young people develop practical science skills. The nQuire-it platform includes the Sense-it app, the first open application to unlock the full range of sensors on mobile devices, so that people of any age can do science projects on their phones and tablets.

Sense-it provides a useful list of the sensors available on your particular device. My ‘legacy’ Galaxy SIII doesn’t have anything like the full set of sensors available on some of the newest phones, but still has a reasonable selection, as this screen capture from Sense-it’s handy ‘show/hide sensors’ tool shows.

[Screenshot: the Sense-it show/hide sensors tool]

Each sensor has an associated tool within the app. These appear on the main screen.

[Screenshot: the Sense-it main screen]

Each tool makes it easy to gather data from its sensor. Here, for example, is the light sensor being used to measure the light level in my office.

[Screenshot: the light sensor tool in action]

The nQuire-it site has lots of projects where you can try out these sensors, and you can also create your own projects. This should prove a great resource for science teachers and learners.

Welcome to the Machine

It seems that, for learning designers, learning analytics (mostly using log and performance data gathered from learning management systems) is the new black. I recently attended the annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), where every fourth presentation, it seemed, had something to do with learning analytics. Much of the content of these presentations was about the ‘what’ of learning analytics: what is technically possible in gathering data about how students are learning? The next question is ‘how’: how do we use this data? Finally we have to address the ‘why’: why are we doing this, and what is our goal?

Perhaps the most interesting observation was given by Jaclyn Broadbent, talking about the Desire2Learn Intelligent Agent tool: http://ascilite2014.otago.ac.nz/sharing-practice/#78

One of the tasks of these agents is to send automated, customised emails to students: not only task reminders but also positive feedback on good performance. In other words, the system knows what the students are doing and knows how to send targeted emails that reflect this performance. The ‘why’, of course, is to provide positive feedback in the hope that this will sustain good performance. Apparently, these automated emails are very well received by the students, but hardly any of them realise that the messages are generated by a machine rather than sent personally by the course tutors. Perhaps even more interestingly, the few who did realise that these emails were automated still liked receiving them. Perhaps this is partly because the course tutors created the message templates, so their personalities were still evident in the generated emails. I’d be interested to know if this attitude still prevails as tools like this become more and more common, and the novelty factor wears off. Once every student in higher education is receiving encouraging emails sent by the machine, will they still regard them as positive and valuable? Or will they become the next generation of annoying corporate spam? I guess in the end it depends on the content. As long as we are giving students insights they might not have gained on their own, for example their performance relative to their peers on a course, our cyber-motivation may still hold its value.

The Next New Normal

I spent some time over the weekend throwing away old data CDs. Many of these were for courses I’d delivered on customer sites. These days the course tools are shared on a (soon to be obsolete) USB 2 stick. Others were archive disks for my digital photos. I didn’t quite get to throwing those out, but as I put them back in the cupboard I reflected that my current laptop doesn’t have a CD drive (though I have an external drive that I hardly ever use). These days all my photos get uploaded automatically to DropBox as soon as I’m on a WiFi network; no more cables, manual file copying and CD burning. No doubt, should I ever need these CDs again, I’ll have nothing left that can read them. My kids don’t respond to emails; I have to message them using social media. My colleagues judge each other on the dubious statistics generated automatically by the search algorithms in Google Scholar Citations. I video call people on the other side of the planet for free on a disposable mobile device. All of this is, of course, the new normal. Something happened over the last few years that moved our lives into the socio-mobile cloud, where we gave up ownership and control for convenience and immediacy.

The question I find myself asking as I trash my old CDs is: what will the next new normal be? What will happen in the future to make Facebook, WiFi, smartphones and cloud storage look like clunky old museum pieces? Relentless connectivity will be the first to arrive, since it is already well on the way. The immediate casualty will be the blessed sanctuary of the aeroplane, absorbed into the all-consuming expectations of 24/7 availability. We will lose what little control we have over our means of communication as the relative privacy of corporate email is overtaken by misguided attempts to make us more ‘social’. We will lose ownership of any and all data that we generate, as private storage becomes obsolete. We will be unable to define ourselves in any domain other than the digital; your online profiles will be more powerful than the real you. At some point, we will be required to sell the last fragments of our individuality to the needs of corporate greed and national security. If the past is anything to go by, we will do it willingly and blindly, trading our inheritance for a few trinkets.

Re-engineering coderetreats – bringing design to the fore

For the last year or so, one of my research activities has been exploring the design and delivery of coderetreats. Our first article on this topic, Coderetreats: Reflective Practice and the Game of Life, was published in IEEE Software, the first piece of academic work to be published in this area. In that article we reported on how running a standard coderetreat with our students helped develop their reflective practice. In a post late last year I mentioned the Global Day of Coderetreat. We also gathered data from that event, which raised some interesting questions about how well these activities support the use of test driven development (TDD) and learning about the four principles of simple design. The coderetreat website (coderetreat.org) says that ‘Practicing the basic principles of modular and object-oriented design, developers can improve their ability to write code that minimizes the cost of change over time.’ These principles are outlined elsewhere as the ‘XP Simplicity Rules’. However, there was some evidence from our research that the usual approaches to coderetreats were not particularly effective at making these design rules explicitly understood. We also observed that many participants struggled to get started with their first unit test.

To try to address these issues, we re-engineered our most recent coderetreat so that it explicitly scaffolded the intended learning outcomes. For the first session we provided the first unit test, and suggested the second. This had a remarkable effect on how quickly the participants got into the TDD cycle. We also designed each session so that it directly addressed one of the four principles of simple design, by providing various code and test components, building on the concept of the legacy coderetreats that have been run by others. In fact the last session was very much in the legacy coderetreat style: we provided some poorly written ‘legacy code’ without tests, which the participants had to refactor by first adding unit tests.
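To give a flavour of what that looks like (this is a JavaScript and QUnit sketch rather than the exact test we handed out), a first Game of Life test might simply pin down the underpopulation rule, together with one minimal implementation that makes it pass:

// the test comes first; nextCellState doesn't exist when you write it,
// which is the point of TDD (assumes QUnit is loaded on the page)
QUnit.test('a live cell with fewer than two live neighbours dies', function (assert) {
    assert.strictEqual(nextCellState(true, 1), false);
});

// a minimal implementation to make the test pass
function nextCellState(alive, liveNeighbours) {
    if (alive && liveNeighbours < 2) {
        return false; // underpopulation
    }
    return alive;
}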

We have yet to analyse the data we gathered in detail, but we do believe that there is a lot of scope to take the coderetreat idea forward with new ways of ensuring that the key elements of design understanding are made explicit in the outcomes.

Why regression testing matters

Quite some time ago I posted a whinging blog about the software I have to use as editor of a journal. Over the last couple of years I have to acknowledge that the occasional improvement has occurred. For example, it is now possible to see how many review requests each reviewer has responded to, and the dates of the reviews they returned. This is great, though one issue is that these changes just appear, without any warning. At least when your phone updates its apps it tells you it’s doing it, even if it only asks whether that’s OK when there are new permissions to approve.

Anyway, to the point of this post, which is regression. Of course the term has a number of meanings. I’m not referring here to the disturbing kind of regression found in Freudian analysis, or to the statistical process for estimating the relationships among variables. I mean, of course, regression in the software sense, where an update to software reveals bugs that were not there in the previous version. A recent update to the journal software saw a complete change to the appearance of the main web page. Whether this was an improvement from an aesthetic or functional perspective I’m not sure, but the main thing I noticed was that the system had now forgotten all of the authors of all the papers submitted to the journal. Remembering who wrote each paper strikes me as a not unimportant feature of a journal submission system.

The point of this observation, of course, is that it highlights the great importance of regression testing. The danger of software maintenance is always that it may inadvertently break something that worked perfectly well before. Without regression tests you may not find out what you have broken until it is too late (e.g. you have forgotten every author of every paper in your journal submission system).
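To make that concrete, here’s an entirely hypothetical sketch (in JavaScript with QUnit; the journal system isn’t mine to test, and all the names here are invented) of the kind of regression test that would have caught the problem:

// a tiny in-memory 'submission store' standing in for the real system
function SubmissionStore() {
    this.items = {};
}
SubmissionStore.prototype.add = function (submission) {
    this.items[submission.id] = submission;
};
SubmissionStore.prototype.get = function (id) {
    return this.items[id];
};

// imagine this is the 'improvement' that shipped with the redesign
function migrate(store) {
    for (var id in store.items) {
        var s = store.items[id];
        store.items[id] = { id: s.id, title: s.title }; // oops: authors dropped
    }
}

// the regression test pins down behaviour the update must not break
QUnit.test('submissions keep their authors after a migration', function (assert) {
    var store = new SubmissionStore();
    store.add({ id: 42, title: 'A Paper', authors: ['A. Author'] });
    migrate(store);
    assert.deepEqual(store.get(42).authors, ['A. Author'],
        'authors must survive the migration');
});

Run before the ‘migration’ ships, this test fails, and the bug never reaches the journal’s users.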

Similar stories have been circulating about the rather more high profile Apple iOS 8 operating system update, which among other failures prevented people from making phone calls, still surely the main point of having a mobile phone: http://www.theguardian.com/technology/2014/sep/24/apple-ios8-update-withdrawn-iphones-bugs
This is not the first time an Apple update has broken a major feature, as I discussed in an earlier post about problems with FaceTime.

Effective and comprehensive regression testing is not easy, and not something that can be tacked on at the end of the testing process. Rather, it has to be part of a full testing strategy that starts with unit testing and follows through integration testing, acceptance testing, performance testing, exploratory testing, security testing and so on. However, it is the last barrier between the software bugs and the customer, and deserves more attention than it appears to be getting from some software development teams.

JavaScript as a first programming language? Some pros and cons

Recently I was working with a group of JavaScript developers in Australia. One of them observed (from maintaining code written by other developers) that he felt people coded in JavaScript in various styles, depending on their programming background, and that this stylistic mashup was perhaps a consequence of the fact that no-one used JavaScript as their first programming language, so they brought their stylistic baggage with them from a range of other ‘first’ languages.

Now I don’t know if there is actually no-one out there who first learned to program with JavaScript, but I suspect there would be very few, for good reason. Historically, JavaScript has been hard to write (no decent development environments), hard to debug, hard to test, and the browser runtimes were slow and flaky. That is no longer the case. IDEs like WebStorm make it easy to develop code, and when it is used with the plugin for Chrome, it also provides a full debugging environment. There are now a range of test tools available, including QUnit, and the quality of JavaScript engines in browsers has increased hugely.

So, would JavaScript be a good first programming language? It has some nice features that make it seem attractive. It supports higher-order functions that can be passed around using variables, variadic functions for variable length parameter lists, and closures. You could teach people to use functions without the object-oriented baggage of something like Java. Once you do want to use objects, its type system does not support all the features of a classical inheritance-based language, but on the other hand it is dynamic, so complex data types can be reconstructed at run time, a really cool feature.
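A few lines of code illustrate these features (a quick sketch, nothing beyond plain ES5):

// a higher-order function passed around in a variable
var twice = function (f, x) { return f(f(x)); };
var inc = function (n) { return n + 1; };
console.log(twice(inc, 3)); // 5

// a variadic function using the arguments object
function sum() {
    var total = 0;
    for (var i = 0; i < arguments.length; i++) {
        total += arguments[i];
    }
    return total;
}
console.log(sum(1, 2, 3, 4)); // 10

// a closure: the local variable survives between calls
function makeCounter() {
    var count = 0;
    return function () { return ++count; };
}
var next = makeCounter();
console.log(next(), next()); // 1 2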

What about the downsides? Well, there are still a few. The loose type system is a trap for the unwary, as is the lack of block scoping in favour of function scoping (though the ‘let’ keyword will address this once the major browsers all support it), and another danger is the ease with which global variables can be created, either deliberately or accidentally. Wacky features like hoisting (where you can use a variable before you declare it) might also confuse the beginner, and running your code in a browser risks distracting attention from the basics of the language to the details of the UI.
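Again, a quick sketch shows how easily these traps catch the beginner:

// hoisting: the declaration of x is hoisted, its value is not
console.log(x); // undefined, not an error
var x = 10;

// function scoping: i leaks out of the loop
for (var i = 0; i < 3; i++) { /* ... */ }
console.log(i); // 3, still in scope

// an accidental global: a missing 'var' inside a function
function oops() {
    leaked = 'whoops'; // creates a global (outside strict mode)
}
oops();
console.log(leaked); // 'whoops'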

Some of these issues might be addressed with tools like Microsoft’s TypeScript language, which brings type checking to JavaScript, and tedious browser document navigation and UI issues are simplified by libraries such as jQuery.

So, would I want to try teaching JavaScript as a first language? It probably depends on the type of class. For students at the ‘softer’/applied end of computing, learning the pre-eminent language of the Web, with quick and easy routes to seeing something useful happening in a browser, might not be such a bad place to start.