Year: 2014

Mobile Citizen Science: nQuire-it Site Launched

One of the most important aspects of using a mobile device for learning is being able to use it to interact with your environment. A major part of that is the various sensors that enable you to gather data from your learning context. That has not always been easy: in the past you needed to find and install a patchwork of apps, each giving access to a different combination of the sensors on your device.

Thankfully, the nQuire-it citizen inquiry site has been launched to help young people develop practical science skills. The nQuire-it platform includes the Sense-it app, the first open application to unlock the full range of sensors on mobile devices, so that people of any age can do science projects on their phones and tablets.

Sense-it provides a useful list of the sensors available on your particular device. My ‘legacy’ Galaxy SIII doesn’t have anything like the full set of sensors available on some of the newest phones, but still has a reasonable selection, as this screen capture from Sense-it’s handy ‘show/hide sensors’ tool shows.

[Screenshot: Sense-it's 'show/hide sensors' list on my Galaxy SIII]

Each sensor has an associated tool within the app. These appear on the main screen.

[Screenshot: the main Sense-it screen, with a tool for each sensor]

Each tool makes it easy to gather data from its sensor. Here, for example, is the light sensor tool being used to measure the light level in my office.

[Screenshot: the light sensor tool measuring the light level in my office]
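As an aside, browsers are starting to expose some of the same sensors to JavaScript. This is not how Sense-it works (it's a native Android app), but just as a sketch of the idea, here is how a web page could read the light and motion sensors, assuming a browser that supports the 'devicelight' and 'devicemotion' events (Firefox for Android does, at the time of writing):

    // A sketch only: reading two device sensors from a web page.
    // 'devicelight' reports ambient light in lux;
    // 'devicemotion' reports acceleration in m/s^2.
    window.addEventListener('devicelight', function (event) {
        console.log('Light level: ' + event.value + ' lux');
    });

    window.addEventListener('devicemotion', function (event) {
        var a = event.accelerationIncludingGravity;
        console.log('Acceleration: x=' + a.x + ', y=' + a.y + ', z=' + a.z);
    });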

The nQuire-it site has lots of projects where you can try out these sensors, and you can also create your own projects. This should prove a great resource for science teachers and learners.

Welcome to the Machine

It seems that, for learning designers, learning analytics (mostly using log and performance data gathered from learning management systems) is the new black. I recently attended the annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), where every fourth presentation, it seemed, had something to do with learning analytics. Much of the content of these presentations was about the 'what' of learning analytics: what is it technically possible to gather about how students are learning? The next question is 'how': how do we use these data? Finally, we have to address the 'why': why are we doing this, and what is our goal?

Perhaps the most interesting observation came from Jaclyn Broadbent, talking about the Desire2Learn Intelligent Agent tool: http://ascilite2014.otago.ac.nz/sharing-practice/#78

One of the tasks of these agents is to send automated, customised emails to students: not only task reminders but also positive feedback on good performance. In other words, the system knows what the students are doing and knows how to send targeted emails that reflect this performance. The 'why', of course, is to provide positive feedback in the hope that this will sustain good performance. Apparently, these automated emails are very well received by the students, but hardly any of them realise that the messages are generated by a machine rather than sent personally by the course tutors. Perhaps even more interestingly, the few who did realise that these emails were automated still liked receiving them. Perhaps this is partly because the course tutors created the message templates, so their personalities were still evident in the generated emails. I'd be interested to know if this attitude still prevails as tools like this become more and more common, and the novelty factor wears off. Once every student in higher education is receiving encouraging emails sent by the machine, will they still regard them as positive and valuable? Or will they become the next generation of annoying corporate spam? I guess in the end it depends on the content. As long as we are giving students insights they may not have gained on their own, for example their relative performance compared to their peers on a course, our cyber-motivation may still hold its value.
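I have no idea how Desire2Learn implements its agents, but the basic pattern is easy to imagine: a rule that matches on performance data, and a tutor-written template whose blanks the machine fills in for each student. A minimal sketch in JavaScript, with all the names invented:

    // Hypothetical sketch of an 'intelligent agent' email rule: the tutor
    // writes the template, the machine fills in the blanks per student.
    var template = 'Hi {name}, great work on {quiz} - you scored {score}%. Keep it up!';

    function fillTemplate(template, student) {
        return template.replace(/{(\w+)}/g, function (match, key) {
            return student[key];
        });
    }

    // The agent's rule: praise anyone scoring 80% or more on the latest quiz.
    function runAgent(students, sendEmail) {
        students.filter(function (s) { return s.score >= 80; })
                .forEach(function (s) { sendEmail(s.email, fillTemplate(template, s)); });
    }

    // Example run, with a stubbed mailer standing in for the real email system.
    runAgent(
        [{ name: 'Alice', email: 'alice@example.com', quiz: 'Quiz 3', score: 92 }],
        function (to, body) { console.log('To ' + to + ': ' + body); }
    );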

The Next New Normal

I spent some time over the weekend throwing away old data CDs. Many of these were for courses I'd delivered on customer sites; these days the course tools are shared on a (soon to be obsolete) USB 2 stick. Others were archive disks for my digital photos. I didn't quite get to throwing those out, but as I put them back in the cupboard I reflected that my current laptop doesn't have a CD drive (though I have an external drive that I hardly ever use). These days all my photos get uploaded automatically to Dropbox as soon as I'm on a WiFi network; no more cables, manual file copying and CD burning. No doubt, should I ever need these CDs again, I'll have nothing left that can read them. My kids don't respond to emails; I have to message them using social media. My colleagues judge each other on the dubious statistics generated automatically by the search algorithms in Google Scholar Citations. I video call people on the other side of the planet for free on a disposable mobile device. All of this is, of course, the new normal. Something happened over the last few years that moved our lives into the socio-mobile cloud, where we gave up ownership and control for convenience and immediacy.

The question I find myself asking as I trash my old CDs is: what will the next new normal be? What will happen to us in the future that will make Facebook, WiFi, smartphones and cloud storage look like clunky old museum pieces? Relentless connectivity will be the first to arrive, since it is already well on the way. The immediate casualty will be the blessed sanctuary of the aeroplane, absorbed into the all-consuming expectations of 24/7 availability. We will lose what little control we have over our means of communication as the relative privacy of corporate email gets overtaken by misguided attempts to make us more 'social'. We will lose ownership of any and all data that we generate, as private storage becomes obsolete. We will be unable to define ourselves in any domain other than the digital; your online profiles will be more powerful than the real you. At some point, we will be required to sell the last fragments of our individuality to the needs of corporate greed and national security. If the past is anything to go by, we will do it willingly and blindly, trading our inheritance for a few trinkets.

Re-engineering coderetreats – bringing design to the fore

For the last year or so, one of my research activities has been exploring the design and delivery of coderetreats. Our first article on this topic, 'Coderetreats: Reflective Practice and the Game of Life', was published in IEEE Software, the first piece of academic work to be published in this area. In that article we reported on how running a standard coderetreat with our students helped develop their reflective practice. In a post late last year I mentioned the Global Day of Coderetreat. We also gathered data from that event, which raised some interesting questions about how well these activities support the use of test-driven development (TDD) and learning about the four principles of simple design. The coderetreat website (coderetreat.org) says that 'Practicing the basic principles of modular and object-oriented design, developers can improve their ability to write code that minimizes the cost of change over time.' These principles are outlined elsewhere as the 'XP Simplicity Rules'. However, there was some evidence from our research that the usual approaches to coderetreats were not particularly effective at making these design rules explicitly understood. We also observed that many participants struggled to get started with their first unit test.

To try to address these issues, we re-engineered our most recent coderetreat so that it scaffolded the intended learning outcomes explicitly. For the first session we provided the first unit test, and suggested the second. This had a remarkable effect on how quickly the participants got into the TDD cycle. We also designed each session so that it directly addressed one of the four principles of simple design, by providing various code and test components, building on the concept of the legacy coderetreats that have been run by others. In fact, the last session was very much in the legacy coderetreat style: we provided some poorly written 'legacy code' without tests, which the participants had to refactor by first adding unit tests.
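To give a flavour of the scaffolding (this is an illustration rather than the exact test we handed out), the first provided test might pin down the under-population rule, with the suggested second test covering survival. In QUnit:

    // Illustrative first tests for Conway's Game of Life, in QUnit.
    // nextState(alive, liveNeighbours) is the function the pairs build up.
    QUnit.test('a live cell with fewer than two live neighbours dies', function (assert) {
        assert.equal(nextState(true, 1), false);
    });

    QUnit.test('a live cell with two or three live neighbours survives', function (assert) {
        assert.equal(nextState(true, 2), true);
        assert.equal(nextState(true, 3), true);
    });

    // The simplest implementation that passes both tests - a TDD starting point.
    function nextState(alive, liveNeighbours) {
        return alive && (liveNeighbours === 2 || liveNeighbours === 3);
    }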

We have yet to analyse the data we gathered in detail, but we do believe that there is a lot of scope to take the coderetreat idea forward with new ways of ensuring that the key elements of design understanding are made explicit in the outcomes.

Why regression testing matters

Quite some time ago I posted a whinging blog about the software I have to use as editor of a journal. Over the last couple of years I have to acknowledge that the occasional improvement has occurred. For example, it is now possible to see how many review requests each reviewer has responded to, and the dates of the reviews they returned. This is great, though one issue is that these changes just appear, without any warning. At least when your phone updates its apps it tells you it's doing so, even if it only asks whether that's OK when there are new permissions to approve.

Anyway, to the point of this post, which is regression. Of course the term has a number of meanings. I'm not referring here to sexually disturbing regression in Freudian analysis, or to the statistical process for estimating the relationships among variables. I mean, of course, regression in the software sense, where an update to software reveals bugs that were not there in the previous version. A recent update to the journal software saw a complete change to the appearance of the main web page. Whether this was an improvement from an aesthetic or functional perspective I'm not sure, but the main thing I noticed was that the system has now forgotten all of the authors of all the papers submitted to the journal. This struck me as a not unimportant feature of a journal submission system.

The point of the above observation is of course that it highlights the great importance of regression testing. The danger of software maintenance is always that it may inadvertently break something that worked perfectly well before. Without regression tests you may not find out what you have broken until it is too late (e.g. you have forgotten every author of every paper in your journal submission system.)
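A regression test for the bug above needn't be complicated. Here's a sketch in QUnit, with an invented updateSubmissionMetadata function standing in for whatever the maintenance change actually touched:

    // Sketch of a regression test: pin down behaviour that once broke, so that
    // it cannot silently break again. All names here are invented.
    function updateSubmissionMetadata(submission, changes) {
        // The maintenance change must merge new fields without losing old ones.
        var updated = {};
        Object.keys(submission).forEach(function (k) { updated[k] = submission[k]; });
        Object.keys(changes).forEach(function (k) { updated[k] = changes[k]; });
        return updated;
    }

    QUnit.test('updating metadata does not forget the authors', function (assert) {
        var submission = { title: 'A Paper', authors: ['Smith', 'Jones'] };
        var updated = updateSubmissionMetadata(submission, { status: 'under review' });
        assert.deepEqual(updated.authors, ['Smith', 'Jones']);
    });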

Similar stories have been circulating about the rather more high profile Apple iOS 8 operating system update, which among other failures prevented people from making phone calls, still surely the main point of having a mobile phone. http://www.theguardian.com/technology/2014/sep/24/apple-ios8-update-withdrawn-iphones-bugs
This is not the first time an Apple update has broken a major feature, as I discussed in an earlier post about problems with FaceTime.

Effective and comprehensive regression testing is not easy, and not something that can be tacked on at the end of the testing process. Rather, it has to be part of a full testing strategy that starts with unit testing and follows through integration testing, acceptance testing, performance testing, exploratory testing, security testing and so on. However, it is the last barrier between the software bugs and the customer, and deserves more attention than it appears to be getting from some software development teams.

JavaScript as a first programming language? Some pros and cons

Recently I was working with a group of JavaScript developers in Australia. One of them observed (from maintaining code written by other developers) that he felt people coded in JavaScript in various styles, depending on their programming background, and that this stylistic mashup was perhaps a consequence of the fact that no-one used JavaScript as their first programming language, so they brought their stylistic baggage with them from a range of other ‘first’ languages.

Now I don’t know if there is actually no-one out there who first learned to program with JavaScript, but I suspect there would be very few, for good reason. Historically, JavaScript has been hard to write (no decent development environments), hard to debug, hard to test, and the browser runtimes were slow and flaky. That is no longer the case. IDEs like WebStorm make it easy to develop code, and when it is used with the plugin for Chrome, it also provides a full debugging environment. There are now a range of test tools available, including QUnit, and the quality of JavaScript engines in browsers has increased hugely.

So, would JavaScript be a good first programming language? It has some nice features that would make it seem attractive. It supports higher-order functions that can be passed around using variables, it supports variadic functions for variable-length parameter lists, and it supports closures. You could teach people to use functions without the object-oriented baggage of something like Java. Once you do want to use objects, its type system does not support all the features of a classical inheritance-based language, but on the other hand it is dynamic, so complex data types can be constructed and reshaped at run time, a really cool feature.
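A few lines of code are enough to show off all of these features (just a sketch):

    // Higher-order functions: functions are values that can be passed around.
    var twice = function (f, x) { return f(f(x)); };
    var inc = function (x) { return x + 1; };
    console.log(twice(inc, 3)); // 5

    // Variadic functions: 'arguments' holds however many values were passed.
    function sum() {
        var total = 0;
        for (var i = 0; i < arguments.length; i++) { total += arguments[i]; }
        return total;
    }
    console.log(sum(1, 2, 3, 4)); // 10

    // Closures: the returned function remembers 'count' between calls.
    function makeCounter() {
        var count = 0;
        return function () { return ++count; };
    }
    var next = makeCounter();
    console.log(next(), next()); // 1 2

    // Dynamic objects: properties can be added or removed at run time.
    var point = { x: 1 };
    point.y = 2;    // extend the object
    delete point.x; // reshape it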

What about the downsides? Well, there are still a few. The loose type system is a trap for the unwary, as is the lack of block scoping in favour of function scoping (though the 'let' keyword will address this once the major browsers all support it), and another danger is the ease with which global variables can be created, either deliberately or accidentally. Wacky features like hoisting (where you can use a variable before you declare it) might also confuse the beginner, and having to run your code in a browser might distract attention from the basics of the language towards the UI.
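Each of these traps takes only a line or two to fall into:

    // Hoisting: the declaration of 'x' is moved to the top of the function,
    // but not its assignment, so this logs undefined rather than failing.
    function hoisted() {
        console.log(x); // undefined, not a ReferenceError
        var x = 5;
    }

    // Function scope, not block scope: 'i' leaks out of the loop.
    function leaky() {
        for (var i = 0; i < 3; i++) { /* ... */ }
        return i; // 3 - the loop variable is still in scope
    }

    // Accidental globals: forgetting 'var' silently creates a global
    // variable (in non-strict mode).
    function careless() {
        total = 42; // no 'var' - 'total' becomes a property of the global object
    }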

Some of these issues might be addressed with tools like Microsoft’s TypeScript language, which brings type checking to JavaScript, and tedious browser document navigation and UI issues are simplified by libraries such as jQuery.

So, would I want to try teaching JavaScript as a first language? It probably depends on the type of class. For students at the ‘softer’/applied end of computing, learning the pre-eminent language of the Web, with quick and easy routes to seeing something useful happening in a browser, might not be such a bad place to start.

Supervisor-Research Student Translation Guide

Recently we had an issue with one of our postgraduate students who was disappointed with the grade he received for his research report. He had been under the impression from his supervisor that his work was of a high standard, but it was assessed as being very poor. It struck me that this unfortunate circumstance may well have been a simple matter of miscommunication rather than anything else. It reminded me of a document, of uncertain provenance, that claims to be an 'Anglo-EU translation guide' and appears on many sites across the web. It consists of a table with the following columns: 'what the British say', 'what the British mean' and 'what others understand'. With acknowledgements to the originators, I have put together my own version, which attempts to provide a 'supervisor-research student translation guide'.

What the research supervisor says | What the research supervisor means | What the research student thinks has been said
You have chosen an interesting topic | The project is both boring and impossible | I am a genius
You may have missed one or two references | You have not read or understood anything | My literature review is almost complete
You may need to sharpen the focus of your conclusions | Your work has made no contribution whatsoever | My conclusions are important
I hear what you say | I disagree entirely and do not expect to see this in the thesis | I must put this in the thesis
With the greatest respect… | I think you are an idiot | They are listening to me
That's not bad | That's good | That's poor
That is a very brave proposal | You are insane | They think I have courage
Quite good | A bit disappointing | Quite good
I would suggest… | Do it or you will fail | Think about the idea, but do what you like
Oh, incidentally/by the way | The primary purpose of our discussion is… | That is not very important
I was a bit disappointed that | I am annoyed that | It really doesn't matter
Very interesting | That is clearly nonsense | They are impressed
I'll bear it in mind | I've forgotten it already | They will probably do it
I may have misunderstood, but… | You have completely missed the point | They have misunderstood
Perhaps you should write this up as a paper | If you won't listen to me, perhaps a rejection will wake you up | They think my work is publishable
I almost agree | I don't agree at all | They agree
I only have a few minor comments | Please re-write completely | They have found a few typos
Could we consider some other options? | I don't like your idea | They have not yet decided
Correct me if I'm wrong | I'm right, don't contradict me | They are wrong and need correcting
Up to a point | Not in the slightest | Partially
It's time to start nominating your examiners | Damn, we're running out of time | I am nearly finished
The examiners have suggested a few changes | You deserved to fail but they have generously given you six months to try to salvage the thesis | I have a doctorate, I will start applying for jobs

Faking your location – how and why

One of the many interesting types of app you can download on your phone is one that will fake your GPS location. There are quite a few of these on the app stores, but they all work in much the same way. If you turn off the network based location services on your phone, but leave the GPS service on, these apps can fool any other app that uses your GPS location into thinking you are somewhere else (and there are plenty of these – think of all those permissions you say ‘yes’ to without reading them every time you install a new app.) When you run a fake GPS app, all you have to do is click on a map of where you want to pretend to be and hey presto, your boss thinks you’re at a client, your partner thinks you’re working late, your accounts department thinks you’re staying in a cheap hotel and your friends think you’re on holiday in the Cayman Islands.

Well, those seem to be the kind of use cases that people suggest on the Web. I had a rather more humdrum requirement. I was updating a mobile learning game that we have been developing for a long time, and converting it to use the most recent version of the Google Maps API. Unfortunately this renders it impossible to test map based apps in desktop device emulators without some rather nasty hacks, which are technically illegal. The only sensible option is to test your apps on a device, but how do I test a location based app when I’m sitting in the office with the phone connected to the laptop through a cable?

The answer is, of course, to fake your GPS location with one of the aforementioned apps. I used 'Fake GPS Fake Location' by Andev, but there are plenty of others (with very similar names). First, I ran the app, then just moved the red dot to where I wanted to pretend to be. In this case, I wanted to generate some screen captures of the game being used at Hanyang University in Seoul, Korea, where we had been testing the game, so I positioned the dot there. Here's what the screen looks like in the Fake Location app.

[Screenshot: the Fake Location app with the red dot placed at Hanyang University, Seoul]

Next, I ran my own app, and it was fooled into thinking I was in Seoul. Here’s my app, running on the device in Auckland, reading the GPS as normal, but getting fooled.

[Screenshot: our app running on a device in Auckland, reading the faked GPS location as Seoul]

Faking your GPS location may have all kinds of strange and sneaky uses, but from the perspective of testing location based mobile apps it’s a great tool.
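Incidentally, the same trick has a browser-side equivalent: a web app that calls navigator.geolocation can be fed fake coordinates in a test by substituting a stub for the real service. A sketch, with an invented whereAmI function standing in for real app code:

    // Sketch: a fake geolocation service for testing a browser-based app.
    var fakeGeolocation = {
        getCurrentPosition: function (success) {
            // Hanyang University, Seoul (approximate coordinates).
            success({ coords: { latitude: 37.557, longitude: 127.045 } });
        }
    };

    function whereAmI(geolocation, callback) {
        geolocation.getCurrentPosition(function (position) {
            callback(position.coords.latitude, position.coords.longitude);
        });
    }

    // In production, pass the real navigator.geolocation; in tests, the fake.
    whereAmI(fakeGeolocation, function (lat, lng) {
        console.log('Pretending to be at ' + lat + ', ' + lng);
    });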

Validating Hofstede? Some reflections on cultural differences

This semester, for the second year running, I tried out a small experiment with one of my classes to see if the students' cultural profiles matched Hofstede's results for New Zealand:

http://geert-hofstede.com/new-zealand.html

I asked the students to fill in the VSM94 questionnaire, then enter their responses (anonymously) into a shared Google doc that calculated the cultural indices according to the algorithms outlined in the VSM94 manual (the general shape of that calculation is sketched at the end of this post). The sample size was only 30, so take the results with a pinch of salt, but the outcome was very interesting, and replicated the results from last year. Despite the fact that the majority of my students were not born in New Zealand, the results of our survey correlated quite closely with Hofstede's figures, apart from masculinity:

Cultural Measure | Hofstede Result | Student Result
Power Distance | 22 | 20
Individualism | 79 | 72
Masculinity | 58 | 33
Uncertainty Avoidance | 49 | 58

Certainly wider New Zealand society has masculine traits, with a strong emphasis on competing in sport, aggressive outdoor activities and terrible driving, but perhaps students have a different subculture? However, it is the similarities that interest me more than the differences, suggesting that we absorb the culture around us quite quickly, given how short a time some of my students have been in New Zealand.
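For anyone curious about the arithmetic in the shared Google doc: each VSM94 index is essentially a weighted sum of mean scores on particular questions, plus a constant. The sketch below shows only the shape of the calculation; the weights and question numbers here are placeholders, and the real coefficients are in the VSM94 manual.

    // Shape of a VSM94-style index: a weighted sum of mean question scores
    // plus a constant. Weights and question numbers below are placeholders.
    function mean(scores) {
        return scores.reduce(function (a, b) { return a + b; }, 0) / scores.length;
    }

    // responses[q] holds the class's answers (1-5) to question q.
    function indexScore(responses, weights, constant) {
        return Object.keys(weights).reduce(function (total, q) {
            return total + weights[q] * mean(responses[q]);
        }, constant);
    }

    // Hypothetical example with placeholder weights and constant.
    var score = indexScore(
        { q1: [2, 3, 2], q2: [4, 4, 5], q3: [3, 2, 3], q4: [1, 2, 2] },
        { q1: 30, q2: -20, q3: 25, q4: -35 },
        50
    );
    console.log(score);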