Month: December 2014

Mobile Citizen Science: nQuire-it Site Launched

One of the most important aspects of using a mobile device for learning is being able to use it to interact with your environment, and a big part of that is the array of sensors that let you gather data from your learning context. That hasn’t always been easy: in the past you had to find and install a patchwork of apps, each giving access to a different combination of sensors on your device.

Thankfully, the nQuire-it citizen inquiry site has now been launched to help young people develop practical science skills. The platform includes the Sense-it app, the first open application to unlock the full range of sensors on mobile devices, so that people of any age can do science projects on their phones and tablets.

Sense-it provides a useful list of the sensors available on your particular device. My ‘legacy’ Galaxy SIII doesn’t have anything like the full set of sensors available on some of the newest phones, but still has a reasonable selection, as this screen capture from Sense-it’s handy ‘show/hide sensors’ tool shows.

[Screenshot: Sense-it’s ‘show/hide sensors’ tool listing the sensors available on my Galaxy SIII]
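For anyone curious about what sits under the hood, here’s a minimal sketch of how an Android app like Sense-it might build such a list, using the standard SensorManager API (illustrative only, not Sense-it’s actual code):

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

import java.util.List;

public class SensorInventoryActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask the system for every sensor the hardware reports.
        SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
        List<Sensor> sensors = sm.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : sensors) {
            Log.i("SensorInventory", s.getName() + " (" + s.getVendor() + ")");
        }
    }
}
```

A query like this returns whatever the hardware exposes, which is why the inventory varies so much from my SIII to the newest handsets.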

Each sensor has an associated tool within the app. These appear on the main screen.

[Screenshot: the Sense-it main screen, with a tool for each available sensor]

Each tool makes it easy to gather data from its sensor. Here, for example, is the light sensor being used to measure the light level in my office.

[Screenshot: the light sensor tool measuring the light level in my office]
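Reading live values works in much the same way. Here’s a similar sketch of how a light tool might take its readings through Android’s standard SensorEventListener interface (again, an illustration rather than Sense-it’s actual implementation):

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

public class LightMeterActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private Sensor lightSensor;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (lightSensor != null) {
            // Start receiving ambient light readings.
            sensorManager.registerListener(this, lightSensor,
                    SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // stop listening to save battery
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // For the light sensor, values[0] is the illuminance in lux.
        Log.i("LightMeter", "Light level: " + event.values[0] + " lx");
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this simple example.
    }
}
```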

The nQuire-it site has lots of projects where you can try out these sensors, and you can also create your own projects. This should prove a great resource for science teachers and learners.

Welcome to the Machine

It seems that, for learning designers, learning analytics (mostly using log and performance data gathered from learning management systems) is the new black. I recently attended the annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), where every fourth presentation, it seemed, had something to do with learning analytics. Much of the content of these presentations was about the ‘what’ of learning analytics: what is technically possible in gathering data about how students are learning? The next question is ‘how’: how do we use this data? Finally, we have to address the ‘why’: why are we doing this, and what is our goal?

Perhaps the most interesting observation came from Jaclyn Broadbent, talking about the Desire2Learn Intelligent Agent tool: http://ascilite2014.otago.ac.nz/sharing-practice/#78

One of the tasks of these agents is to send automated, customised emails to students: not only task reminders but also positive feedback on good performance. In other words, the system knows what each student is doing and can send targeted emails that reflect that performance. The ‘why’, of course, is to provide positive feedback in the hope that this will sustain good performance. Apparently, these automated emails are very well received by the students, yet hardly any of them realise that the messages are generated by a machine rather than sent personally by the course tutors. Perhaps even more interestingly, the few who did realise that the emails were automated still liked receiving them. Perhaps this is partly because the course tutors created the message templates, so their personalities were still evident in the generated emails.

I’d be interested to know whether this attitude still prevails as tools like this become more and more common and the novelty factor wears off. Once every student in higher education is receiving encouraging emails sent by the machine, will they still regard them as positive and valuable? Or will they become the next generation of annoying corporate spam? I guess in the end it depends on the content. As long as we are giving students insights they may not have gained on their own, for example their relative performance compared to their peers on a course, our cyber-motivation may still hold its value.
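To make the mechanics concrete, here is a toy sketch of the kind of rule such an agent might apply. Everything in it (the EncouragementAgent class, the PRAISE_THRESHOLD cut-off, the template text) is hypothetical, invented for illustration; it is not the Desire2Learn API:

```java
/**
 * A toy "intelligent agent" rule, with hypothetical names throughout
 * (this is NOT the Desire2Learn API): if a student's score clears a
 * threshold, fill a tutor-written template with their details.
 */
public class EncouragementAgent {
    private static final double PRAISE_THRESHOLD = 80.0; // assumed cut-off

    // Template written by the course tutor; the agent only fills the blanks,
    // so the tutor's voice survives in every generated message.
    private static final String TEMPLATE =
        "Hi %s,%n%nGreat work on %s - you scored %.0f%%. Keep it up!%n%nYour tutor";

    public static String draftEmail(String student, String task, double score) {
        if (score < PRAISE_THRESHOLD) {
            return null; // below threshold: this rule stays silent
        }
        return String.format(TEMPLATE, student, task, score);
    }

    public static void main(String[] args) {
        String email = draftEmail("Alex", "the Week 3 quiz", 92.0);
        if (email != null) {
            System.out.println(email); // a real LMS would queue this for sending
        }
    }
}
```

The point is that the ‘personality’ students respond to lives entirely in the tutor-written template; the machine just fills in the blanks.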