It's all gone quantum at Digimakers

Last Saturday I had a great time at Bristol's Digimakers. I regularly attend this superb event, running a stand and getting the opportunity to talk computers and science with children, parents and teachers. This time around I focused on Behaviour-Driven Development (which I've covered before) with a side order of LED and ePaper displays for the Raspberry Pi and Pi Zero from Pi-Supply.
Digimakers June 2016

Several organisations and lots of students ran demonstrations, workshops and drop-in help sessions throughout the day. This is something especially neat about Digimakers: it's not focussed on a single technology as the supposed solution to all scenarios, but instead showcases lots of complementary technologies. We had Raspberry Pi, Arduino, custom things that I don't quite understand, and more besides, all used as the basis for a number of very interesting projects.

The computer science and engineering students from the University of Bristol continue to impress. Anthony really hit the nail on the head with his sound wave generator, which produced a fantastic musical accompaniment for the day when hooked up to Apple's Logic Pro X. If you're reading this and looking to hire an audio engineer then he definitely deserves the job!
Andrew's marvellous musical vibrations

Directly opposite was Matthew Hockley with a swarm of cute robots running a simple algorithm in which proximity to their neighbours triggered different light patterns. We talked about how we fallible humans like to anthropomorphise whenever given the chance, and I postulated that the random movement of his swarm would be seen as "good" or "evil" depending on whether he put green smiley faces or red angry faces on top of each robot. Matthew agreed that we tend to read more into such critters than they deserve: they're not really responsible agents, as Alan Winfield notes in his excellent, accessible book Robotics: A Very Short Introduction (the Three Laws were just a plot device for Asimov, not something to base a real robot on).


They appear to be benign, but if you look closely you can see them plotting world domination.

Students and a teacher from Cotham School were back with their arcade cabinet, and this time also had two "Mini Me" versions (as I like to think of them) present. Sadly I forgot to get a photo, but these proved extremely popular. I think the brief goes along the lines of: "yes, you can play computer games at school provided you program those games." It's a great idea, very well executed.

Talking of schools: I had a great chat with Stewart Edmondson, CEO of the UK Electronics Skills Foundation. They believe absolutely that teaching software is not enough and that kids should be getting hands-on experience of electronics. I wholeheartedly agree! As I started secondary school in the 1980s I caught the last of the software-related computer lessons before "IT" became "ICT", with the "C" somehow (apparently) meaning "Word & Excel". However I never learnt electronics in school and feel I'm enormously behind the learning curve here. Although I've built my own circuits, read lots of tutorials in books and The MagPi magazine, and bought and experimented with stacks of components, it all feels very unstructured, as though I am missing the fundamental underpinnings that school ought to have taught me. There is a huge benefit to learning things when your brain is still wired to absorb knowledge like a sponge. At Digimakers they brought along an electronics project kit called MicroBox to get those brain cells firing and it proved very popular.

Ok, so what has all this to do with the title of this post? One of the workshops focussed on Quantum Computing for kids (yes, you did read that right!). Unfortunately I was unable to get away from my stand for long enough to listen in, but I had a wonderful conversation with a 14-year-old girl who popped over afterwards. It started in just the way you don't expect a conversation with a teenager to start: "I'm off to Google to study quantum computing as a way to break ciphers." We then conversed about such things, with detours to discuss the shape of the universe and the relative sizes of different infinities, the difference between passive and active hacking (which, fortunately, she is very aware of; this difference needs to be taught in schools!), the morning she'd spent learning about ciphers in Python in one of the sessions, and the fact that she's already up to speed on inspecting web elements and the like... Awesome. This was the highlight of the day for me.

The next Digimakers is on October 29th at At-Bristol. If you are planning on attending you should register in advance as this event is very popular.
Comments

Zumo George avoids Cartmanzilla at CukeUp!

BDD_for_8-year_olds
On Thursday 14th and Friday 15th April I went to CukeUp! London 2016. The Behaviour-Driven Development community met for two days to share ideas and skills relating to my favourite delivery methodology. The event was fantastic and on a par with last year. Inspired by day one, and with an open slot to deliver a lightning talk lasting just five minutes, I set about writing a presentation describing Cartmanzilla versus Zumo George... at 2am... after a few beers and a rather tasty gin. The following morning I re-wrote much of my talk to eliminate 95% of the whiz-bang transitions that had somehow crept into several slides (not sure how that happened). For some reason I had also thought a clearly marked slot for a "5 minute" talk was 10 minutes in duration (proof positive that gin slows the passage of time), and quickly edited it again after confirming with Matt Wynne that 5=5 and not 5=10. Still, overall I managed to get 80% of my message across in just five minutes.

Skills Matter, who hosted CukeUp!, have kindly put a video of my talk online. You will need to register (painless and quick) with Skills Matter to view it.

A quick recap: Cartmanzilla the monster has invaded robot city and the plucky little robots have to keep away from him. Only Zumo George is programmable, and his general behaviours (keep away from monsters) are determined by feature files that contain behavioural specifications written in the Gherkin syntax of BDD:

Given [a precondition]
When [an event]
Then [an outcome]

Each line of the Gherkin causes a related block of test code to be executed, and when every line of test code passes your software is green, i.e.: the behaviours are working as expected.
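Under the hood, a BDD tool's job here is conceptually simple: match each Gherkin line against a registry of step patterns and run the associated test function. The sketch below is my own minimal illustration of that dispatch (the step patterns and scenario are invented for this example; real tools such as Cucumber and Lettuce do far more):

```python
import re

# Registry of (pattern, function) pairs; the step() decorator loosely
# mimics how BDD tools let you bind test code to Gherkin lines.
STEPS = []
context = {}  # shared state between steps, akin to Lettuce's "world"

def step(pattern):
    def decorator(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return decorator

@step(r"a robot (\d+)cm from the monster")
def given_distance(cm):
    context["distance"] = int(cm)

@step(r"the monster advances (\d+)cm")
def when_monster_advances(cm):
    context["distance"] -= int(cm)

@step(r"the robot should flee")
def then_flee():
    assert context["distance"] < 15, "robot is still at a safe distance"

def run(gherkin_line):
    """Strip the Gherkin keyword and dispatch to the matching step."""
    body = re.sub(r"^(Given|When|Then|And)\s+", "", gherkin_line)
    for pattern, func in STEPS:
        match = pattern.fullmatch(body)
        if match:
            return func(*match.groups())
    raise LookupError(f"no step matches: {gherkin_line!r}")

run("Given a robot 20cm from the monster")
run("When the monster advances 10cm")
run("Then the robot should flee")  # passes: 10cm < 15cm
```

When every line dispatches to a passing function the scenario is green; a failing assertion in any Then step turns it red.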

I've covered Zumo George and the use of BDD with this robot over a few prior posts, including a specific write-up of Cartmanzilla vs Zumo George at Bristol Digimakers. What is interesting as I read back over previous posts, and I noted this in my talk, is that my first attempts to write scenarios were essentially attempts to describe the functional aspects of George, whereas my later attempts are closer to the behaviours that I originally envisaged: sneaking towards the monster when he's not paying attention and fleeing when the monster gives chase.

Comments

George visits Digimakers to learn about whiskers

I spent yesterday at Bristol Digimakers having a fantastic time meeting lots of young people who had come along to the event to learn more about coding, robotics, Minecraft, robotics and robotics. There was definitely a theme going on. Digimakers has grown to be the place to go to get hands-on experience of hacking and making. Backed by the University of Bristol (kudos to the ever-energetic Caroline who does a great deal of the organising) and supported by a host of students and other individuals running their latest coding and hardware inventions, the event had a great vibe all day long.

As usual at Digimakers I set up a table with various demonstrations using Raspberry Pis, mainly focussed around Zumo George, my Behaviour-Driven Development robot. This time around I also included some Crafty Robots, a Hexbug Ant and Cartmanzilla.

Stand

Cartmanzilla_Zumo_Crafty_Hexbug
Cartmanzilla towers over the city. The little robots wonder how they will escape.

The aim of my table was to present two concepts. The first was that programming robots by defining behavioural outcomes (a right-to-left approach, for example Event Storming) rather than from a list of functional requirements (a left-to-right approach that may not lead to the desired outcome) allows non-technical people to be more involved in the creation of the robots that will share their environment. I've written about BDD with Zumo George before. Distilling the essence of BDD (conversations that discover outcomes of value, enabling us to write tests that drive code) down to something easily digestible by youngsters proved challenging, but in general most seemed to understand. I think this was helped by having a working demonstration: Zumo George was given the behaviour "don't get caught by Cartmanzilla", which in practical terms meant using his inbuilt IR sensor to retreat when Cartmanzilla approached and to advance when Cartmanzilla retreated (all on top of a lovely cityscape given to me by the great Tim Cox).

Secondly, I wanted to explore the idea of how prey avoids predators (and how predators catch prey) by looking at three different robots:
  • Crafty Robot just moves randomly and cannot react to external stimulus (it can, however, sometimes bounce off things it bumps into).
  • Hexbug Ant has bump sensors front and rear and therefore can run away from anything it touches.
  • Zumo George can sort-of see (via his infrared sensor) what is in front and respond accordingly.
After playing with Cartmanzilla and the robots I asked two questions of the youngsters who came to my table:
  • If you were a mouse escaping from a cat which method (random, touch, sight) would you use to keep away from the cat?
  • If you were a cat trying to catch a mouse which method would you use?
For the first question everyone said sight, which is the obvious answer: assuming there is enough light for the mouse to see, this keeps a decent distance between it and the claws. For the second I was genuinely surprised that about a third of the students realised the cat would likely use a combination of sight and touch. Cats do just this: as they approach prey they primarily use sight, but when they make the final strike their whiskers swing forward to make contact with the prey, which helps guide their jaws and claws in. To help reinforce this point I played a snippet from a BBC documentary that covers exactly this:



Watch the whole video, or skip forward to 2m15s where they explain why and show a cat doing this. As the cat gets very close to the mouse it can no longer focus, so it uses its whiskers to guide the prey to its mouth. If you have a pet cat you can likely see this in action: if your cat chases string or small toys, drag a toy in front of the cat to get it to almost-but-not-quite pounce (you may need to do this several times!). When the cat thinks about pouncing but then gives up, you can often see its whiskers twitch (it's quick): that's the reflex starting to move them forwards, stopping as the cat gives in. It is harder to see if the cat does pounce as this happens in the blink of an eye.

The interesting thing here is that my robot, Zumo George, would benefit from exactly this kind of whisker technology. The Sharp GP2Y0A41SK0F infrared sensor is effective from about 4cm to 30cm. Hence, when an object is closer than ~4cm the sensor's vision is "blurred" and ineffective. This can be seen on the data sheet for the sensor in the graph on page four, which I have reproduced below. This graph shows the voltage returned on the analog pin for a given distance. Below about 3-4cm the output voltage becomes wildly inaccurate. This is the point at which George's vision blurs, resulting in him sometimes advancing and sometimes retreating, seemingly at random: at this distance he becomes little better at avoiding Cartmanzilla than the Crafty Robots.
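In code, the practical consequence is that any reading implying a distance outside the reliable band should be discarded rather than acted upon. This is a hypothetical sketch (the constant below is a placeholder, not a fit to the datasheet):

```python
# Convert the Sharp GP2Y0A41SK0F's analog voltage to a distance
# estimate, discarding readings outside the sensor's reliable
# 4cm-30cm band instead of acting on them.

RELIABLE_MIN_CM = 4.0
RELIABLE_MAX_CM = 30.0

def estimate_distance_cm(voltage):
    """Rough distance estimate; returns None for unreliable readings."""
    if voltage <= 0:
        return None
    # The response is roughly inverse: higher voltage means a closer
    # object. 12.0 is an illustrative constant only; calibrate against
    # the graph on page four of the datasheet for real use.
    distance = 12.0 / voltage
    if distance < RELIABLE_MIN_CM or distance > RELIABLE_MAX_CM:
        # Inside ~4cm the output folds back on itself, so the reading
        # cannot be trusted: this is George's "blurred vision" zone.
        return None
    return distance
```

With a guard like this, George could treat None as "too close to trust" and fall back to a safer behaviour (such as halting) rather than dithering between advance and retreat.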

Sharp_GP2Y0A41SK0F

Fortunately this is generally not a problem, as we define the behaviour of George such that he should not get within 4cm of Cartmanzilla in a .feature file that our Behaviour-Driven Development tool of choice (Lettuce in my case) can parse:

Feature: Move around the city avoiding monsters
In order for Zumo George to keep a safe distance from the monsters
As Zumo George
I want to retreat when the monsters get near

Rules:
- Retreat if a monster is less than 15 cm away
- Advance if a monster is greater than 15 cm away

From the above feature we have articulated and agreed a general outcome: don't get trodden on by Cartmanzilla as it will ruin your day. We then continue the conversation to discover scenarios of importance. It turns out that there are three, wrapped up in a single line of background in which we agree how close to Cartmanzilla we think is safe, and we add these to the .feature file:

Background:
Given the minimum distance to the monster is 15cm

Scenario: Advance, there are no monsters
When the distance to the monster is 16cm
Then I should "advance" in relation to the monster

Scenario: Stand still, hoping the monster won't notice me
When the distance to the monster is 15cm
Then I should "halt" in relation to the monster

Scenario: Retreat, there are monsters
When the distance to the monster is 14cm
Then I should "flee" in relation to the monster


As you can see, we have defined George's behaviour to be that he should attempt, whenever possible, to stay at least 15cm from Cartmanzilla (the monster).
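The three scenarios boil down to a tiny decision rule. This sketch is my own illustration, not code from the real Zumo George project:

```python
# Decide George's movement relative to the monster, per the scenarios
# above. Names here are mine, not the project's actual step code.

def action(distance_cm, minimum_cm=15):
    if distance_cm > minimum_cm:
        return "advance"  # there are no monsters nearby
    if distance_cm == minimum_cm:
        return "halt"     # stand still, hoping the monster won't notice
    return "flee"         # retreat, there are monsters

# The three scenario examples from the .feature file:
assert action(16) == "advance"
assert action(15) == "halt"
assert action(14) == "flee"
```

Varying the agreed minimum distance, as we did at the table, is then a one-argument change rather than a code rewrite.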

Behaviour-Driven Development works when people have conversations to discover new and different outcomes. It was great to work with the youngsters at my table to vary the minimum distance for George. We could immediately see the little robot scurry backwards when our outcome was that it was unsafe to be so close to Cartmanzilla, or scuttle forwards to a new minimum distance when we felt the outcome was that it was safe to be closer. Being able to talk about, model and try out the effects of varying outcomes in a safe way, without causing George to immediately run amok and leap to certain doom from the table edge, was great. The kids definitely seemed to enjoy this modelling exercise, and I did too.

Across the rest of the event a large number of other robots could be seen. Here's Steve. He talks back when you talk to him (and sometimes makes embarrassing mistakes):

Steve
This is Steve. Steve was apparently "getting a bit murderous".

Tim Cox ran an excellent workshop and had set up a cityscape full of vehicles, interconnected traffic lights each using PiStop (available from 4tronix) and an enormous (by comparison) Me Arm controllable by a Raspberry Pi and, I'm guessing, python-curses judging by the look of the output on the screen. I was impressed with the Me Arm. I have previously done something similar using the Maplin Robot Arm and the Pi, but I don't like the imprecise geared motors in the Maplin arm. By contrast the Me Arm was much more precise even though it too is not using stepper motors. The screen you can see is from Banggood.com.
MeArm_TimCox2
"Watch out for the Evil Claw" cried the residents of PiCity.

Someone (sorry Someone, I didn't catch your name) had created a Sentry Gun complete with the required "beep......beep.....beep...beep..beep.beep.beep" heard in the film Aliens from the related motion tracker technology.
TrackingGun2
If you hear this noise run away and hide.

A couple of students presented a fruit-music maker connected to a Raspberry Pi. Their approach was different to what I have seen before as they were not relying on one completing a circuit (touch a wire, and with your other hand touch the fruit to make a sound play), but were instead relying on (we think) a capacitive drop when you touched the fruit ("touch the kiwi fruit and absorb its power").... or perhaps it was due to electromagnetic fields. They are currently going through a process of elimination as they learn how exactly this works. However it worked, it worked well.

Fruit_music
Play that funky banana!

Various other workshops and exhibits ran throughout the day including working with a BBC Buggy and separately, Hedgehog Bots controlled by Arduino Nano and invented by Scott and Joe, graduates from University of Bristol. There was also a horizontal scrolling game controlled by a device one wears that picks up on electrical activity in the brain; you moved up by thinking more and down by thinking less... it was important to not actively think about thinking less. Sadly I forgot to get a photo of these great projects.

Saving the best to last there was Josh, who presented an AWESOME Persistence of Vision project: several rows of LEDs spinning at about 1,000 RPM (I think that was the speed). He had animations running, could draw and persist a cube or the time, and all sorts of other patterns. It looked great, was a tidy build and captivated us all like moths to a light bulb.

Persistance_Of_Vision_PoV_Josh
Must...not...look...at...the...lights. Oooooh shiny.

Digimakers has again lived up to expectations with Caroline and the team keeping everything running smoothly throughout the day.

The next event is currently scheduled for June, hope to see you there.
Comments

The MagPi: back in print

TheMagPi_i36_cover
Issue 36 of The MagPi magazine was recently released as a downloadable PDF. What ticks the awesome box though is that from this issue onwards the magazine is again available in print.

I worked on ~25 of the first 30 issues of The MagPi writing articles, proof reading and undertaking layout, and recall the fantastic feeling of seeing the magazine printed (thanks especially to Ian McAlpine). With options to purchase from several online Raspberry Pi sites as well as three Kickstarter bundles (including binder) the obvious missing link was high street distribution. The MagPi has now been under the wing of the Raspberry Pi Foundation for six issues and Issue 36 is the first to be available in the high street.

I'll say that again, with emphasis: in the high street.

It takes an incredible effort to launch a new magazine and arrange for distribution to WH Smith and similar. A HUGE well done to Russell Barnes, magazine editor and the rest of the team.

With the magazine back in print what is it like?

Firstly, the print quality is exceptionally high. The front cover has a joint matt-gloss effect with the title, most of the text and the Minecraft Splat elements in gloss on a light blue background. The cover paper used is also a fairly heavy stock and will survive some bashing (as I discovered when the magazine became an inadvertent fly swat the other day). Internally each page is full-colour and exceptionally clear and easy to read. This feels like a professional magazine in one's hand because, well, it is a professional magazine. Russell and co really know their stuff.

TheMagPi_i36_internal
With an increase in size to 100 pages the spine is thick enough that the magazine can sit on a bookshelf and the identity of each issue be determined from the spine. This does show the one drawback to a magazine of this thickness in that the pages will not lie flat. It's not a big problem, but it does mean that when following code tutorials with the magazine on your desk the pages tend to curve. Firmly (but not forcefully) pressing on the magazine once or twice will open up the pages further without damaging the spine.

TheMagPi_i36_contents
Yes, you did read the above paragraph correctly: 100 pages. This is the largest normal (i.e.: excluding Special Edition 1) issue of the magazine yet. Russell and his team have produced an absolutely fantastic publication with numerous hardware and software tutorials, reviews and features. A quick flip through finds 11 pages of adverts (including three asking people to consider subscribing) which I feel is reasonable for a magazine of this size (and the adverts are all Pi-relevant). Personal favourites in this issue include Extra Lives talking about retro gaming and the book review pages as these cover not only Pi-specific books, but also books of related interest. This issue a column of the book reviews pages is devoted to security and penetration testing which is an incredibly interesting subject.

The tutorials cater for all ability levels, with a straightforward LED exercise in Python on page 23 at one end of the spectrum and, at the other, applying physical forces to Python games to model gravity on pages 58 to 63. This is a very clever bit of code that models the movement of spheres, or celestial bodies (think: planets and asteroids). My favourite quote in the whole issue is found here:

For each planet we have, we want to calculate its effect on every other planet


That's a tough ask! Fortunately the article goes into exquisite detail on both the maths and programming needed to accomplish this.
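That quote describes an O(n²) pairwise calculation: every planet attracts every other. As a rough illustration of the idea (with G set to 1 and a data structure I've invented for this sketch; the article's own code will differ):

```python
import itertools

def gravity_step(planets, g=1.0):
    """Return the net (ax, ay) acceleration on each planet.

    planets: list of dicts with keys 'mass', 'x', 'y' (names are mine).
    """
    accel = [[0.0, 0.0] for _ in planets]
    for i, j in itertools.combinations(range(len(planets)), 2):
        a, b = planets[i], planets[j]
        dx, dy = b["x"] - a["x"], b["y"] - a["y"]
        r2 = dx * dx + dy * dy
        r = r2 ** 0.5
        # Newton's law: F = G * m1 * m2 / r^2, applied equally and
        # oppositely to both bodies (a = F / m).
        f = g * a["mass"] * b["mass"] / r2
        accel[i][0] += f / a["mass"] * dx / r
        accel[i][1] += f / a["mass"] * dy / r
        accel[j][0] -= f / b["mass"] * dx / r
        accel[j][1] -= f / b["mass"] * dy / r
    return accel

planets = [
    {"mass": 10.0, "x": 0.0, "y": 0.0},
    {"mass": 1.0, "x": 5.0, "y": 0.0},
]
# The light planet accelerates ten times harder towards the heavy one
# than vice versa: equal force, but a = F / m.
accels = gravity_step(planets)
```

Note how each pair is visited once and the force applied equally and oppositely, which halves the work compared to naively looping over all ordered pairs.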

One downside of print though is that if errors creep in then they are irreversible (unless a new print run is undertaken). Before printing The MagPi Volume 1-3 bundles we went back through every single page for just this reason, updating the content for the B+ (which had not been released when we wrote the earliest issues) and correcting any errors we had subsequently found. With The MagPi issue 36, as with every magazine, a few gremlins have made it through the editing process and hence have, in print at least, become irreversible. Take the LED article on page 23 for example. The instructions and diagram show a connection to GPIO4 and GND, but the photo shows GPIO3 and +3V3 being used. Likewise, the code listing uses GPIO.BOARD but the pinout diagram for the Pi is numbered for GPIO.BCM. In an introductory article, "Get started with Raspberry Pi", errors like this may confuse the reader.
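For anyone caught out by the same mix-up: GPIO.BOARD numbers the physical header pins while GPIO.BCM uses the Broadcom GPIO channel numbers, so the same integer can address two different pins. A small illustration in Python (the mappings follow the standard 40-pin pinout, but double-check an official pinout diagram before wiring anything):

```python
# Partial map from BCM (Broadcom) GPIO numbers to BOARD (physical)
# pin numbers on the 40-pin Raspberry Pi header.
BCM_TO_BOARD = {
    2: 3,    # BCM GPIO2  = physical pin 3  (SDA)
    3: 5,    # BCM GPIO3  = physical pin 5  (SCL)
    4: 7,    # BCM GPIO4  = physical pin 7
    17: 11,  # BCM GPIO17 = physical pin 11
}

# A listing that calls GPIO.setmode(GPIO.BOARD) but quotes the number 4
# from a BCM pinout diagram would refer to physical pin 4, which is a
# 5V power pin (RPi.GPIO should reject it), not BCM GPIO4 on pin 7.
assert BCM_TO_BOARD[4] == 7
```

Picking one scheme per project and sticking to it avoids the whole class of error.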

Despite the occasional gremlin the overall quality of the content is first rate. A lot of effort has clearly been put into the magazine. Whether you find reading easier in print or in an electronic format is a very personal thing, and with The MagPi available in both you can take advantage of each: for example, reading the print magazine while searching the PDF when you need to find a particular piece of text.

The new look MagPi magazine looks great, feels great and has the superb content we all expect from the publication. Best of all, the print edition is now available for a reduced price to subscribers.

Highly recommended.
Comments

PiConfig arrives

Another Kickstarter I've been eagerly awaiting recently is PiConfig. Described as the easiest way to set up networking on the Pi, this USB stick of wonders looks just the business for when I need to rapidly reconfigure up to 6 Pis on my stand at Bristol Digimakers. Essentially the problem I face is that at home my Pis mostly use DHCP, whereas at Digimakers I set up a LAN requiring them all to be on static IP addresses. It's a real faff having to eject 3 SD and 3 micro SD cards, find the adaptor, and then one by one edit the network configuration on a laptop, while trying to remember which card came from which Pi (note to self: must get out the label printer).
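For context, a static-address setup is only a handful of lines per card, living in whichever network configuration file your Raspbian release uses (Jessie reads /etc/dhcpcd.conf; older releases use /etc/network/interfaces). The addresses below are examples only:

```
# /etc/dhcpcd.conf: example static address for an event LAN
# (illustrative addresses; adjust for your own network)
interface eth0
static ip_address=192.168.0.10/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
```

Multiply a stanza like that by six cards, twice per event, and the appeal of a one-click tool is obvious.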

Much easier, then, just to load PiConfig on my Mac, select the config setting for each Pi in turn, write to the Pi and hey presto all is done. This saves some time going from DHCP -> static (selecting the profile needed each time) and a whole lot when going from static -> DHCP (one config to rule them all). In both directions it wholly eliminates errors as well in the configuration of any card. Win.

So, the question is: does it work? The answer is yes, albeit with a few errors along the way.

The first thing one has to do is run an installer script on each Pi. For some unknown reason this is not included on the USB stick and instead wget must be used. This is a shame as the stick is 4GB in capacity and over 4GB is available. Hang on, that can’t be right:
piconfig_size
Well, apparently it is. Seeing this makes me wonder about the quality of the USB stick, after years of reading "is your SD card genuine?" blog posts. Hence it's best to take a backup of the software just to be sure. Weirdly, this is where I encountered my first problem: right-clicking on the piconfig folder on the stick and choosing Compress "piconfig" results in an error that one file or another could not be added to the archive. Hmm, this is now getting concerning. I confirmed the stick isn't read-only by temporarily creating a new folder on it.

Instead I copied the whole piconfig folder to my desktop and archived it to a zip there without problem.

Running the PiConfig application presented pretty obvious fields to be completed. But here again I found a problem: after entering configuration data for the LAN and clicking on the save profile button I was presented with an error, my profiles all disappeared, and I had to quit and reload the program to see them again.

[UPDATED] Fortunately the developer, Mihaly Krich, responded very quickly to a message I sent him detailing this fault and has released an updated version of the PiConfig software that addresses it. With the update installed this bug is fixed. On Mac OS X 10.10, when you download the update and run the program you will be presented with a message telling you that only applications from known developers and the App Store can be run. This is a security feature of Mac OS X. To add PiConfig to the list of applications that can be run:

  1. Open System Preferences (quick shortcut: CMD+Space to open Spotlight, type pref and press enter)
  2. Click Security and Privacy
  3. Unlock the preference page by clicking on the padlock in the bottom left and entering your password
  4. Ensure that under Allow apps downloaded from you have “Mac App Store and identified developers”
  5. You will see that the Preferences already identifies that piconfig.app was the last application to need such permissions - click Open Anyway to add it to the safe application list.

piconfig_mac_preference_security

The software author has chosen to save my configuration data inside the Mac OS X package. This means that when a patched version is released care must be taken to extract the files from /Volumes/PICONFIG/piconfig/osx/piconfig.app/Contents/Resources before replacing the application, else one's configuration will be overwritten. On Twitter he noted to me that this was due to requiring sudo permissions in Mac OS X, which the application does not have.

This save location prohibits the dual use of any configuration file in Windows and Mac OS X, as Windows computers cannot see inside Mac application packages (Mac apps are stored inside .app directories that masquerade as executable programs). I confirmed this to be a problem by loading the Windows executable on a different laptop and, sure enough, my previous profiles were nowhere to be seen. Given the profiles are saved to a plain text file, a format readable by both operating systems, this data should have been made accessible to both Mac OS X and Windows.

The final test though is: does this thing work? I can report that yes it does, and very well indeed. As advertised one of my Raspberry Pis is now on a static IP from DHCP... and now it is back to DHCP again after using PiConfig for a second time. Once over the configuration hurdles you can swap configuration on any Pi reliably in well under a minute.

Ultimately, despite less than ideal software and a USB stick that raises a Mr Spock eyebrow, PiConfig does do what it says on the tin and will be an essential part of my exhibition toolkit from now on. It saves me time and makes event setups that much easier.

4/5 “almost spot-on”
Comments