Our debut hackathon

We truly went into the unknown when we hacked away at White Rabbit at the Readmill hackathon last month. It was our first hackathon; we knew neither what to expect nor what was expected of us. I had been sitting on this app idea for over a year, and it was the perfect opportunity to get it started.

Whatever my own failings, I do value punctuality, and driving long distances in Australia comes as no surprise. So I often found myself querying Google Maps to figure out how long it would take me to get places. From there, working out what time I should leave was not rocket science, but I also had to remember when that time arrived. So the app was simple: it would tell you exactly when you had to leave to make it to your next appointment on time.

The hack in a nutshell

We started by looking into the Google Maps API, sorting through the wealth of information it returns, and figuring out how to test the app with the Android emulator. After some time, we could read our calendar events, synced from our Gmail calendar, figure out how far away each event was, and thus work out when to leave to get there on time. We were pretty happy, but the hack didn't look like an app. Not yet.
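For the curious, the core calculation was nothing fancy. The sketch below is illustrative rather than the actual hack code: the class name, the five-minute buffer and the hard-coded travel time are mine; in the app the travel time came from the Maps API and the event start time from the synced calendar.

import java.util.concurrent.TimeUnit;

// Illustrative sketch of the departure-time arithmetic (not the actual hack code).
// In the app, travelMillis came from the Google Maps API and eventStartMillis
// from the synced calendar event; the five-minute buffer is an assumption.
public class LeaveTimeCalculator {

    private static final long BUFFER_MILLIS = TimeUnit.MINUTES.toMillis(5);

    // Returns the epoch time (in millis) at which the user should leave.
    public static long leaveAt(long eventStartMillis, long travelMillis) {
        return eventStartMillis - travelMillis - BUFFER_MILLIS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long eventStart = now + TimeUnit.HOURS.toMillis(2);   // appointment in two hours
        long travel = TimeUnit.MINUTES.toMillis(45);          // a 45-minute drive
        long minutesUntilLeaving =
                TimeUnit.MILLISECONDS.toMinutes(leaveAt(eventStart, travel) - now);
        System.out.println("Leave in " + minutesUntilLeaving + " minutes");
    }
}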

The rest of the early morning was spent digging through the Internet for some cool icons, beautifying the app and, most importantly, running the app as a service. The idea was that a cute rabbit would pop up in your status bar whenever it was time to leave for your next appointment.
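To give a feel for that part, here is a rough sketch of posting a notification from a background Android service. The class name and the rabbit drawable are placeholders rather than the app's real code, and on recent Android versions you would also need a notification channel.

import android.app.Notification;
import android.app.NotificationManager;
import android.app.Service;
import android.content.Context;
import android.content.Intent;
import android.os.IBinder;

// Rough sketch of the "time to leave" notification posted from a service.
// The class name and R.drawable.rabbit are placeholders; newer Android
// versions would also require a notification channel.
public class LeaveNotificationService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Notification notification = new Notification.Builder(this)
                .setSmallIcon(R.drawable.rabbit)   // the cute rabbit in the status bar
                .setContentTitle("Time to leave!")
                .setContentText("Leave now to make your next appointment on time")
                .build();

        NotificationManager manager =
                (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        manager.notify(1, notification);

        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;   // not a bound service
    }
}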

The screenshots

So, 20 hours later, the sun rose and, ta-da, we had our hack fit for demo. The app consisted of only one simple screen, with the ability to turn notifications on or off, and would then just run in the background.

Whenever it was time to leave, a notification would pop up in the status bar, and it could be expanded to show the event in detail.

The upshot

A rewarding weekend filled with caffeine, ping pong, pizzas and falafel.

Where are we now?

You might have picked up that the app name changed somewhere along the way; please read on.

It surprised us that Google had not implemented such a feature, given that all the data was there. That was back before Google Now came out last year, and we had not revisited the idea for a while. We now know that Google has indeed implemented this feature.

We also found out that a White Rabbit app had been released on Google Play in December last year. Most disappointingly, the existing White Rabbit app didn't make use of any graphics or references to Alice in Wonderland. We never thought that White Rabbit was such a unique concept, or a unique name for that matter, but we still heart-brokenly renamed our app “I’m late I’m late!”.

Nevertheless, I just wanted to dedicate this blog entry to our app, which will always remain, for us, our first hack.

The repo is up on GitHub here.

Learning from Mistakes

Recently, our Android app, Open Secret Santa, received a user review that was actually useful.

That in and of itself is probably newsworthy. If you’ve ever had the misfortune of attempting to track down bugs from single-sentence problem descriptions containing nothing more than a vague indication that something didn’t work and a brief treatise on how your app sucks, then you’ll understand how rare it is to get value from user reviews on Google Play.

In this case, the review provided a complete description of the failure scenario, which in turn led to the uncovering of a bug, the realisation that my testing was inadequate, and a rethink about how I should have written some of our code to make it more testable in the first place.

As explained in my previous post, the Draw Engine is a library I wrote that is responsible for creating Secret Santa draws using a mapping between participants and the other participants to whom they are restricted from giving. Refer to the Draw Engine source code on GitHub.

The review reported:

The program looks good, but when I tried to use this for our family it said it couldn’t draw.

Two married grandparents, two daughters and sons-in-law and two children for each daughter. I restricted spouses from giving to each other. I also restricted the grand children from giving to their siblings or their own parents and the middle generation parents from giving to their own children.

Maybe a tough challenge, but everyone should have had at least six people they could give to.

The reviewer seemed to have a good point, and given they’d put all this detail into a review, I owed it to them to check it out. I tried the scenario using the app on my phone and it worked. Hmm. I then wrote a quick unit test to verify the scenario programmatically against the Draw Engine, hoping to eliminate the possibility of a faulty Draw Engine and isolate the bug further. That test also passed. Hmm…
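For reference, the scenario can be written down roughly like this. The names are invented and, rather than calling the real Draw Engine entry point (the actual API is in the repo linked above), this sketch only checks the reviewer's claim that everyone has at least six possible recipients.

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.junit.Test;

// Sketch of the reviewer's family as a restriction map
// ("giver -> members they must not give to"). Names are invented; the real
// test drove the Draw Engine itself rather than just counting recipients.
public class FamilyScenarioTest {

    @Test
    public void everyoneHasAtLeastSixPossibleRecipients() {
        List<String> members = Arrays.asList(
                "Grandma", "Grandpa",
                "Daughter1", "SonInLaw1", "Daughter2", "SonInLaw2",
                "Kid1a", "Kid1b", "Kid2a", "Kid2b");

        Map<String, Set<String>> cannotGiveTo = new HashMap<>();
        // Spouses may not give to each other.
        restrictBothWays(cannotGiveTo, "Grandma", "Grandpa");
        restrictBothWays(cannotGiveTo, "Daughter1", "SonInLaw1");
        restrictBothWays(cannotGiveTo, "Daughter2", "SonInLaw2");
        // Grandchildren may not give to their sibling...
        restrictBothWays(cannotGiveTo, "Kid1a", "Kid1b");
        restrictBothWays(cannotGiveTo, "Kid2a", "Kid2b");
        // ...or to their own parents, and parents may not give to their own children.
        for (String kid : Arrays.asList("Kid1a", "Kid1b")) {
            restrictBothWays(cannotGiveTo, kid, "Daughter1");
            restrictBothWays(cannotGiveTo, kid, "SonInLaw1");
        }
        for (String kid : Arrays.asList("Kid2a", "Kid2b")) {
            restrictBothWays(cannotGiveTo, kid, "Daughter2");
            restrictBothWays(cannotGiveTo, kid, "SonInLaw2");
        }

        for (String giver : members) {
            int possible = members.size() - 1
                    - cannotGiveTo.getOrDefault(giver, Collections.emptySet()).size();
            assertTrue(giver + " should have at least six possible recipients",
                    possible >= 6);
        }
    }

    private static void restrictBothWays(Map<String, Set<String>> map, String a, String b) {
        map.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        map.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }
}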

Then… I ran the “same” test again and it failed. My attention was drawn to the code inside the Draw Engine that shuffles the members so that redraws happen in a new order.

Collections.shuffle(randomMembers, new Random());

It was then pretty obvious that the algorithm was failing due to variation in the order in which the nodes were visited, which was determined by the Random object.

The code in the BasicDrawEngine was updated, but I was still surprised that I’d managed to miss the problem initially. After all, this was the part of the app I was most confident in! But it was clear that I wasn’t actually able to exercise all the states and paths of the algorithm simply by varying the input. I hadn’t tested it properly, and as things stood, I couldn’t.

It’s now reasonably obvious that the DrawEngine shouldn’t be responsible for re-randomising the members. That should be performed externally, by the component that actually knows it’s a redraw (i.e. my Android app code). By providing the randomisation to the DrawEngine, either explicitly (as a new Random() instance) or by randomising the order of the members list itself, the code can be tested more easily: the test can now control the execution paths that were previously at the mercy of the Random object hidden deep inside the engine.
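As a minimal sketch of the idea (the names here are mine, not the actual refactored API): the caller that knows it's a redraw owns the shuffle, and a test can pass a seeded Random so the order, and therefore the execution path, becomes reproducible.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Minimal sketch of moving the shuffle out of the engine. The caller that
// knows it's a redraw shuffles the members (or supplies the Random); a test
// passes a seeded Random so the order is fixed. Names are illustrative only.
public class RedrawOrderingSketch {

    // The caller owns the ordering; the engine just consumes the list it is given.
    public static List<String> orderForRedraw(List<String> members, Random random) {
        List<String> ordered = new ArrayList<>(members);
        Collections.shuffle(ordered, random);
        return ordered;
    }

    public static void main(String[] args) {
        List<String> members = Arrays.asList("Grandma", "Grandpa", "Daughter1", "SonInLaw1");

        // Production redraw: fresh randomness, a different order each time.
        List<String> redraw = orderForRedraw(members, new Random());

        // Unit test: a seeded Random makes the order (and any bug) reproducible.
        List<String> deterministic = orderForRedraw(members, new Random(42L));

        System.out.println(redraw);
        System.out.println(deterministic);
    }
}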

Interestingly enough, I recently stumbled upon an old-ish but relevant article that explains the principles I’ve just described, especially in the context of testability. The article (and my experiences!) certainly gave me a better understanding of how my method signature choices failed me and how designing for test pays off.

Most importantly, I can say that a Google Play review actually managed to help a developer find a problem – surely that’s a first!


PS: You can track the testing refactor here.