Bridging the digital and physical
Created a release using GitHub’s new release tool. Not that this is polished enough to deserve a release, but it may make the project more accessible and bring in more feedback.
I attended an Emergent Design (Design Patterns + TDD) class by Net Objectives last week and decided to apply some of what I learned to the project. Foremost was refactoring toward the ‘open-closed’ principle (a concept I hadn’t heard of before, and feel I should have). Below are two dependency graphs: one from before the refactoring, and one from the most recent work (which still needs some polish).
Obviously the number of classes has blossomed, but so has the ability to integrate new actions without writing new methods into the Tag class. By breaking functionality like the actions out of the Tag, I’ve created a much more cohesive design. I’ve also begun using NSNotificationCenter for the Mirror-to-Tag communication, removing the need for the Mirror to know the structure or workings of the real Tag class. This also opens it up to other event types; I could have an action activated by a clock/timer just as easily.
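The project itself is Objective-C, but the open-closed idea is easy to sketch in Python (class and method names here are my invention, not the project’s): the Tag delegates to an Action object, so adding a new action means adding a new class rather than editing Tag.

```python
class Action:
    """Base class for tag actions. Tag stays closed for modification,
    open for extension: new behavior arrives as new subclasses."""
    def run(self, tag):
        raise NotImplementedError

class SpeakAction(Action):
    def run(self, tag):
        return f"say: tag {tag.name} seen"

class HttpAction(Action):
    def run(self, tag):
        return f"open: {tag.config['url']}"

class Tag:
    def __init__(self, uid, name, action, config=None):
        self.uid, self.name, self.action = uid, name, action
        self.config = config or {}

    def on_in(self):
        # Delegate to the configured Action; adding a new action type
        # never requires touching this class.
        return self.action.run(self)
```

Swapping a tag’s behavior is then just swapping the Action instance it holds.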
I’ve made progress on redesigning this project to be more user-friendly. I used the GUI component, Janus (formerly ‘Mirror Config’), as a base, added a menu bar icon (near the clock), and removed its dock icon. As a pseudo-daemon, it can monitor the reader’s input and react, let the user change the action for each tag, and pop up a ‘Name this tag’ window when an unknown tag is seen.
I’ve been working on a project over the last few months that I haven’t yet written about, but should. I’ve always had a thing for ubiquitous computing (integrating computers into daily life in a less conspicuous way), and that extends to RFID and NFC, and bridging the divide between the digital world and the physical world. There was a period in which a few different manufacturers were building NFC readers with the idea that people would put NFC stickers onto objects and then use their web service to look up the appropriate action to take. Sadly, of the companies I know of that tried this, none are still working in that space. I have purchased a few of the various devices available, and the latest one has seen the most progress.
Before continuing about the device and the project, let me digress to provide an idea of why this is cool and interesting. Imagine that you went on a trip to Mexico, and while there, you visited some cool ruins, as well as a few outdoor markets. You bring back souvenirs, but over the years you’ve forgotten the specifics of where you got each one. You also took a bunch of pictures, but they’ve become effectively lost among all the other photos in your library. As an alternative, imagine that you placed small (quarter-sized) stickers on the bottoms of some of the knick-knacks you brought back, and spent a little time associating the digital data with the physical objects. Years later, while discussing the trip, you pick one up, place it on a pad near your computer, and Google Earth pulls up a map of the city you bought it in, with a pin at the very market you were at. It also opens Picasa or iPhoto to the album of photos from that trip. That is the power that these NFC tags and readers can provide: a way to tie together the things we have with their associated data. This is a non-business example, but the same basic idea can be extended to associating physical project assets with their digital data counterparts (spreadsheets, specs, etc).
The device I’ve most recently received is a Violet Mir:ror. Sadly, I was only able to find them in bulk (no box or retail packaging) from a French website, Planète-Domotique, and the shipping to the US is a little high. The upside is that the Mirror has one of the easiest-to-use interfaces of the NFC readers I’ve tried. It shows up as a USB Human Interface Device, much like keyboards and mice do, and the basic events for tag read/tag leave have been reverse engineered. It also has lights and sounds, but the only control known is how to disable them. The protocol for light and sound choreography, as well as reading/writing the contents of tags (assuming it can), hasn’t been reverse engineered yet. For my purposes, using the tag’s unique id to identify it and perform an action is sufficient.
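Since only the tag read/leave events and the tag’s unique id matter here, the layer above the HID parsing is small. A sketch in Python (the byte-level report format is omitted, and `handle_event` is a hypothetical callback a HID library would feed, not the Mirror’s documented interface):

```python
def handle_event(event, uid, handlers, counts, log):
    """Dispatch a Mirror tag event ("IN" or "OUT") for tag `uid` to the
    handler registered for that uid, falling back to a default handler.
    `counts` tracks how many times each tag has been placed on the reader."""
    if event == "IN":
        counts[uid] = counts.get(uid, 0) + 1
    handler = handlers.get(uid, handlers["default"])
    handler(event, uid, log)
```

Everything downstream (scripts, database, GUI) only ever sees these (event, uid) pairs.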
Daemon and actions
I started with an Objective-C daemon that would receive the tag ‘IN’ or ‘OUT’ event and then react accordingly. The first iteration behaved like the tagEventor project: it looked for a script named for the tag id and executed it, or ran a generic script if none existed. I expanded on this by writing a Python program that used an SQLite database to store tags and state information. Tags were added automatically and could be given a friendlier name and have an action set; other tables stored how long each tag stayed on the reader and the number of times it had been seen that day (the number of “IN” events for that tag id). Tags were given a default action of “speak”, a script that used OS X’s built-in text-to-speech software to speak a simple phrase. I had previously spun the scripts I created for tagEventor into their own repository, nfc-scripts, since I had also tried nfc-eventd and written an adapter script, runTag, so both daemons could run scripts the same way. I expanded nfc-scripts with the following actions:
- http: Open url defined in config
- music: start iTunes playing when tag goes in, stop when tag goes out
- addressbook: Open Address Book app when tag goes in, close when tag goes out
- bugout: Close out a list of applications
- location: Open Google Earth to a lat/long from a config file
- randommovie.sh: Play a random movie from a hardcoded directory
- speak: Speak a simple phrase that includes the tag name, count, time seen
- LockComputer: Locks OS X desktop
There is a laundry list of other actions that I’d like to write scripts for, but I’ve held off until I’ve gotten a more stable suite of tools defined. Since setting a tag to perform an action required modifying the database, I also started a graphical configuration program called Janus (named for the Roman god). It allows tags to be named and actions to be selected, and has an incomplete feature for editing the configuration files of actions that need them. To explain: you may want multiple tags that open Google Earth, each to a different location. The code to open Google Earth is the same, but each tag needs a unique configuration file that details the location it should open.
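That split, one shared action with one config file per tag, might look like this (the file layout and config format here are illustrative, not the project’s actual ones):

```python
import configparser

def parse_location_config(text):
    """Parse a per-tag config for the 'location' action. The code that
    opens Google Earth is identical for every tag; only these
    coordinates differ."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    loc = cfg["location"]
    return float(loc["lat"]), float(loc["lon"])

def config_path_for(uid, config_dir="~/.nfc/configs"):
    # Each tag id maps to its own config file for the action it runs.
    return f"{config_dir}/{uid}.location.conf"
```

Janus’s config-editing feature would then just be a form writing one of these files per tag.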
Although the system works, it has a lot of moving parts and is quite complex. A next step in development will be to reevaluate and perhaps reengineer it. The daemon could be moved from a background process into OS X’s menu bar (near the clock), which means it could also be combined with Mirror Config. The Python program could be rewritten in Objective-C and merged into the daemon. The configuration for each tag could be moved into the database, making it more structured. Many of the actions are performed by very simple scripts, some of which aren’t very generic; they could be added to the database, or perhaps made into modules/plugins. A large part of this is balancing between having tags that can perform any task, and having tags that perform the tasks the user really wants/needs.
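The modules/plugins idea could replace the loose shell scripts with registered action functions, looked up by the action name stored in the tag’s database row. A minimal sketch (registry shape and action names are hypothetical):

```python
# Registry mapping action names (as stored per-tag in the database)
# to the functions that implement them.
REGISTRY = {}

def action(name):
    """Decorator: register a function as a plugin action."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

@action("speak")
def speak(tag_name, event):
    return f"say 'Tag {tag_name} {event}'"

@action("lock")
def lock(tag_name, event):
    # Only react to the tag arriving; ignore it leaving.
    return "lock-screen" if event == "IN" else None

def dispatch(action_name, tag_name, event):
    # The daemon resolves the tag's configured action and runs it.
    return REGISTRY[action_name](tag_name, event)
```

New actions then ship as modules that register themselves at import time, instead of scripts scattered on disk.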