Reid Carlberg

Connected Devices, salesforce.com & Other Adventures

Control Your Quadcopter Drone Fleet with Salesforce1


Let’s say — hypothetically — your VP of Drone Fleet Operations just asked you to help her handle drone management, route planning, payload optimization and more. What do you do? Well, there are a few approaches to tackling the problem. Approach #1 is all about controlling drones using the Salesforce1 Mobile app. That’s what I’m going to talk about today. Note that all of this is done with a free Developer Edition and a little code.

Although I won’t cover it here, there’s also a mildly entertaining yet entirely impractical YouTube artifact documenting my adventures at ThingMonk, where, together with the excellent Darach Ennis, we were able to launch a quadcopter using a coffeepot.

Equipment & Architecture

Let’s start by looking at the equipment you’ll need. The first thing is a quadcopter or two. I used the Parrot AR Drone 2.0, available at pretty much every retailer worth its salt. The Parrot is great for a lot of reasons, but first and foremost is that it has a great API. Where you have an API, great things are possible, right? Now, the Parrot is also a toy, so you production-minded folks will probably want to upgrade to something more robust.

[Image: Parrot AR Drone 2.0]

The way the AR Drone works out of the box is that it creates a WiFi hotspot. You then connect your controlling device to that AR Drone hotspot and operate it. Parrot sets you up with an app that runs on either an iOS or Android device. I’ve controlled them from both platforms and it works great. The default AR Drone configuration requires one controller per drone, and it requires that controller to be on the WiFi network provided by the drone. If you have two drones, they are isolated from each other by their network connections and there’s no interaction.  

[Diagram: standard AR Drone configuration, one controller per drone over the drone's own WiFi]

In order for this to work with Salesforce, and in order to control multiple drones at the same time, we have to somehow unify these devices, which with the out-of-the-box configuration means the controller needs to bridge multiple networks. My go-to local interface box is the Raspberry Pi, and, fortunately, the Raspberry Pi can support multiple network interfaces, which means it can handle multiple network addresses. There are a few ways you could configure this, but I chose to use a single Raspberry Pi as a bridge between Salesforce and two other Raspberry Pis which connect to the AR Drones. It looks a little like this:

[Diagram: AR Drones connected to Salesforce through a Raspberry Pi gateway]
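As a rough sketch of that multi-interface setup (assuming Debian-style networking and the 10.0.0.x addresses used later in this post; the SSID is illustrative), each drone-facing Pi pairs a wired LAN address with a WiFi connection to its drone's hotspot:

```
# Illustrative /etc/network/interfaces for the drone 1 Pi
# eth0: wired LAN shared with the gateway Pi
auto eth0
iface eth0 inet static
    address 10.0.0.2
    netmask 255.255.255.0

# wlan0: joins the AR Drone's own open WiFi hotspot
# (SSID below is a guess; use whatever your drone broadcasts)
auto wlan0
iface wlan0 inet dhcp
    wireless-essid ardrone2_wifi
```

The second drone's Pi would look the same with 10.0.0.3 on eth0.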

Now all you need is an app to handle the interface to the AR Drone. There are a lot of great ways to do this; for this example I used CylonJS and their ARDrone driver. (You might remember CylonJS from the Dreamforce Connected Device Lab. They do tons of great stuff in the robotics world and have cool libraries for JavaScript, Ruby and Go.)

Control via the Streaming API Pattern

My first approach for this project is to use the familiar Streaming API Pattern. (See Controlling Philips Hue from Salesforce1 for another example, or get started with a really simple streaming API example.) The Drone Gateway connects to Salesforce, listens to a Streaming API Push Topic and then forwards those instructions to a device as soon as they’re received.

On the Salesforce side of the house, we have to create a simple way to generate listenable data. This is easier than it sounds. The first thing we want is an sObject to store the data in. I’m re-using an object pattern from other message-driven work; I call it “Drone Message.” The two key pieces of data it stores are the Address and the Message. You can see in the screen capture that this one is sending the “land” message to “D2”.

[Screenshot: a Drone Message record with Message “land” and Address “D2”]

You can of course use the default UI to create the records, but that requires you to know addresses and message codes. Since code handles those kinds of details better than my brain does, I created a simple Apex class to create these records.

public class DroneController{
    public void droneOneTakeoff() {
        insertMessage('D1','takeoff');
    }

    public void droneOneLand() {
        insertMessage('D1','land');
    }

    public void droneTwoTakeoff() {
        insertMessage('D2', 'takeoff');
    }

    public void droneTwoLand() {
        insertMessage('D2', 'land');
    }

    public void insertMessage(String address, String message) {
        Drone_Message__c m = new Drone_Message__c();
        m.Address__c = address;
        m.Message__c = message;
        insert m;
    }

}

And now all I need is a little Visualforce code to extend this out to the UI layer. Note that this Visualforce page is tied to the Apex code above using the controller attribute.

<apex:page controller="DroneController" standardStylesheets="false" showHeader="false">
<style>
h1 {
font-family: sans-serif;
}

.large {
font-family: sans-serif;
font-size: 18pt;
}
</style>
<h1>Drone Controller</h1>
<apex:form >
<h1>Drone 1</h1>
<p><apex:commandButton action="{!droneOneTakeoff}" value="Takeoff" styleClass="large" />&nbsp;&nbsp;
<apex:commandButton action="{!droneOneLand}" value="Land" styleClass="large"/></p>

<h1>Drone 2</h1>
<p><apex:commandButton action="{!droneTwoTakeoff}" value="Takeoff" styleClass="large"/>&nbsp;&nbsp;
<apex:commandButton action="{!droneTwoLand}" value="Land" styleClass="large" /></p>

</apex:form>
</apex:page>

Now you need to make this Visualforce page available for mobile apps, create a tab for it and finally customize your mobile navigation options.  These are all just a few clicks, so check out the links if you’ve never done it before — pretty easy. Out in the world of humans, this renders as a very simple page, the same one that you saw in the video clip above.

[Screenshot: the Drone Controller page rendered in Salesforce1]

Now that we have a quick and easy way to create listenable messages, let’s take a quick look at the Drone Gateway doing the listening. This is a pattern I’ve re-used a few times, so you might be familiar with it. The gateway authenticates, begins listening to a Streaming API Push Topic, and then handles whatever it receives. I chose to write this in Node.js and the code is pretty simple. The connection to Salesforce is detailed in the Philips Hue article, so I’ll just show you how it handles the message. Note the “address” and “message” arguments.

function handleMessage(address, message) {
	console.log("address: " + address);
	console.log("message: " + message);
	if (address == 'D1') {
		if (message == 'takeoff') {
			console.log("in d1 takeoff");
			handleRequest("http://10.0.0.2:1337/start");
		} else if (message == 'land') {
			console.log("in d1 land");
			handleRequest("http://10.0.0.2:1337/stop");
		}
	} else if (address == "D2") {
		if (message == 'takeoff') {
			console.log("in d2 takeoff");
			handleRequest("http://10.0.0.3:1337/start");
		} else if (message == 'land') {
			console.log("in d2 land");
			handleRequest("http://10.0.0.3:1337/stop");
		}
	}
}

Now, you will have no doubt noticed that the above code is doing nothing more than making a call to a webserver. When I was testing, I decided that an HTTP-based interface would also be fun, so I created a small server that simply responds to two URLs: start and stop. You can see that these map to the CylonJS commands for “takeoff” and “land”.

http.createServer(function(req,res) {
	if (req.url == "/start") {
		console.log("ready to start");
		copter1.connections['ardrone'].takeoff();
	} else if (req.url == "/stop") {
		console.log("ready to stop");
		copter1.connections['ardrone'].land();
	}
	res.writeHead(200, {'Content-Type': 'text/plain'});
	res.end('Executed');
}).listen(1337, "10.0.0.2");

And there you have it. The start to finish message flow now looks like this:

  1. User presses takeoff on their mobile device.
  2. Salesforce1 inserts a Drone Message object for takeoff.
  3. Streaming API picks up the new records, forwards to listeners.
  4. The Node.js based Drone Gateway catches the new record, and sends it to the right address.
  5. The Node.js based Drone Server sends the specific command to the AR Drone.
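Steps 3 and 4 boil down to pulling two fields out of the Streaming API event payload and mapping them to a URL. A minimal sketch of that dispatch (the payload shape follows the standard PushTopic event envelope; the address-to-host mapping mirrors the IPs in handleMessage above):

```javascript
// Map a drone address to the drone server that owns it.
var droneHosts = { "D1": "10.0.0.2", "D2": "10.0.0.3" };
var actions = { "takeoff": "start", "land": "stop" };

// Given a PushTopic event payload, return the URL to call,
// or null if the message isn't one we understand.
function routeEvent(event) {
	var sobject = event.sobject || {};
	var host = droneHosts[sobject.Address__c];
	var path = actions[sobject.Message__c];
	if (!host || !path) {
		return null;
	}
	return "http://" + host + ":1337/" + path;
}
```

For example, routeEvent({sobject: {Address__c: "D1", Message__c: "takeoff"}}) returns "http://10.0.0.2:1337/start". Keeping this mapping in one table makes adding a third drone a one-line change.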


My command center for the video shoot looks a bit more complicated, but it follows the diagrams above. Note the three Raspberry Pis and two network hubs on the lower left.

[Photo: the command center, with three Raspberry Pis and two network hubs]

Wrap Up

As you can see from the video, it’s pretty easy to get the drones to follow some simple instructions. The primary challenge with this method is the inherent lag between when an instruction is issued and when it gets to the drone.  This lag depends on a huge number of factors — Internet connection, gateway device performance, Streaming API performance, etc — but the end result is the same.  A drone moving at 5-6 meters per second will be in a completely different place by the time it responds to a delayed command.
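To put numbers on that lag: displacement is just speed times delay, so even modest latency moves the drone well outside its expected position. A quick back-of-the-envelope helper (the figures are illustrative):

```javascript
// Distance (meters) a drone drifts while a command is in flight.
function driftMeters(speedMps, lagSeconds) {
	return speedMps * lagSeconds;
}

// At 5 m/s, a 2-second round trip through Salesforce and the
// gateway means the drone is 10 meters from where you aimed.
console.log(driftMeters(5, 2));
```
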

It’s an interesting experiment that raises a lot of questions for me. First and foremost, what is the best way to spread device intelligence among the components of a system? Which is to say, what kind of role should Salesforce play in this kind of complicated interaction? My overall feeling is that this, while interesting, is lower level than truly practical today.

Any Data Interesting Enough to Gather is Valuable Enough to Sell


This sponsored article over on Gizmodo is a little FUDdy, but it asks what I think is going to be an increasingly difficult question to answer: who has dominion over the data I emit?

A regular human with a typical IoT device will passively generate massive amounts of real-time data. This is exactly what we regular humans want. The more predictably this data is generated, and the less conscious effort it takes, the more useful it will be for later analysis.

However, already today, data goes any number of places.  It can go the obvious places, and then it can live on in a much broader context via distribution to 3rd parties.  After all, any data interesting enough to gather is valuable enough to sell.

[Diagram: where a regular human’s data emissions flow]

This situation presents three risk areas. The first is the primary service provider (phone, device or system), the second is any third party you authorize via terms and conditions, and the last is add-on capability providers, like apps, which might also leak to third parties. Are these really “risks”? Well, that depends on who you are and what you value.

Now this is just for regular humans. Imagine the complex web of risks a modern business faces adopting IoT solutions at scale. The good news for businesses is that many providers address this confidentiality concern contractually as a core part of their business. If you are a business owner, you should clearly understand where the data you generate goes and what that means for your business.

Back on the regular human side, it’s another story. Facebook and Twitter have demonstrated clearly that regular humans are more interested in short-term functional capabilities than long-term data re-use and analysis. It’s therefore tempting to call for government regulation to address the issue. That might work in the EU, but it has a snowball’s chance of passing in the US. In the US, I’m not sure what the right thing to do is. It might be that our only option is third-party services that monitor our data and alert us when something goes awry, but at that point it feels like the chicken has already flown the coop. What would be super useful is some sort of labeling that reminds us what’s happening, a human-readable privacy policy, but even that feels a little late.

Maybe there’s room for something similar to “organic” labeling or MPAA ratings: a third party that regularly inspects a service provider’s business and contracts, and then awards a regulated label based on what it finds.

Interesting times for sure!

Three Roomba Performance Tips and Two Trouble Spots


As a cohabitant of two good-sized dogs and a couple of kids, I am both thrilled and disgusted each and every time I empty my Roomba. The initial “Wow! Look at everything it picked up!” is inevitably followed by “How could there be so much dirt?!”

[Photo: Roomba 595 Pet Series]

I have the Roomba 595 model they sell at Costco for $299, which my kid named “Bob.” Totally worth it. I’ve had to make a couple of small modifications to ensure everything works as I would like, so I thought I would share them.

First, make sure your cleaning area is Roomba friendly. Tuck cords and power strips away, ensure the things it might run into are stable enough to stay standing when hit and be sure there are Roomba-width access paths to the areas you want cleaned.  Bob has pulled down and broken my wife’s alarm clock and knocked down wooden gates that were leaning peacefully against a wall.  Not the end of the world.

Second, if you have black flooring, know ahead of time that the Roomba’s excellent and self-preserving edge detection will mistake it for an edge and NOT clean it. If you tape small white pieces of paper over the edge detectors, you can instead mark real danger zones with virtual walls. This way you don’t send your Roomba tumbling down the stairs and it still cleans your black carpet. Bob is much happier cleaning our black-edged dining room rug with these in place.

Third, schedule it to run every day. Yes, you could run it once a week or every couple of days, but a daily schedule is the best way to ensure it’s doing its job. I have noticed that Bob will occasionally lose its schedule. I don’t yet know why, but I assume it somehow used up too much of its battery.

Trouble Spots

Here are a couple of areas where Bob gets stuck. I haven’t figured out a way to fix these. Also, the black-flooring fix I mention above doesn’t play well with these trouble spots. The Roomba goes into a cycle of saying “please move the Roomba to a new location” that can only be fixed by removing the edge detector covers, which you’ll then need to reapply in order to get it to start cleaning your black flooring again.

[Photos: spots where the Roomba gets stuck]

Those are my three tips. Hope they help you.

Questions, comments welcome. @ReidCarlberg

$20 Hardware Developer Kits Speed IoT Development


Texas Instruments just announced a $20 dev kit.

Designed to provide a quick start for engineers looking to begin exploring IoT design concepts, the Connected LaunchPad omits many of the options included in a full-featured development kit such as the $200 TI TM4C129x Connected Development Kit (Table). 

It’s very interesting to see all the major electronics companies coming out with their own accelerator kits. Their goal is to (a) make experimentation easier and (b) speed the transformation from experiment to production-ready. ARM’s mbed is another super cool example of this, and Intel keeps teasing its IoT developer kit.

For me, it’s also interesting to think about how these stand up to boards like the Arduino.  Here’s the thing. The Arduino is ridiculously great for a lot of prototyping and small runs, but would you use it for thousands or millions of devices?  Probably not.

Bootcamp Bandwagon Comes to the World of IoT


Interesting to read about hardware accelerators, this one in Berlin from Deutsche Telekom. TechCrunch notes:

“Specifically, the telco is on the hunt for early-stage companies operating in the following three categories: smart home, consumer electronics and hardware; wearables and mobile; and B2B commercial applications.”

TC goes on to list a few more accelerators and startups.  These are great, but what can you really get done in three days?  Feels to me that it’s more like an investor meet and greet than anything else, not that there’s anything wrong with that.  I hope to hear more about the B2B Commercial Applications segment.  Not as sexy as consumer, but lots and lots of upside.

Also noteworthy: Deutsche Telekom’s hub:raum has locations in Berlin, Krakow and Tel Aviv, all with local programs for tech startups and all of which claim to be a version of Silicon Valley.  DT definitely has the money and technical chops to make great stuff happen, but I’m always skeptical when Location B claims to be the next SUPER AWESOME LOCATION EVERYONE ALREADY LOVES.

Create a Case in salesforce.com with a Staples Easy Button


I’ve mentioned a couple of times that it would be interesting if we could create a case in salesforce.com by pressing something like a Staples “Easy” button. So I did it, using a Raspberry Pi and an Easy Button. Turns out, it’s not that hard. I did it with a standard Developer Edition and a couple of Node.js libraries.

Bill of Materials

[Photo: Staples Easy Button, Raspberry Pi and related parts]

Approach

I started by following my standard Raspberry Pi configuration steps.  This created a simple environment where I could work with Node.js.

Next, I wanted to find the right library to help me work with the Raspberry Pi’s GPIO pins. There are a few out there, and they all face one challenge: you can only use the GPIO if you are a superuser. The most popular library appears to be “pi-gpio”. It requires “gpio-admin” to work around the superuser requirement, but it didn’t seem to fail in a friendly way when I didn’t have gpio-admin, so I kept looking (note, this may not be pi-gpio’s fault). I came across “GpiO”. GpiO inherits the same superuser requirement, but it doesn’t require gpio-admin and offers some additional approaches, including gpio-admin. I did a quick test, which succeeded, so I moved on.

Next, I wanted to review how to connect a button to a Raspberry Pi. Now, buttons are pretty easy at their core. You press one, and it completes a circuit. But there’s always a little wiring required. Adafruit has a great learning section which describes the basic hardware configuration in detail. I also needed to double check how to configure an LED. Unsurprisingly, they have that as well. My simple adaptation is below. Note that I used a spare half-sized breadboard rather than the full-sized one that comes with the kit. The button connects to pin 23, the LED to pin 24.

[Photo: breadboard wiring, button on pin 23 and LED on pin 24]

The sample code Adafruit shows you is in Python. Nothing wrong with Python. However, I was thinking Node.js, which meant I couldn’t use Adafruit’s example. No problem, I figured; there are great examples in the GitHub repos.

I then started to wonder about the best way to integrate with salesforce.com. There are a lot of ways, but I wanted to do the simplest thing that could possibly work and so landed on “Web to Case.”  Not a universally perfect approach, but it’s easy to get started (in fact, I simply used the default configuration), and it led me to the useful request library which you’ll see I’ve used below.

My final code is pretty simple.  I import a couple of libraries, declare a couple of gpio variables for specific pins, and have a request with form data that’s fired off.

var gpio = require("gpio");
var request = require("request");

function sendWebToCase() {
	var fields = {
			'orgid': '00DU0000000YYhQ',
			'name': 'Reidberry Carlberg',
			'email': 'reid.carlberg@gmail.com',
			'phone': '773-870-5554',
			'subject': 'Auto case submission',
			'description': 'Here is the detail -- we could also send a log'
			};

	var r = request.post('https://www.salesforce.com/servlet/servlet.WebToCase?encoding=UTF-8', 
			{ form: fields },
			function (error, response, body) {
				console.log(body);
			}
		);
}

function flashLed(led, state) {
	led.set(state);
	if (state == 1) {
		// setTimeout, not setInterval: turn the LED off once, 5 seconds later
		setTimeout(function() { flashLed(led, 0); }, 5000);
	}
}

var gpio23 = gpio.export(23, {
   direction: "in",
   ready: function() {
   }
});
var gpio24 = gpio.export(24, {
   direction: "out",
   ready: function() {
   }
});

gpio23.on("change", function(val) {
   // value will report either 1 or 0 (number) when the value changes
   console.log("23" + val);
	if (val == 0) {
		sendWebToCase();
		flashLed(gpio24,1);
	}
});

That’s it!  All that was left was to modify the Easy button.

There are quite a few guides online on how to hack the Easy button–turns out this has been popular the entire time it’s been around. Most of them require some kind of soldering and circuit modification (example), which to my mind was unnecessary. The button itself has several parts: a speaker, battery case, some weights, a piece of metal to make that clicking sound, a small circuit board and the actual button.

[Photo: the disassembled Easy button]

The only thing I really care about is the button. As I said earlier, all a button does is complete a circuit, right? If that’s the case, I wondered, why couldn’t I just add a new circuit? I started by covering the existing circuit with plain old tape, put on two new wires that would form the new circuit and taped them in place. Voila! The button is now mine, no soldering required.

[Photo: new wires taped onto the button contacts]

Now, about that gift box.  Turns out it is just about the perfect size to contain all these guts.  Just about.  I had to turn the Raspberry Pi case upside down,  stack the breadboard on top of it, and feed the power supply through a special hole I added, but it worked.

Conclusions, Questions & Opportunities

Overall, I’m glad I did this, but I’m not convinced I’ve landed on the perfect solution just yet. I find myself wondering if an Electric Imp wouldn’t be a better choice. I also have an Arduino Yun that, with its smaller form factor, is an appealing option. Finally, I wonder how useful the Web to Case approach really is. It certainly creates Cases, which is the point, but it’s very one-way. It would be super interesting to have some sort of bi-directional communication so the button could display the status of the case, or so that the initial case creation could include a richer set of data. This also raises the question of whether cases should move to a completely data-driven predictive model requiring zero human intervention, but that, my friends, is a subject for another day.

Questions, comments — I’d love to hear em.  @ReidCarlberg

Salesforce Apex RSS Reader Featuring XMLStreamReader


For reasons I won’t go into (but which are probably obvious when you look at the code) I needed a simple way of importing RSS feeds into Salesforce. Although I have seen others, I decided to create a new one using XMLStreamReader that includes test coverage and which is installable via an unmanaged package.

Why XMLStreamReader instead of the oft-used XMLNode? Well, in the case of RSS, XMLStreamReader is great because it’s a read-only interface, and reading is all I want to do. This keeps everything super fast and efficient, and pretty easy to test.

The heavy lifting is done in the BlogLog_RssReader class. As a developer, you simply send the raw text representation of the RSS XML into the read method, and it returns a List of Blog_Entry__c objects.

Here’s the meat:

    public List<Blog_Entry__c> read(String document) {
        List<Blog_Entry__c> ret = new List<Blog_Entry__c>();
        boolean isSafeToGetNextXmlElement = true;

        XmlStreamReader reader = new XmlStreamReader(document);

        while(isSafeToGetNextXmlElement) {
            if (reader.getEventType() == XmlTag.START_ELEMENT) {
                System.debug('^^^^' + reader.getLocalName());
                if ('item' == reader.getLocalName()) {
                    Blog_Entry__c item = parseItem(reader);
                    ret.add(item);
                }
            }
            // Always use hasNext() before calling next() to confirm 
            // that we have not reached the end of the stream
            if (reader.hasNext()) {
                reader.next();
            } else {
                isSafeToGetNextXmlElement = false;
                break;
            }
        }        

        return ret;
    }

    Blog_Entry__c parseItem(XmlStreamReader reader) {

        Blog_Entry__c ret = new Blog_Entry__c();
        boolean isSafeToGetNextXmlElement = true;

        while(isSafeToGetNextXmlElement) {
            if (reader.getEventType() == XmlTag.END_ELEMENT &&
                'item' == reader.getLocalName()) {
                isSafeToGetNextXmlElement = false;
                break;
            }
            if (reader.getEventType() == XmlTag.START_ELEMENT) {
                System.debug('****' + reader.getLocalName() + '~~~~' + reader.getNamespace());
                if ('title' == reader.getLocalName() && reader.getNamespace() == null) {
                        String title = parseString(reader);
                        ret.Title__c = title;
                }
                if ('link' == reader.getLocalName() && reader.getNamespace() == null) {
                        String link = parseString(reader);
                        ret.Link__c = link;
                }
                if ('origLink' == reader.getLocalName()) {
                        String link = parseString(reader);
                        ret.Link__c = link;
                }
                if ('creator' == reader.getLocalName() ) {
                        String author = parseString(reader);
                        ret.Author__c = author;
                } 
                if ('category' == reader.getLocalName() ) {
                        String category = parseString(reader);
                    if (ret.Category__c != null) {
                        ret.Category__c = ret.Category__c + ', ' + category;
                    } else {
                        ret.Category__c = category;
                    }
                    if (ret.Category__c.length() > 250) {
                        ret.Category__c = ret.Category__c.substring(0,249);
                    }
                }  
                if ('pubDate' == reader.getLocalName() ) {
                        String pubDate = parseString(reader);
                        ret.Published__c = convertRSSDateStringToDate(pubDate);
                } 
                if ('description' == reader.getLocalName() ) {
                        String description = parseString(reader);
                        ret.Lead_Copy__c = description;
                }                 

            }
            // Always use hasNext() before calling next() to confirm 
            // that we have not reached the end of the stream
            if (reader.hasNext()) {
                reader.next();
            } else {
                isSafeToGetNextXmlElement = false;
                break;
            }
        }        

        return ret;

    }

    String parseString(XmlStreamReader reader) {
        String ret = '';

        boolean isSafeToGetNextXmlElement = true;
        while(isSafeToGetNextXmlElement) {
            System.debug('****EVENTTYPE' + reader.getEventType());
            if (reader.getEventType() == XmlTag.END_ELEMENT) {
                break;
            } else if (reader.getEventType() == XmlTag.CHARACTERS) {
                System.debug('****Characters |' + reader.getText() + '|');
                ret = ret + reader.getText();
            } else if (reader.getEventType() == XmlTag.CDATA) {
                System.debug('****CDATA');
                ret = reader.getText();
            }
            // Always use hasNext() before calling next() to confirm 
            // that we have not reached the end of the stream
            if (reader.hasNext()) {
                reader.next();
            } else {
                isSafeToGetNextXmlElement = false;
                break;
            }
        }
        return ret.trim();
    }

You get the idea. The read method takes the whole document and looks for each item, parseItem looks for the individual fields I care about, and parseString gets the content out of those fields. Good times.
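The same read / parseItem / parseString flow translates outside Apex, too. Here’s a rough Node.js sketch of the same idea using a naive regex scan (fine for a demo; a real XML parser is the right tool in production):

```javascript
// Extract {title, link} pairs from raw RSS text. Mirrors the
// Apex flow: find each <item>, then pull fields out of it.
function readRss(document) {
	var entries = [];
	var items = document.match(/<item>[\s\S]*?<\/item>/g) || [];
	items.forEach(function (item) {
		entries.push({
			title: firstTag(item, "title"),
			link: firstTag(item, "link")
		});
	});
	return entries;
}

// Return the trimmed text content of the first <tag> in xml.
function firstTag(xml, tag) {
	var m = xml.match(new RegExp("<" + tag + ">([\\s\\S]*?)</" + tag + ">"));
	return m ? m[1].trim() : null;
}
```
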

Code is on Github. As always, you can get a free Developer Edition with a few quick clicks.

Improvements? Comments? Let me know.
