I’ve just published a companion site for my free app Are You OK?.

The app is aimed at people who want to regularly check on the status of family or friends who may, for example, live alone and be vulnerable to accidents such as a fall at home that leaves them unable to call for help. It’s something like the reverse of a panic-button system: if they don’t press a button every few hours, it sends an SMS message to selected contacts asking them to check in.

Head over to the website to read more about the app and find the download link.

When asking “should I use a Fragment or an Activity?” it’s not always immediately obvious how you should architect an app.

My advice is to try to avoid a single “god” Activity (h/t Eric Burke) that manages navigation between tens of Fragments – it may seem to give you good control over transitions, but it gets messy quickly*.

My go-to is always a combination of Activities and Fragments. Here are some tips:

  • If it’s a distinct part of an app (News, Settings, Write Post), use a new Activity. This Activity may be fairly lightweight, simply inflating a Fragment in its layout XML or in code.
  • For everything else use Fragments.
  • This gives you flexibility when combining Fragments in Activity layouts for tablet.
  • Create a BaseActivity class which handles setup/styling of the ActionBar and DrawerLayout if you have that kind of navigation.
  • Nullify or customise the transitions between Activities if, for example, you don’t want an obvious transition when an ActionBar is already in place (and you can make use of the new Android L Activity transitions for smooth animations).
  • Fragments don’t need to be visual; an Activity can use the FragmentManager to create a persistent headless Fragment with setRetainInstance(true), whose job may be to perform a background task (update, upload, refresh) – this means the user can rotate the device without destroying and recreating the Fragment, and it is sometimes an alternative to binding to a Service in onResume().

Some good sources on how to architect apps – as always, the Google I/O Schedule app:


and Eric Burke’s 2012 talk, around half-way through:


*When does it get messy?

  • When dealing with deeper hierarchies, and with navigational requests that come from a user action within a Fragment.
  • When you need the ActionBar to be in overlay mode (for a full screen experience) but only in certain screens.
  • When you need to create new tasks (either shooting off to another app and back, or allowing other apps to start Activities in your app, such as with a Share action).
  • There are many more, please feel free to add some in the comments if you can think of any.

There are two types of test I’ll describe below. First, Apple HLS streams (HTTP Live Streaming over port 80), supported by iOS and Safari, and also by Android (apps and browser). Then Adobe’s RTMP over port 1935, mostly used by Flash players on desktop, which covers browsers like Internet Explorer and Chrome. These tests apply to Wowza server, but I think they’ll also cover Adobe Media Server.

All links to files and software mentioned are duplicated at the end of this post.

It’s worth noting that you can stick to HLS entirely by using an HLS plugin for Flash video players such as this one, and that is what we’re doing in order to make good use of Amazon’s CloudFront CDN.

For the purpose of testing you may also wish to simulate some live camera streams from static video files, see further down this post for info on how to do that on your computer, server or EC2.

Testing RTMP Live Streaming with Flazr

In this test we want to load test a Wowza origin server itself to see the direct effect of a lot of users on CPU load and RAM usage. This test is performed with Flazr, via RTMP on port 1935.

This assumes you’ve already set up your Wowza or Adobe Media server, for example by using a pre-built Wowza Amazon EC2 AMI. We’re using an m3.xlarge instance for this test as it has high network availability and a tonne of RAM – and we’re streaming 4 unique 720p ~4Mbit streams to it, transcoded to multiple SD and HD outputs (CPU use from this alone is up to 80%).

Installing Flazr

First, for the instance size and test configuration I’m using, I modified Flazr’s client.sh to increase the Java heap size to 8GB, otherwise you run out of RAM. Next, FTP and upload (or wget) Flazr to a directory on your server/EC2 instance. Then SSH in and:

sudo apt-get install default-jre
cd path/to/flazr
chmod +x client.sh
./client.sh -host yourserver.com -version 00000000 -load 1000 -length 60000 -port 1935 -app yourAppName yourStreamName

The order of parameters does seem to matter in later versions of Flazr, but either way this test runs for 60 seconds with a load of 1000 viewers. Given all the transcoding, our CPU was already feeling the pain, but there was no sign of trouble. We managed 4500 viewers before anything started to stutter in our test player on another m3.xlarge instance.


Wowza CPU Usage

Of course this only matters if you are not using a CDN, but it’s good to know this EC2 instance can handle a lot of HD viewers.

Testing HLS Live Streaming (or a CDN such as Amazon CloudFront) with hlsprobe

Onto HLS streaming, the standard for mobile apps and sites. We have used Wowza CloudFront Formations to set up HLS caching for content delivery, so that we can handle a very large number of viewers without impacting the CPU load or network throughput of the origin server, and to give us greater redundancy. Since CloudFront works with HLS streams, we are not using RTMP for this test, so we cannot use Flazr again. To test HLS consumption – that being the continuous download of .m3u8 playlists and their linked .ts video chunks – we can use a tool called hlsprobe, which is written in Python.

If you’re on a Mac and don’t have Python, I recommend installing it via brew to get up and running quickly. If you don’t have brew, get it here.

#on a mac
brew install python
#on ubuntu/amazon
sudo apt-get install python

hlsprobe also relies on an SMTP server running – it doesn’t need to be a fully functional one:

#on mac
sudo postfix start
#on ec2, this auto-starts
sudo apt-get install postfix

Then to install hlsprobe’s dependencies and hlsprobe itself:

pip install m3u8
pip install PyYAML
git clone https://github.com/grafov/hlsprobe
cd hlsprobe

A sample config is linked at the end of the post.

Running hlsprobe is as simple as this (note the -v verbose flag; you can turn that off once you have it working):

python hlsprobe -v -c config.yaml

Now if you fire up the Wowza Engine Manager admin interface, you should see the connection count and network traffic rise. If you’re testing a CDN such as CloudFront, you should note that your origin’s CPU usage does not increase substantially as you add thousands of clients.

Simulating cameras to Wowza via Node.js

It’s good to be able to simulate live streams at any time, either from your computer or, in my case, from some EC2 instances. To do this I’ve written a simple Node.js script which loops a video, optionally transcoding as it goes. I recommend against transcoding due to high CPU use and the resulting frame loss; in my sample script I pass the video and audio through directly, as the video already uses the correct codecs, frame size and bitrate via Handbrake.

The script runs ffmpeg, so you’ll need to install that first:

#on a mac
brew install ffmpeg
#on ubuntu/Amazon you'll have to install/compile ffmpeg the usual way

Edit the js script to point to your server, port, and video file, then run the script with:

node fakestream.js

If the video completes, it’ll restart the stream, but there will be a second of downtime. Some video players automatically retry, but to be safe make sure your video is long enough for the test.

These are just a couple of ways of load testing a live streaming server. There are 3rd-party services out there, but we’ve not had great success with them so far, and this way you have a lot more control over the test environment.


fakestream.js – NodeJS script to simulate live streams
config.yaml – Sample config for hlsprobe
hlsprobe – Tool for testing HLS streams
Flazr – Tool for testing RTMP streams
OSMF-HLS – OSMF HLS Plugin to support HLS in Flash video players

If you use the excellent Postman for testing and developing your APIs (and if you don’t yet, please give it a try!) you may find this little node script helpful when generating documentation.

It simply converts your downloaded Postman collection file to HTML (with tables) for inserting into documentation or sharing with a 3rd-party developer. The Postman collection is perfect for sharing with developers as it remains close to “live documentation”, but sometimes you need a more readable form.
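As a rough illustration of the idea (not the actual script – the collection shape and field names below are simplified assumptions, real collection files contain many more fields), the conversion boils down to something like:

```javascript
// Hypothetical sketch: turn a simplified Postman-style collection object
// ({ name, requests: [{ name, method, url }] }) into an HTML table.
function collectionToHtml(collection) {
  const rows = (collection.requests || [])
    .map(r => "<tr><td>" + r.method + "</td><td>" + r.url +
              "</td><td>" + r.name + "</td></tr>")
    .join("");
  return "<h1>" + collection.name + "</h1>" +
    "<table><tr><th>Method</th><th>URL</th><th>Name</th></tr>" +
    rows + "</table>";
}
```

The real script reads the downloaded JSON file, runs each request through a template like this, and writes the result out ready to paste into your docs.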


I’ve recently finished work on an app that registers itself as a handler for a given file extension – let’s call it “.mytype” – so if the user attempts to open a file named “file1.mytype” our app launches and receives an Intent containing the file’s location, so its data can be imported. Specifically, I wanted this to happen when the user opened an email attachment, as data is shared between users via email attachments in this app.

There are many pitfalls to doing this, and the Stack Overflow answers I saw given for the question had various side-effects or problems. The most common was that your app would appear in the chooser dialog whenever the user clicked on an email notification, for any email – not just those with your attachment. After some trial and error, I came up with this method.

Create IntentFilters in AndroidManifest.xml

The first step is to add <intent-filter> nodes to the relevant <activity> node of the AndroidManifest.xml. Here’s an example of that:

<intent-filter>
  <action android:name="android.intent.action.VIEW" />
  <action android:name="android.intent.action.EDIT" />
  <category android:name="android.intent.category.DEFAULT" />
  <data android:mimeType="application/mytype" />
</intent-filter>
<intent-filter>
  <action android:name="android.intent.action.VIEW" />
  <action android:name="android.intent.action.EDIT" />
  <category android:name="android.intent.category.DEFAULT" />
  <data android:mimeType="application/octet-stream" />
</intent-filter>

Something to note here: I’ve specified a filter for both the “application/mytype” mime type and the more generic “application/octet-stream” mime type. The reason is that we can’t guarantee the attachment’s mime type has been set correctly. We have iOS users and Android users sharing timers via email; with iOS the mime type is set, but on Android – at least in my tests on Android 4.2 – the mime type reverts to application/octet-stream for attachments sent from within the app.


I initially put these IntentFilters on the “home” Activity of my app, however I soon started encountering security exceptions in LogCat detailing how my Activity didn’t have access to the data from the other process (Gmail). I realised this was because my Activity’s manifest entry had its launch mode set to a single-instance mode, which prevents multiple instances of it being launched. This is important when users can launch the app from either the launcher icon or, in this case, via an attachment (I didn’t want multiple instances of my home Activity running, as that would confuse the user). So the solution was simply to create a new “ImportDataActivity” that handled the data import from the attachment, and then launched the home Activity with the Intent.FLAG_ACTIVITY_CLEAR_TOP flag added.

Importing Data

So in ImportDataActivity we need to import the data stored in the attachment, in my case this was JSON. The following shows how you might go about doing this:

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  Uri data = getIntent().getData();
  if (data != null) {
    try {
      importData(data);
    } catch (Exception e) {
      // warn user about bad data here
    }
  }

  // launch home Activity (with FLAG_ACTIVITY_CLEAR_TOP) here…
}

private void importData(Uri data) throws Exception {
  final String scheme = data.getScheme();

  if (ContentResolver.SCHEME_CONTENT.equals(scheme)) {
    ContentResolver cr = getContentResolver();
    InputStream is = cr.openInputStream(data);
    if (is == null) return;

    // read the attachment's contents into a string
    StringBuilder buf = new StringBuilder();
    BufferedReader reader = new BufferedReader(new InputStreamReader(is));
    String str;
    while ((str = reader.readLine()) != null) {
      buf.append(str).append("\n");
    }
    reader.close();

    JSONObject json = new JSONObject(buf.toString());

    // perform your data import here…
  }
}


That’s all that’s needed to register for, and read data from, custom file types.

Sending Email with Attachments

Now, how about sending an email with a custom attachment? Here’s a sample of how you might do that:

String recipient = "",
  subject = "Sharing example",
  message = "";

final Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);
emailIntent.setType("application/mytype");

emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{recipient});
emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, subject);
emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, message);

// create attachment
String filename = "example.mytype";

File file = new File(getExternalCacheDir(), filename);
FileOutputStream fos = new FileOutputStream(file);
fos.write(json.toString().getBytes());
fos.close();

if (!file.exists() || !file.canRead()) {
  Toast.makeText(this, "Problem creating attachment",
      Toast.LENGTH_SHORT).show();
  return;
}

Uri uri = Uri.parse("file://" + file.getAbsolutePath());
emailIntent.putExtra(Intent.EXTRA_STREAM, uri);

startActivityForResult(Intent.createChooser(emailIntent,
        "Email custom data using..."),
    REQUEST_SHARE_DATA);

Please note that “REQUEST_SHARE_DATA” is just a static int constant in the class, used in onActivityResult() when the user returns from sending the email. This code will prompt the user to select an email client if they have multiple apps installed.

As always, please do point out any inaccuracies or improvements in the comments.

The latest Android app I’ve been working on for Runloop – the hugely successful iOS interval timer Seconds Pro – is now live. It’s packed with the following features:

• Quickly create timers for interval training, tabata, circuit training
• Save your timers, as many as you need
• Organize Timers into groups
• Text to speech
• Install timers from the timer repository
• Send your timers to your friends
• Full control over every interval
• Assign music to intervals or timers
• Large display
• The choice of personal trainers up and down the country


You can download the app now from the Google Play Store.

If you’re looking for high quality Android development, head over to my company’s website – Valis Interactive.

A tutorial I wrote for .NET Magazine is now up on their site. This tutorial takes you through the basics of getting NFC working with Android 4.0+ with a “Top Trumps” like demo. It covers both reading and writing data to/from NFC tags, stickers or cards.


Head over to .NET Magazine to read the tutorial!

We had what was probably the first BBQ weather of the year over the weekend, but I wouldn’t know about that. Instead I spent the time coding away at the NFC Hackathon (sponsored by O2) with my fellow team members George Medve and Aaron Newton.

The idea was to spend 28 hours designing and coding something that made use of NFC (Near Field Communication). We were supplied with NFC enabled Galaxy S2s and some useful SDKs from Proxama and BlueVia for tracking NFC campaigns, making payments and tracking users.

We spent the night before the event thinking about just what we could do that was new. Even at this fledgling stage it felt as if everything had been done in some way already; we needed something unique. One idea we explored was transforming shopping by allowing customers to scan NFC price stickers in the aisles instead of at the till, simply weighing their shopping at the self-checkout (to reduce unpaid bagging) and scanning their phone to transfer the shopping list and payment. Fortunately we didn’t go with this idea, as another team at the event did (albeit without the weighing part).

At some point that night another idea came to me: “StreetScreen”. We could allow retailers and advertisers to interact directly with customers by using an NFC sticker in shop windows to initiate a connection between the screen and the phone, and, with a multi-user server, allow the customer to control the screen in real time.

At the event we used technologies like node.js, HTML, JavaScript and Flash to create some demos, including browsing and rotating products, buying flowers for Mother’s Day, and even a 2-player game (see the “Connect 4” game pictured). I’m pleased to say that with this we won the Finance category prize sponsored by Visa. Thanks to the truly excellent Isobar and the sponsors – the whole event was a lot of fun, and I think I’ll be looking forward to developing a product around NFC in the near future.

The potential applications for this technology are endless. The number of NFC enabled handsets is expected to reach 1 in 6 by 2014, but that’s not going to stop us pushing the envelope in the meantime.

If you are interested in using this technology in your campaign please get in touch via the contact form. You can read more about the event over at the Isobar site.

My latest Android project is now live. This app for FanChants.com provides access to 20,000 real football chants as sung by fans all over the world. Chants include lyrics, and through an in-app purchase chants can be set as your phone’s ringtone.


View FanChants over at Google Play

I’m pleased to announce a game we’ve been working on is now out. A collaboration between The Creation Agency and Bitmode (my previous home), we bring you The Great Snowball Fight!

Snowball Fight

The game is played over Google Maps, launching virtual snowballs at unsuspecting players in order to rank up, earn points and even win prizes from retailers you hit. You can also add buddies, connect via Facebook and receive special powerups.

Utilising Flash with AIR 3 and native extensions, we were able to build a game for iOS, Android and also PC. The game uses native extensions for deeper platform integration, such as the compass sensor or push notifications, as well as GPS to pin-point your location.

Head on over to the site to download the game and get throwing some snowballs!

Update: See comments for iOS compass extension source.