Applications of NFC Chips

Google recently announced the Nexus S phone, created in partnership with Samsung. This is the latest in the developer phone range, aimed at providing a reference device for the next wave of consumer Android devices running Android OS 2.3 (Gingerbread) and higher.

One of the features of this phone is an NFC chip, which is capable of transmitting and reading data at a distance of up to 10cm. It is compatible with existing systems such as RFID tags: tiny, incredibly cheap slivers of componentry able to store information and be embedded in anything from food packaging to stickers.

We’ve not had long to think about the potential applications of widespread NFC usage, but I can see this breeding some fantastic new ways to use technology. Last night I visited the London Android group, and a few ideas came to mind on the trip home. These are some potentially common use-cases that we’ll see in the next few years…

PayPal / Visa / MasterCard

This will allow us to pay for goods without physical credit/debit cards, or even send a friend or eBay seller money. For in-store purchases, terminals will be in place to swipe your phone against and let you acknowledge the payment; for private transfers you’d simply fire up the PayPal app and type in the amount you wish to send. The NFC chip in the receiving phone can be “activated” passively by swiping the phones together, or opening the app could enable it for 10-15 seconds. Swipe it over your friend’s phone, their details appear on-screen, and you hit “send”; the system then transfers the money to their registered account. The nice thing is that you don’t even need to know the person you’re paying: you could transfer money to someone you’ve never met, securely, on the street.

Bar Tab

An NFC chip is embedded in or stuck to a table in a bar or restaurant. By swiping your phone you’ll be able to uniquely identify your table, place orders, request service and ultimately settle up by taking your phone to an NFC-capable payment device such as Barclays PayWave, found all over the UK. A white-label app could be used at multiple destinations, acting as a central gatekeeper to the UIDs and also providing the order and payment systems, so the phone owner doesn’t need to download an app per destination.

Social Gaming

There are too many possibilities to mention here, but the way the Android Intent and Service systems work provides plenty of incredibly hassle-free ways to combine tag “intents” with existing location-based social gaming. You swipe a tag or phone near another, and the GameService registers the occasion to whatever ends your game needs. Stealth may even come into it, swiping a phone near another without them realising; alternatively, geo-caches could be enabled with RFIDs for single-player games.

Lots of fun to be had with NFC, any other suggestions off the top of your head?

Chrome Web Store: Why Online Apps?

Yesterday Google unveiled the Chrome Web Store. In a nutshell this is an App Store for the Chrome browser and a critical component in the upcoming Chrome OS. The Chrome browser is found on all major desktop operating systems, on the enormous numbers of Android phones and tablets, and the new TVs and set top boxes from companies like Sony, Logitech and reportedly the biggest of them all, Samsung. Chrome OS is a desktop operating system replacement designed to operate entirely in the cloud, using web technologies, with almost negligible startup times for the instant-on, always connected generation. Chrome touts automatic synchronisation of everything from bookmarks, to auto-fill info and passwords, and preferences.

How do regular people use computers?

I must first apologise for the use of the word “regular” to differentiate users here, but the truth is there are a very small number of power users: people who are even remotely interested in how their operating system works and how it can be modified. We manually install software onto a computer because it provides a fast experience, utilising the full power of the machine. The other camp includes everyone else: using Office, the Web (Facebook, webmail), maybe intranet applications at work, and of course shopping and having fun at home. These camps do overlap, but the key point is that the latter camp is the vast majority, and we must always look at technology from their perspective in order to see the bigger shifts.

In a previous post regarding web technologies I proposed that nearly all applications would be web-based within 5 years; I’m an eternal optimist, so any numbers I give need seasoning with a pinch of salt. For a professional programmer or designer this is something really hard to swallow, and the “why a web app store” comments are already proliferating across the Twitter-sphere. I’m always observing how my group of friends use computers. Some have more skill than others, but they rarely install desktop applications any more, and would appear to prefer not to, having already learned how to live comfortably in browser-land.

The word “computer” in the heading is a little outdated; it’s already clear to most that computers are, for the most part, being used in the form of mobile phones, tablets and other devices, not laptops and desktops. This is also key: the inevitable decline of the laptop and the desktop, those specialist and indirect machines, from the computing landscape.

Native and offline, the 90% rule

Traditionally we’ve had to install applications on specific operating systems to make use of certain features and certain hardware. One example would be the ability to write files or a database to disk; these might contain a user’s data from several sessions, and this feature alone is the decider for a large percentage of mobile apps. Another might be to make use of the graphics card to display massive amounts of 3D polygons for a game. Finally, I think notifications are worth mentioning: I’ve had issues with online messengers and Twitter clients, apps that are buried in a tab, unable to do more than flash the title bar to let you know *something’s happening!*

The change that’s occurring is that new web technologies are bringing some of these native-only features to web developers, through HTML5 and Flash, even for the problem of notifications. This means that instead of only being able to produce 50% of the apps you could on desktop, you can now produce 90% and growing.

I like to call this the 90% rule. The added bonus is that the other 10% is typically what our “power users” need, so in effect it’s pretty much 100% of what our “regular” users need. That’s the critical mass required to make the shift away from traditional desktop operating systems, onto something new with many new benefits.

Benefits

There are potential pitfalls and challenges (security, limited connectivity), but also benefits associated with moving entirely to the cloud. I’d like to pick out a few key benefits over traditional computing models.

Backup

Fire, burglary, lost property, batteries a’sploding: in some cases you can lose all your devices in one fell swoop. You back up your computer, right? I back up with Time Machine continually, I run a weekly secondary hard-drive backup, I use SVN for all my projects, and I use Dropbox with an encrypted DMG to make sure I always have some important information to hand. Regular computer users do not do this; then, inevitably, hard drives crash and drinks get spilled. I can’t even begin to count the number of times a non-geek, and even plenty of less paranoid geeks, have simply lost everything bar the postage-stamp-sized photos they uploaded to Facebook. iTunes doesn’t let you re-download music anymore, so you really have to back up. This was easier to deal with when people had physical backups (real photos, CDs, real letters), but it’s increasingly becoming an issue.

The immediate win here, admittedly at the cost of a trust relationship, comes from having all your stuff backed up by professionals with backups of their own across the globe. It would take a fairly major worldwide catastrophe before both the server and your local copies were destroyed.

Updates

Probably one of the original reasons apps started moving to the web was that you can guarantee your users are all running the same version. From their perspective they don’t have to install anything, and they get updates and bug-fixes with zero effort: no checking for updates and waiting, no updater apps popping up every Wednesday because they changed the kerning in iTunes.

Migration Between Devices

With a great many devices at our disposal (the phones in our pockets, the tablet on the table, the laptop under the sofa, the PC in the back room, the TV in the living room and the watch on our wrist), we have many overlapping choices in what we use to go online and do things. If each of these runs its own operating system, operates its own App Store and its own way of installing applications and games, and requires us to pay for a copy of an app that only runs on one specific device we may lose or replace, we are simply limiting ourselves.

One of the things a unified web-based operating system does is turn upside-down the notion that you’re going to show someone that photo which is on your INSERT_DEVICE; instead you pick up any device and off you go. Ubiquitous computing, ultimate convenience.

This also makes sense when the inevitable happens and that new shiny device comes out: you migrate to a new device. The experience on something like an Android phone can be pretty good, should you decide not to switch to another OS: you turn it on, enter your username and password, and all of your settings and apps are immediately re-downloaded. Apple provide a backup mechanism using a cable and a desktop/laptop computer running a copy of iTunes (though that doesn’t solve the problem of a house fire where you lose both devices).

Chrome Web Store

Unfortunately Google have done the usual technology-driven thing and put out a rather functional experience for later finessing, rather than launching with a polished user experience as a more user-focused company might. But it’s not too bad; even in its present state it’s certainly better than the Android Market.

So what do you think to Chrome Web Store and Chrome OS? Comments welcome as always.

Speaking: An Introduction to Android

I’ll be speaking at this month’s London Flash Platform User Group meeting (27th May) on the subject of native Android application development.

The presentation will get you up and running from installing the tools to building and skinning applications.

You can sign up to attend and find out more details here.

UPDATE: Recording here. (Volume is very low, so without external speakers you may have trouble hearing).

jQuery CSS3 3D Animation

Update: The code has now been updated to support jQuery 1.6+, thanks again to Zachstronaught. Please bear in mind the original date on the post below, there may be some inaccuracies due to new browser versions.

I’ve just finished a jQuery extension which adds support for modifying and animating CSS3 transformations in 2D and 3D. This was based on the 2D transform monkey-patch by Zachary Johnson.

I needed this for a project I’m working on which specifically targets Webkit (tablet devices), but I’m releasing the code under the existing MIT license for anyone to use as they wish. I’ve put together a little demo to show how it can be used. This demo has been tested on Safari and Chrome; in Firefox you’ll likely only see the 2D transformations, and I haven’t tried IE.

DEMO

I had very little spare time to put this together so it’s rough around the edges, very basic, and doesn’t really show the full potential of this technique. But hopefully you’ll see that 3D transformations can be used in a subtle manner with your existing JS/CSS, or in a very obvious manner in a game perhaps.

Click image to see demo. Move mouse around to rotate images in 3D, roll-over buttons and click to view transform animations.

(images are CC non-commercial share-alike, link)

Notes

There has been quite some discussion regarding the relative positions of “HTML5” (usually referring to HTML5, CSS3 and JS) and Flash. Steve Jobs made his thoughts clear on the subject, even though that particular letter was full of inaccuracies and errors, in particular with regards to video and touch events. Personally I’ll always use whatever tech works for the job; many of the posts bashing Flash in the last decade have been written by people who haven’t necessarily tried it, often referring to brief experiences from many years ago: the timeline, slow performance and pre-AS3 code. So I thought it might be useful to write up some of the notes I made along the way as I weighed up when I might use this.

CSS Animations/Transitions

So far it’s really only Webkit that supports these, and everything is optimised for fairly non-interactive content: animations are defined ahead of time and aren’t very dynamic. You can do some cool stuff with keyframes (at 10%, 20% etc.), but I see way too many holes if you want to use these in RIAs and games.

Of course, using JavaScript you can pretty much animate things as you wish, which is why I wrote the jQuery extension to support the 3D CSS transformations in the animate() function. The downside is performance: testing in the Webkit browser on the 1GHz Nexus One shows that JS-powered CSS animation will be severely limited on devices, certainly when compared with the performance tests shown for Flash Player 10.1.
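
The JS approach boils down to interpolating the transform values yourself on a timer and writing them back to the element each tick. A minimal sketch of the idea (the function names here are illustrative, not part of the released extension):

```javascript
// Compute the transform string for a progress value t in [0, 1].
// Pure function, so the maths can be tested outside a browser.
function rotateYAt(from, to, t) {
  return "rotateY(" + (from + (to - from) * t).toFixed(2) + "deg)";
}

// Drive it with setInterval, writing the result to the element each tick;
// this per-frame style/layout work is where the CPU cost on devices comes from.
function tweenRotateY(from, to, duration, onUpdate) {
  var start = Date.now();
  var timer = setInterval(function () {
    var t = Math.min((Date.now() - start) / duration, 1);
    onUpdate(rotateYAt(from, to, t)); // e.g. $(el).css("-webkit-transform", ...)
    if (t === 1) clearInterval(timer);
  }, 16); // roughly 60fps
}
```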

I was unable to find a way to perform a circular path (orbit) animation in CSS; does anyone know if this is possible? It’s something very common in UI work: rather than sticking to straight lines, transitions can benefit from a touch of curvature to soften the effect, as can standard (non-mouse-related) “hover” type animations. So again I had to resort to JS, losing the benefit of CSS animations. If anyone knows how to do this I’d really appreciate a comment.
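
For reference, the JS fallback for an orbit is just parametric circle maths; a sketch (element and animation wiring omitted):

```javascript
// Position on a circular path: centre (cx, cy), radius r, angle in radians.
// Stepping `angle` each tick and writing x/y to the element's position (or a
// translate() transform) produces the orbit; easing the angle rather than
// x/y keeps the path perfectly round.
function orbitPosition(cx, cy, r, angle) {
  return {
    x: cx + r * Math.cos(angle),
    y: cy + r * Math.sin(angle)
  };
}
```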

Filters

I can’t seem to find anything that works in Webkit or Mozilla. Here I’m talking about the sorts of things you do to highlight something of importance to the user and subtly improve usability: things like glow and non-box-shaped shadows (I’m aware of text-shadows and box-shadows). I would have thought this was a given when the spec was being written; without filters we could end up with a pretty dated look and feel.

Blend Modes

I really just expected these to be in… there are a few things in Webkit related to this, but it’s a worry: blend modes make for an improved look and feel in modern UIs (especially when dragging things around and objects become obscured). We’ll have to wait and see what happens here. Of course, all of this would be almost moot if the IE 9 team decided Canvas wasn’t too much of a threat and implemented support; as it stands, so many great Canvas experiments are essentially in vain, and so many apps are still only possible using the pixel-manipulation features in Flash Player 8+.

CPU usage / Performance

I was using a jQuery plugin for animating elements on a curve/arc to get the circle animation, but I found CPU usage went straight to 100% with the four animals, so I presume either my usage was wrong (there wasn’t much to it) or there’s something wrong with that plugin.

Safari uses 100% CPU just to run the setInterval() for the first 30 seconds before dropping down to <5%; Chrome doesn’t suffer from this. I’m not sure whether this is a bug in Safari or not; hopefully someone can shed some light on it. Outside of setInterval() there are no built-in ways to do real-time games in JS that I am aware of (Flash has setInterval and Timers too, but it’s much more efficient to use the ENTER_FRAME event, which adds virtually no CPU overhead).
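
For what it’s worth, the setInterval() pattern I’m referring to looks something like this; keeping the update step a pure function at least makes the simulation testable outside the browser (names are illustrative):

```javascript
// One fixed-step update of the game state: pure, so it can be unit tested.
function step(state, dt) {
  return { x: state.x + state.vx * dt, vx: state.vx };
}

// setInterval is the only built-in real-time driver in JS; rendering
// (DOM/Canvas writes) happens in onRender, separate from the simulation.
function startLoop(state, onRender) {
  var dt = 1 / 30; // 30 updates per second
  return setInterval(function () {
    state = step(state, dt);
    onRender(state);
  }, 1000 * dt);
}
```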

If you are doing real-time games in JS, I would probably avoid the overhead of jQuery for the most part and keep things as low-level as possible. The DOM, with its history of laying out mostly static content, doesn’t lend itself amazingly well to high-performance graphics; it’s akin to building a game in the Flex (app) framework: you just wouldn’t. You’d use Sprite/MovieClip etc., which don’t carry the enormous measurement overheads for liquid layout, padding, margins, accessibility and so on. Perhaps Canvas is an option, which unfortunately still means Flash for the next few years (via flashcanvas) due to IE9. SVG could take up a large amount of slack, but that’s about equivalent to FP8, so not particularly exciting.

The other day I saw this video, which appears to confirm my performance worries with HTML. It’s very easy to say HTML provides better performance than Flash until you try to do the same sorts of things people do with Flash, but I imagine the FUD will continue long after people start to question why we aren’t seeing these cutting-edge apps built with HTML.

Cross browser inconsistencies

I mentioned I’m only targeting Webkit; luckily the two big OSs on tablet devices (Apple iPhone OS and Android) both run Webkit. But I am a little worried that something as basic as text-stroke only works in Webkit; there’s nothing very concrete to go on as to who’s going to support what in the coming years. Even Firefox doesn’t support it, and that has a big impact on design: I’d also have to remove the (fairly well-supported) text drop-shadow, because without the text stroke it just looks ugly. The next few years could be a nightmare of frustration and constant set-backs; I’ve gotten used to being 100% confident that anything I can achieve in Flash will work across all browsers, and the thought of hacking and rolling back to simpler times is a pain.

Conclusions

So this has been a learning experience… it certainly works for me when targeting mobile devices for fairly simple apps/games, where you don’t have a lot of heavy graphics or effects to deal with. I can definitely see a huge chunk of current-gen Flash web apps being cut down a little and rewritten in HTML+CSS+JS in order to support Apple devices and favour Web standards. But I fear that if Flash becomes too niche we’re going to take one step forward and two steps back, just as web apps and games were really beginning to rival the desktop, leaving us with a rather uninspiring experience bounded by what you simply can’t do in HTML5/JS/CSS3. In short, it’s only about 60% of the way to Flash Player 10 in raw technical capabilities, and before widespread use we’ll see Flash Player 11, 12, with who knows what.

So it’s not quite as great as I hoped, but I’m gonna jump on board as best I can and push it as far as I’m able. Still, what I’d really love to see is browser vendors pushed in the right direction by us developers demanding some of these things as soon as possible.

Implementing SpellCheck (Squiggly) with the Text Layout Framework (TLF)

I’ve just posted over in the Text Layout forums how I went about implementing Squiggly with “pure” Text Layout Framework… so that’s not using TLF/FTETextField or the Spark components.

This is really just an overview, but it should give you plenty to figure out the steps. I can’t paste the exact code because it’s embedded in a client project, but I do refer throughout to some of the TLF functions you have to make use of. If anyone can suggest improvements, please drop them in the comments.

Copied from the forum post…

Find TextRanges for misspelled words:

  • Get the first Paragraph using textFlow.getFirstLeaf().getParagraph()
  • Loop through all Paragraphs using para.getNextParagraph()
  • For each, run a Regex match (/\b\w+\b/g) on para.getText()
  • Spellcheck each result using Squiggly, and for bad words store a TextRange: TextRange(textFlow, para.getAbsoluteStart()+index, para.getAbsoluteStart()+index+word.length-1); where index is incremented to the position following the end of each word (match or no match).
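
The scan above is ActionScript, but the word/offset logic is the same in any language; here it is sketched in JavaScript (isMisspelled is a hypothetical stand-in for Squiggly’s dictionary check):

```javascript
// Scan one paragraph's text for misspelled words and return their absolute
// start/end offsets within the flow, mirroring the TextRange construction above.
function findBadRanges(paragraphText, paragraphStart, isMisspelled) {
  var ranges = [];
  var wordPattern = /\b\w+\b/g;
  var match;
  while ((match = wordPattern.exec(paragraphText)) !== null) {
    if (isMisspelled(match[0])) {
      ranges.push({
        start: paragraphStart + match.index,
        end: paragraphStart + match.index + match[0].length - 1,
        word: match[0]
      });
    }
  }
  return ranges;
}
```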

Spellcheck class:

  • Created a static SpellCheck class which loads language dictionary (downloaded from OpenOffice website) and a UserDictionary (stored as a simple text file)
  • Added methods for checkTextFlow(textFlow:TextFlow):Array which returns an array of “bad” TextRanges, a method for getSuggestions(word:String):Array and methods for checkWord() and addUserWord().

Context menu:

  • Extend ContainerController and override menuSelectHandler()… use an instance of my CustomContainerController when creating the TextFlow:
    textFlow.flowComposer.addController( new CustomContainerController() );
  • Loop through flowComposer.numLines, obtain each TextFlowLine from flowComposer and therefore each TextLine.
  • Determine whether textLine.getBounds(container).contains(container.mouseX, container.mouseY) to find the line they right-clicked.
  • Get the “raw text” for the line: textLine.textBlock.content.rawText.substr(textLine.textBlockBeginIndex);
  • Find the atom clicked: textLine.getAtomIndexAtPoint(container.stage.mouseX, container.stage.mouseY).
  • Find the starting atom of the word (reverse lookup for a word boundary, i.e. " " or the first char in the raw text).
  • Determine the word itself by using regex to find the first word from this starting point (/\b\w+\b/).
  • Add ContextMenuItems for “add to dictionary” and suggested words (for the latter also store start/end atoms in ContextMenuItem.data).
  • When user clicks a suggested word, use (interactionManager as EditManager).selectRange(data.start, data.end); (interactionManager as EditManager).insertText(data.word) where “data” is the data property of the clicked ContextMenuItem.

In my “EditableTextField” class, I call my SpellCheck.checkTextFlow() to get the bad TextRanges and…

  • Loop through the badRanges array.
  • Loop from range.absoluteStart to range.absoluteEnd for each TextRange.
  • Find TextFlowLine for “i” in this loop, and therefore the TextLine: containerController.flowComposer.findLineAtPosition(i); textFlowLine.getTextLine();
  • Find atom bounds using textLine.getAtomBounds(charIndex); where charIndex is: i - textFlowLine.absoluteStart.
  • Underline: drawRect(bounds.x + textLine.x, bounds.y + textLine.y + bounds.height - textLine.descent - 1, bounds.width, 3)

I’m sure there is a more elegant way, but this seems to work. I believe I read Adobe are working on Squiggly for pure TLF, if not, I hope this helps somebody get on the right track.

Flash/Flex Builder <-> Flash Professional Asset Workflows

This post discusses the various workflows for producing SWFs with the standalone compiler that use graphical assets and animations created in Flash Professional (“Flash Pro”). At the time of writing, the latest version of Flash Pro is CS4, with CS5 having been briefly available in beta. Specifically, we look at the methods that involve exporting SWCs and using the [Embed] metatag within class files.

Recently I posted a bug report regarding the [Embed] metatag, which led me to write this post in order to find out whether people are happy with their current workflows and how well others receive projects when it comes to handovers and maintenance.

Background

So you’re building an application, a game, or a website. Immediately you have two options when it comes to setting up your Flash project. You can create an FLA file, assign a document class and get coding, or you can fire up Flash(/Flex) Builder/FDT/FlashDevelop et al., create a new Flex or AS3 project and compile it using the Flex SDK compiler. Pretty much every time I’ll opt for the latter, because of the increased reliability and faster compile times.

Even if you use the first option, compiling in Flash Pro itself, you may actually be editing your code in Flash Builder or some other IDE; the point is which compiler is being used: in the former it’s Flash Pro, and in the latter it’s mxmlc/compc, the Flex SDK compilers. For the purposes of this post we’ll be looking at Flex or AS3 projects using the Flex SDK compiler, and how to get assets from an FLA into your project.

I’ve written the following to the best of my knowledge, but there are always tips and tricks that I may be missing, perhaps an entire workflow. If you spot any inaccuracies or flaws please let me know in the comments and I’ll change it ASAP.

Why an FLA at all?

You probably already know you can embed PNGs, SVG and other file types in your classes and never go near an FLA to get graphics into a SWF. When it comes to animations, you may use TweenLite or GTween to perform transitions, but when it comes to frame-based animation, character animation, or simply buttons and panels with hand-made flourishes you may want to use an FLA to create and animate these using the powerful timeline, graphics and animation tools within Flash Pro.

It’s at this point you ask yourself, how do I get these assets from a FLA into my project if I’m not compiling my project in Flash Pro?

The Workflows

Here are five methods for getting assets from an FLA into a Flex or “pure AS3” project. I’ve excluded those which are MXML-only, as this post is not about Flex specifically.

1. Publish SWC from FLA

This method involves linking library symbols to classes, so instead of “MySymbol” in the class field, you have “com.package.MyClass”, which refers to a class file in one of the FLA’s classpaths. You must then turn on SWC export in the FLA Publish Settings panel, and most likely turn off “Automatically declare stage instances” in the ActionScript 3 settings panel to avoid errors where your class has defined properties for items on stage. Finally, add all of the required classpaths that the linked classes will be using (that could include 3rd-party libraries) to avoid any compile-time errors.

When you publish the SWF it’ll also publish a SWC in the same folder. You add that SWC to your AS3 project and the classes/symbols compiled into it become available for use in your code.

Pros:

This method keeps any timeline ActionScript, great for complex, nested or multi-state animations.

Cons:

You have to compile the FLA every time you change a class linked to a symbol; in reality that can mean toggling to Flash Pro, exporting the SWC, toggling back to Flash Builder, refreshing the project to rebuild the SWC indexes, and then recompiling the project there too.

You have to make sure the classes your symbols are linked to are not in the main project source directory (or any directory the project is set to reference as source code). If you don’t do this, you will likely not see your graphics/animations appear, because due to the compilation order the Flex linker will find your class definition first, not the definition inside the SWC.

You have to add all required classpaths to the FLA, possibly every classpath your project is using.

Flash Builder will not report errors in the code used and you lose the ability to Ctrl/Cmd+Click to go to source.

You don’t have access to items on stage immediately, the workaround is pretty painful (link).

Summary:

Whilst this is really the only sensible method for keeping timeline code, the cons make it a really unintuitive and frustrating process. If anyone can suggest a way to improve it, I’ll owe you quite a few beers.

2. [Embed] tag above a class declaration

Example:

Code:

package my.package {
  [Embed(source="assets/some.swf", symbol="SymbolName")]
  public class MyClass {
    // code
  }
}

Here we’re simply using the SWF produced by an FLA to store our symbols. The FLA does not link any symbols to any classes itself, the library is simply full of vanilla MovieClips. In our class files we add the [Embed] tag and that binds the symbol from the SWF to the class, so that when we create a new instance of that class, we will also get the graphics from the library symbol.

Pros:

You don’t have to re-compile the FLA unless your graphics actually change.

You can spend more time in your coding environment and not toggle back and forth between it and Flash Pro.

You get real-time compilation errors in the Problems panel of Eclipse because the code is not coming from a SWC.

Cons:

It strips ActionScript from the timeline of your symbols. If your symbol is an animation and you had a few stop frames in there, perhaps one per labelled segment of animation, you’ll lose these and the animation will just run through on loop. To circumvent this, coders use addFrameScript(5, stop); for every stop frame, or even use lots of frame labels to act as meta-tags for code replacement (link).

Any children of the symbol lose their typing, so if you’ve added a couple of MyButtons or a MyCustomWidget to your symbol on its timeline, those become plain MovieClips. This is a huge problem, which relegates this method to animations only.

3. [Embed] tag above a class property declaration

This method involves using the [Embed] tag above a class property, for example:

Code:

[Embed(source="assets/some.swf", symbol="MySymbol")]
private var MySymbol:Class;
 
// later on in a function...
 
var myInstance:Sprite = new MySymbol();

So you can probably guess this is more of a composition-based approach, which works well for simple graphics; for multi-frame MovieClips you’d type myInstance as MovieClip and tell it to stop().

With this method your instance will either be a SpriteAsset (extends Sprite) or a MovieClipAsset (extends MovieClip); you cannot cast it to a custom class, so for a symbol that is meant to represent a contact form, with an instance of MyButtonClass or even just some TextFields in it, this will fail.

4. Embed metatag to embed a whole SWF

There’s also another attribute available to the [Embed] metatag: mimeType. If you remove the symbol attribute and replace it with mimeType="application/octet-stream", it will embed the entire SWF and preserve the class associations set up in the library, i.e. you won’t have to have the instances typed to Sprite(/Asset) or MovieClip(/Asset).

When you embed a whole SWF in this way, you can then use Loader to get at the classes within:

Code:

[Embed(source="assets/some.swf", mimeType="application/octet-stream")]
private var MySWF:Class;
 
// within a method
var bytes:ByteArray = (new MySWF() as ByteArray);
var loader:Loader = new Loader();
loader.loadBytes(bytes, new LoaderContext(false, ApplicationDomain.currentDomain));
 
// wait for loader to dispatch Event.COMPLETE and...
var myClass:Class = loader.contentLoaderInfo.applicationDomain.getDefinition("com.package.MyClass");
var myInstance:DisplayObject = new myClass();
// myInstance is now an instance of the class linked to it

Pros:

Great way to provide a library symbol a class/some behaviour without having to constantly re-compile the FLA.

Cons:

I think you’ll agree that’s not a great option if you have a lot of symbols or symbols which have others nested within. There are libraries to help with this, but…

No strict typing.

5. Runtime loading a SWF

Perhaps the oldest option here: loading a SWF at runtime allows you to pull out symbols/classes using the applicationDomain.getDefinition() function as described in method 4. If you’re already familiar with the getDefinitionByName() utility this works pretty much the same, except that you are targeting a specific SWF’s classes.
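As a minimal sketch of the runtime-loading flow (the SWF name and class path here are placeholders — substitute your own):

Code:

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoaded);
// load into the current ApplicationDomain so the definitions are shared
loader.load(new URLRequest("assets.swf"),
            new LoaderContext(false, ApplicationDomain.currentDomain));
 
function onLoaded(e:Event):void {
    // the magic string — must match the linkage name in the FLA exactly
    var symbolClass:Class = Class(loader.contentLoaderInfo.applicationDomain
                                        .getDefinition("com.package.MySymbol"));
    var instance:DisplayObject = new symbolClass();
    addChild(instance);
}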

Pros:

Great for loading content that rarely changes, such as fonts.

Cons:

Magic strings. Making a string a constant does not make it any less hacky: if you change the string in your FLA, your constant is meaningless, and you’ll only find out at runtime, and only if that piece of code executes.

You’ll have to export your SWF from an FLA or by using one of the other techniques, which means you’ll also inherit some of the problems associated with those.

Feedback

So what route do you take? Please also state the type of project: application, game or website; it’s quite possible that people building applications simply never encounter these issues, due to their primarily scripted animation and simple, non-hierarchical graphical assets.

You may also want to consider how easy it will be to start compiling in Flash Professional in order to take advantage of CS5’s export to iPhone; this can affect your decision on which method to use.

Overall I feel that whilst choice is great, each method has pretty serious downsides, and I’ve yet to find one that doesn’t make for a less than pleasant rinse-and-repeat workflow. It would be nice to be able to use Flash Pro to speed up the process of preparing game assets, laying stuff out on stage and animating on the timeline, but there are too many downsides associated with this, and too many idiosyncrasies to learn just to get things working. I hope in future the two tools can be brought closer together, perhaps through the new file format.

As usual, all comments welcome, unless you’re the bargain-dishwasher spammer.

Further reading

http://www.bit-101.com/blog/?p=853

http://www.bit-101.com/blog/?p=864

http://gskinner.com/blog/archives/2007/03/using_flash_sym.html

http://www.airtightinteractive.com/news/?p=327

Also the Flex Livedocs, which certainly don’t explain all of these methods in nearly enough detail or context, making the Flex SDK a darker art than it could be.

Nexus One Review

I was lucky enough to receive one of the first waves of Nexus Ones (N1) from Google’s direct online shop. Before I go on, the shopping experience was a little too slick IMHO. I signed in with my Gmail account, clicked buy, clicked confirm and it was shipping; if you’ve used Google Checkout before, they will likely have your card details and address already. You do have 15 mins to cancel the order, though. When you see Google’s ever-growing list of properties getting together, you can see why they are so immensely disruptive.

So, the Nexus One: possibly erroneously construed as the “Google Phone”, when in reality Google have already sold two Android dev phones. The N1 is more like the first of many in a Google Phone shop, which if you ask me is pretty much like Phones 4 U: a way of purchasing a SIM-free or network-contracted phone from a broker.

I was a little hesitant about this phone; it was inevitably going to be compared to the iPhone due to the way it was positioned, its capabilities, the Android Market and the form factor (albeit slimmer). So with that, on with the review…

Nexus One

Hardware

It has the usual “superphone” (more on that another time) credentials: a large capacitive touchscreen (albeit a much-improved OLED) and sensors galore, but the most standout feature is probably the 1GHz Snapdragon CPU. It’s a huge risk to put such a beast in a small device with current battery technology; this thing has the potential to drink a lithium-ion like a student with a beer bong. The Acer A1 (which I had very briefly) suffers from this: it just cannot tame the CPU to satisfy the tiny battery. And it’s not just the CPU burning through electrons: Android itself is architected to be a multi-tasking, never-quit-an-app OS. But I’m pleased to say the N1 deals with this well without resorting to task-killer apps. The battery is large enough (though if a 2000mAh one came out, of course I’d get it), and it manages memory hyper-effectively through Android 2.1 and a couple of power management chips on the motherboard.

It’s fair to continue to make comparisons to the iPhone 3GS. There are a few things the iPhone wins out on, which, considering it’s an older device, is still encouraging, but on the whole the N1 is equally polished, with a super-hard yet soft-to-the-touch Teflon coating. It’s what the iPhone might look like if aesthetics weren’t so highly weighted in the design (that’s not a dig; it’s the design philosophy that makes Apple products so desirable). Every lesson and trick learned from building and using the iPhone has been considered by HTC.

The N1 comes with 512MB of RAM (yep!), but only a 4GB SD card, in order to reduce the purchase cost. The point is that it’s a removable micro-SD card: these things already cost peanuts, come in sizes up to 32GB (for the iPhone comparison), and will continue to fall in price as the sizes go up this year.

The camera is a good 5MP shooter, with intelligent focus, LED flash and a good lens. I think the one to look out for in this department will be the Sony Ericsson X10, which has all their camera know-how surrounding an 8MP sensor, ready to blitz the competition. Without going on to talk about Android itself just yet, suffice to say your immediate sharing options are impressive.

There are plenty of little touches which make it pleasant to use. The myriad sensors: proximity, to dim the screen and prevent accidental touches; a compass, to support immersive augmented reality; and a trackball, which if you ask me provides that essential accuracy required for some tasks that touch-screens can really let you down on, and doubles up as a tri-color indicator for notifications. The combination of these sensors and a powerful CPU really starts to make sense when you try applications like Google Goggles. This is a visual search app: you point the camera, shoot, and it scans the image for text and details, then recognises and brings up results for books, barcodes, media and paintings, scans business cards, and plenty more. The thing is, it’s so fast. The scan takes several seconds on the Acer; the N1 does it in one swipe, and it also adjusts the flash brighter and dimmer until it gets a good image.

I had heard of its secondary mic, used for noise cancellation, but I didn’t expect to have someone remark on the quality of the call the first time I made one: comparable to a good-quality land-line.

Software

Perhaps a killer feature of Android is Google’s role on the net. If you are a Google user, you will get a shockingly good setup experience. I entered my email address and password, it downloaded my calendars, Gmail and contacts (with photos and maps), and that was it: setup was one click. Even scarier, it also populated my Gallery with live images from my Picasa account, which I use as a backup for Flickr, though I may switch over now.

Android is through and through a web OS. You really get a feeling for the interconnectivity between apps and services on Android. Not only does it allow developers to write any app they desire with no approval required, you can write background services, fullscreen apps, widgets or live wallpapers. The OS itself is built on top of a system of notifications and intents that allow these things to communicate and interact in a secure manner. So when you open a photo you get sharing options for all the apps that have registered as such: Picasa, Flickr, Email, SMS, from built-in to 3rd party and back again.

For the developers reading this: you can write in Java (optionally using XML layouts), WebKit (HTML/JS/CSS) or native C, using the ADT plugin for Eclipse and the supplied emulator. The way it has been built also allows you to leverage all the layers below, so you can write an app in JavaScript using WebKit and embed a Java or native C class exposed as a JavaScript function, for real number-crunching power.

The OS itself is responsive and polished, but it doesn’t sacrifice what is so important in devices you rely on when you need something done fast, devices such as phones and cars. When designing a touch-screen device it’s easy to lose speed and efficiency amongst gloss and animation; that’s why the N1 has a Car Home app that provides instant voice-enabled access to navigation, search and calling (I’ve heard this app can be launched whenever you put the phone in a car docking cradle). On top of that, every text input is voice-enabled: you can speak your search input or your SMS messages. This can be a complete joke on some devices, but Google does it on a server, a server that has been learning from millions of Google Voice transcripts over the last couple of years, and this makes it very accurate indeed.

App-Store vs. Android Market

I can’t believe those professional journalists saying that there’s no competition because the App-Store has ~120k apps and the Android Market only has ~20k. Surely that’s a given, considering how long these devices have been out; the Android Market targets a much, much wider range of devices from several manufacturers, from phones to tablets and TVs, and dare I say a great deal more potential customers than the App-Store. It’s just a matter of months.

The purchase experience is definitely better than the App-Store’s in 2.1. The Market app (screenshot) itself is much like the App-Store app: full-screen image previews, top free/paid, and purchase is a single click with instant download and install. Apple have the edge on how it looks, but with Market you can purchase a paid app and refund it within 24 hours. This gets around approval/testing: if it doesn’t work on a brand-new handset, you can just refund it. It also means you don’t always need a trial version (though that can be a good marketing technique).

You can of course also purchase direct from developers, because you do not have to use Google’s own Market, or you can use one of the 3rd-party markets that have sprung up, in particular for adult content.

So that’s it: a pretty positive review so far. I’ll update if anything changes. HTC are one to watch in 2010, that’s a given. Something I’ve taken away from this is that we are finally getting to where us mobile fanatics have been wanting to get for some time: the promise that your mobile would be your primary device, not your laptop or desktop. IMHO, laptops and desktops will be the exclusive domain of software developers.