
With over 3 million published mobile apps, how do you make yours stand out?

A good idea is a useful starting place, but it isn’t enough. You need to know in real time what your users are experiencing, respond nimbly to their likes and dislikes, and rapidly fix any problems they have.

Apps are easy to publish. Anyone can do it. You don’t need cardboard boxes any more. Someone in a bedroom can sell millions of copies of a game or neat utility just as easily as a big corporation. And a Twitter storm can be just as effective as the efforts of a big old marketing department. But if you do get high up the charts, you are balanced on a knife edge. Apps are difficult to make sticky, and users are fickle. Low price or free-to-use means a low barrier to trying an alternative, and any experience the users don’t enjoy can take you from high success to abject failure in no time.

An essential for success is fast reaction to any bad experience the users might be having. Once upon a time, we could put out our software and then ask our users to send us a letter about their complaints and suggestions. Those days are long gone, and we need to respond very rapidly.

Bad experiences come primarily in two forms:

  • Crashes. The app falls over or at least doesn’t work right.
  • Bad user experience. The user can’t find how to do something, or has to do too much to achieve a simple goal, or easily makes mistakes. Or the feature is just not very attractive.

The key is to know what your users are experiencing before they have time to complain. And don’t just ask your friends. This is what analytics is all about.

Build

You include in your app a small SDK, which monitors crashes and your users’ actions. All your users’ devices send the monitor data back to an analytics platform, where you can read metrics and analyze failures. And you can set up alerts so that if there’s an alarming increase in issues, you’ll know about it straight away.
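To make the monitoring idea concrete, here is a minimal sketch of what such an SDK does under the hood: it installs itself as the app's last line of defense for uncaught exceptions and queues reports for upload. The class and method names (`CrashMonitor`, `report`, `pending`) are hypothetical, not the Application Insights API, and a real SDK would batch and send the reports over HTTPS rather than hold them in memory.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative crash monitor, not a real SDK: captures uncaught
// exceptions and queues a report for later upload.
public class CrashMonitor {
    private final List<String> pendingReports = new ArrayList<>();

    // Install as the process-wide default handler so any uncaught
    // exception on any thread is captured before the app dies.
    public void install() {
        Thread.setDefaultUncaughtExceptionHandler(
            (thread, error) -> report(error));
    }

    // Queue a crash report; a real SDK would also attach device info,
    // app version, and a stack trace, then upload in the background.
    public void report(Throwable error) {
        pendingReports.add(
            error.getClass().getName() + ": " + error.getMessage());
    }

    public List<String> pending() {
        return pendingReports;
    }

    public static void main(String[] args) {
        CrashMonitor monitor = new CrashMonitor();
        try {
            Object o = null;
            o.toString(); // simulate a crash in a feature
        } catch (NullPointerException e) {
            monitor.report(e);
        }
        System.out.println(monitor.pending().size()); // 1
    }
}
```

The analytics portal then aggregates these reports across all devices, which is what makes the "alarming increase in issues" alerting possible.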

Measure

Usage can be measured in many ways. Firstly, there are simple counts of how many times a feature is used or how many users are running the application daily or monthly. Or more usefully, how many times the user achieves an expected outcome with the feature: wins the game or composes the tune or buys the product. Or on a smaller scale, just takes a single step towards a larger goal. Then there are timings and key counts: if a user takes a long time or many taps to complete a task, it might indicate difficulty.

Most of the standard metrics can be collected with minimal or no code required, by the analytics tool using hooks into standard application frameworks. To get metrics that are specific to your application, you insert a few lines of code into your app, which call the analytics SDK to send the counts, logs or metrics to the analytics portal.
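The "few lines of code" pattern might look like the sketch below: each call names an event, and the backend counts and aggregates them. `UsageTracker`, `trackEvent`, and `countOf` are assumed names for illustration, not a real analytics API; in a real SDK the counts would be sent to the portal rather than kept locally.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative event tracker standing in for an analytics SDK client.
public class UsageTracker {
    private final Map<String, Integer> counts = new HashMap<>();

    // Record one occurrence of a named event, e.g. "GameWon".
    public void trackEvent(String name) {
        counts.merge(name, 1, Integer::sum);
    }

    public int countOf(String name) {
        return counts.getOrDefault(name, 0);
    }

    public static void main(String[] args) {
        UsageTracker tracker = new UsageTracker();
        // These calls would be scattered through the app's features:
        tracker.trackEvent("GameStarted");
        tracker.trackEvent("GameStarted");
        tracker.trackEvent("GameWon");

        // A derived metric of the kind discussed below: the ratio of
        // games completed to games started.
        double completionRatio =
            (double) tracker.countOf("GameWon")
                   / tracker.countOf("GameStarted");
        System.out.println(completionRatio); // 0.5
    }
}
```

The interesting work is choosing which events to track, which is exactly the per-feature design question raised next.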

Unlike diagnostic tracing, usage tracking isn’t an afterthought: you design it into each feature. As part of your planning for a new user story, you decide what you’re going to measure, and how you think the measurements will be affected by the new code. Will users complete the game more quickly?  Will the ratio of games completed to games started increase?

Learn

In fact there’s a bigger principle here. How do you plan your upcoming features?

Don’t guess about what will work for your users. Don’t plan inflexibly for weeks or months to come. Build your new feature, measure what happens to it in the live app, and learn from the results. Did people use it? Did it help them achieve their goals? Did it raise your rating in the app store? Did it make you more money? What you do next should depend heavily on the answers. Maybe you’ll scrap the feature. Maybe you’ll tweak the user experience. Maybe you’ll count it a success and go on to the next big idea.

And then, as they say, rinse and repeat. Your software lifecycle is about a succession of small steps, assessing at each stage how the app worked for the users – both in terms of performance (it did what you expected in good time) and usage (whether and how people used it). This approach means you have to be very flexible in your planning. Your initial idea might be substantially modified after multiple iterations.

There are interesting variants of this process. For example, in A/B testing, you show one experience to some of your users and another to the rest, and then see which proves more popular or easier to use. In mobile device apps, updating the software isn’t quite as easy as in a web app: users don’t always take every update as soon as it’s available, and they don’t want to have to do it too often. However, you can still run a planned series of feature experiments. You could build both alternatives of an A/B test into a single release, and have your software call home for instructions about which to expose. After a period of analysis, if it turns out that A is better than B, you can signal all instances to switch to that variant. If you have access to a cohort of beta testers – friends and colleagues or recruits from your existing user base – then you can more easily persuade them to install a new version of your app at frequent intervals. This allows you to test each feature on a relatively tame population before letting it out into the wild. There are utilities that integrate the management of beta releases with analytics and assessment of the results.
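The ship-both-variants, call-home pattern can be sketched as follows. This is an illustrative design under assumptions, not a real remote-configuration API: `AbTestSwitch`, `variantFor`, and `forceVariant` are hypothetical names, and the "server verdict" is simulated by a field where a real app would fetch it over the network.

```java
// Illustrative A/B switch: both variants ship in one release; while the
// experiment runs, users are bucketed deterministically, and once a
// winner is known every instance can be told to switch.
public class AbTestSwitch {
    // null means "no verdict yet from the server".
    private String forcedVariant = null;

    // Stable per-user bucketing, so a given user always sees the same
    // variant for the duration of the experiment.
    public String variantFor(String userId) {
        if (forcedVariant != null) {
            return forcedVariant; // analysis done: everyone switches
        }
        return (userId.hashCode() & 1) == 0 ? "A" : "B";
    }

    // Called when the call-home check learns which variant won.
    public void forceVariant(String winner) {
        forcedVariant = winner;
    }

    public static void main(String[] args) {
        AbTestSwitch ab = new AbTestSwitch();
        String during = ab.variantFor("user-42"); // "A" or "B"
        ab.forceVariant("A");                     // analysis picked A
        System.out.println(ab.variantFor("user-42")); // A
        System.out.println(during.equals("A") || during.equals("B"));
    }
}
```

Hashing the user ID (rather than picking at random on each call) is what keeps the experience consistent for each user while still splitting the population roughly in half.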

Application Insights

Here at Microsoft we are working hard to build an analytics platform that gives you a 360° view of your application, both for mobile apps and for the services that may back them. We are striving to become an integral part of the development cycle, and we are making it extra easy to add our SDKs to new and existing applications, with (almost) zero effort for developers.

Application Insights provides great support for both aspects we’ve discussed:

  • Diagnose crashes so that you can fix the problems before they have a significant impact on your customers.
  • Analyze usage patterns and find out how your customers use your app, so you can prioritize working on the scenarios they find important.

We recently acquired HockeyApp and are busy integrating its technology into Application Insights, so stay tuned for more great features for mobile developers.

Update 04/30/2015: We announced and released support for iOS and Android applications for Application Insights. The SDKs include auto-collection of session telemetry as well as custom events, metrics and of course crash analytics. Crash analytics includes symbolication of iOS stacks and de-obfuscation of Android back traces.

Of course, no matter what tools you use, you still need a good, fresh idea and a heap of luck to build a successful mobile application. If nobody cares about, or is entertained by, what your app does, it won’t rise to the top of the app stores. But then, of course, you probably heard that from the beta test feedback.

Cheers,

Frank
