

[This post is by Chet Haase, an Android engineer who specializes in graphics and animation, and who occasionally posts videos and articles on these topics on his CodeDependent blog at graphics-geek.blogspot.com. — Tim Bray]

One of the new features ushered in with the Honeycomb release is a new animation system, a set of APIs in a whole new package (android.animation) that makes animating objects and properties much easier than it was before.

"But wait!" you blurt out, nearly projecting a mouthful of coffee onto your keyboard while reading this article, "Isn't there already an animation system in Android?"

Animation Prior to Honeycomb

Indeed, Android already has animation capabilities: there are several classes and lots of great functionality in the android.view.animation package. For example, you can move, scale, rotate, and fade Views and combine multiple animations together in an AnimationSet object to coordinate them. You can specify animations in a LayoutAnimationController to get automatically staggered animation start times as a container lays out its child views. And you can use one of the many Interpolator implementations like AccelerateInterpolator and BounceInterpolator to get natural, nonlinear timing behavior.

But there are a couple of major pieces of functionality lacking in the previous system.

For one thing, you can animate Views... and that's it. To a great extent, that's okay. The GUI objects in Android are, after all, Views. So as long as you want to move a Button, or a TextView, or a LinearLayout, or any other GUI object, the animations have you covered. But what if you have some custom drawing in your view that you'd like to animate, like the position of a Drawable, or the translucency of its background color? Then you're on your own, because the previous animation system only understands how to manipulate View objects.

The previous animations also have a limited scope: you can move, rotate, scale, and fade a View... and that's it. What about animating the background color of a View? Again, you're on your own, because the previous animations had a hard-coded set of things they were able to do, and you could not make them do anything else.

Finally, the previous animations changed the visual appearance of the target objects... but they didn't actually change the objects themselves. You may have run into this problem. Let's say you want to move a Button from one side of the screen to the other. You can use a TranslateAnimation to do so, and the button will happily glide along to the other side of the screen. And when the animation is done, it will gladly snap back into its original location. So you find the setFillAfter(true) method on Animation and try it again. This time the button stays in place at the location to which it was animated. And you can verify that by clicking on it - Hey! How come the button isn't clicking? The problem is that the animation changes where the button is drawn, but not where the button physically exists within the container. If you want to click on the button, you'll have to click the location that it used to live in. Or, as a more effective solution (and one just a tad more useful to your users), you'll have to write your code to actually change the location of the button in the layout when the animation finishes.

It is for these reasons, among others, that we decided to offer a new animation system in Honeycomb, one built on the idea of "property animation."

Property Animation in Honeycomb

The new animation system in Honeycomb is not specific to Views, is not limited to specific properties on objects, and is not just a visual animation system. Instead, it is a system that is all about animating values over time, and assigning those values to target objects and properties - any target objects and properties. So you can move a View or fade it in. And you can move a Drawable inside a View. And you can animate the background color of a Drawable. In fact, you can animate the values of any data structure; you just tell the animation system how long to run for, how to evaluate between values of a custom type, and what values to animate between, and the system handles the details of calculating the animated values and setting them on the target object.

Since the system is actually changing properties on target objects, the objects themselves are changed, not simply their appearance. So that button you move is actually moved, not just drawn in a different place. You can even click it in its animated location. Go ahead and click it; I dare you.

I'll walk briefly through some of the main classes at work in the new system, showing some sample code when appropriate. But for a more detailed view of how things work, check out the API Demos in the SDK for the new animations. There are many small applications written for the new Animations category (at the top of the list of demos in the application, right before the word App. I like working on animation because it usually comes first in the alphabet).

In fact, here's a quick video showing some of the animation code at work. The video starts off on the home screen of the device, where you can see some of the animation system at work in the transitions between screens. Then the video shows a sampling of some of the API Demos applications, to show the various kinds of things that the new animation system can do. This video was taken straight from the screen of a Honeycomb device, so this is what you should see on your system, once you install API Demos from the SDK.

Animator

Animator is the superclass of the new animation classes, and has some of the common attributes and functionality of the subclasses. The subclasses are ValueAnimator, which is the core timing engine of the system and which we'll see in the next section, and AnimatorSet, which is used to choreograph multiple animators together into a single animation. You do not use Animator directly, but some of the methods and properties of the subclasses are exposed at this superclass level, like the duration, startDelay and listener functionality.

The listeners tend to be important, because sometimes you want to key some action off of the end of an animation, such as removing a view after an animation fading it out is done. To listen for animator lifecycle events, implement the AnimatorListener interface and add your listener to the Animator in question. For example, to perform an action when the animator ends, you could do this:

    anim.addListener(new Animator.AnimatorListener() {
        public void onAnimationStart(Animator animation) {}
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
        public void onAnimationCancel(Animator animation) {}
        public void onAnimationRepeat(Animator animation) {}
    });

As a convenience, there is an adapter class, AnimatorListenerAdapter, that stubs out these methods so that you only need to override the one(s) that you care about:


    anim.addListener(new AnimatorListenerAdapter() {
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
    });

ValueAnimator

ValueAnimator is the main workhorse of the entire system. It runs the internal timing loop that causes all of a process's animations to calculate and set values, and it has all of the core functionality that allows it to do this, including the timing details of each animation, information about whether an animation repeats, listeners that receive update events, and the capability of evaluating different types of values (see TypeEvaluator for more on this). There are two pieces to animating properties: calculating the animated values, and setting those values on the object and property in question. ValueAnimator takes care of the first part: calculating the values. The ObjectAnimator class, which we'll see next, is responsible for setting those values on target objects.

Most of the time, you will want to use ObjectAnimator, because it makes the whole process of animating values on target objects much easier. But sometimes you may want to use ValueAnimator directly. For example, the object you want to animate may not expose setter functions necessary for the property animation system to work. Or perhaps you want to run a single animation and set several properties from that one animated value. Or maybe you just want a simple timing mechanism. Whatever the case, using ValueAnimator is easy; you just set it up with the animation properties and values that you want and start it. For example, to animate values between 0 and 1 over a half-second, you could do this:

    ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
    anim.setDuration(500);
    anim.start();

But animations are a bit like the tree in the forest philosophy question ("If a tree falls in the forest and nobody is there to hear it, does it make a sound?"). If you don't actually do anything with the values, does the animation run? Unlike the tree question, this one has an answer: of course it runs. But if you're not doing anything with the values, it might as well not be running. If you started it, chances are you want to do something with the values that it calculates along the way. So you add a listener to it, to listen for updates at each frame. And when you get the callback, you call getAnimatedValue(), which returns an Object, to find out what the current value is.

    anim.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        public void onAnimationUpdate(ValueAnimator animation) {
            Float value = (Float) animation.getAnimatedValue();
            // do something with value...
        }
    });

Of course, you don't necessarily always want to animate float values. Maybe you need to animate something that's an integer instead:

    ValueAnimator anim = ValueAnimator.ofInt(0, 100);

or in XML:

    <animator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueFrom="0"
        android:valueTo="100"
        android:valueType="intType"/>

In fact, maybe you need to animate something entirely different, like a Point, or a Rect, or some custom data structure of your own. The only types that the animation system understands by default are float and int, but that doesn't mean that you're stuck with those two types. You can use the Object version of the factory method, along with a TypeEvaluator (explained later), to tell the system how to calculate animated values for this unknown type:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(pointEvaluator, p0, p1);

There are other animation attributes that you can set on a ValueAnimator besides duration, including:

  • setStartDelay(long): This property controls how long the animation waits after a call to start() before it starts playing.
  • setRepeatCount(int) and setRepeatMode(int): These functions control how many times the animation repeats and whether it repeats in a loop or reverses direction each time.
  • setInterpolator(TimeInterpolator): This object controls the timing behavior of the animation. By default, animations accelerate into and decelerate out of the motion, but you can change that behavior by setting a different interpolator. This function acts just like the one of the same name in the previous Animation class; it's just that the type of the parameter (TimeInterpolator) is different from that of the previous version (Interpolator). But the TimeInterpolator interface is just a super-interface of the existing Interpolator interface in the android.view.animation package, so you can use any of the existing Interpolator implementations, like BounceInterpolator, as arguments to this function on ValueAnimator.
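
To make these attributes concrete, here is a minimal sketch (the particular values are arbitrary, chosen just for illustration) that configures a half-second animation that starts after a short delay and bounces back and forth indefinitely:

    ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
    anim.setDuration(500);
    anim.setStartDelay(100);
    // Repeat indefinitely, reversing direction on each repetition.
    anim.setRepeatCount(ValueAnimator.INFINITE);
    anim.setRepeatMode(ValueAnimator.REVERSE);
    // Swap the default accelerate/decelerate timing for a bouncing curve.
    anim.setInterpolator(new BounceInterpolator());
    anim.start();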

ObjectAnimator

ObjectAnimator is probably the main class that you will use in the new animation system. You use it to construct animations with the timing and values that ValueAnimator takes, and also give it a target object and property name to animate. It then quietly animates the value and sets those animated values on the specified object/property. For example, to fade out some object myObject, we could animate the alpha property like this:

    ObjectAnimator.ofFloat(myObject, "alpha", 0f).start();

Note, in this example, a special feature that you can use to make your animations more succinct; you can tell it the value to animate to, and it will use the current value of the property as the starting value. In this case, the animation will start from whatever value alpha has now and will end up at 0.

You could create the same thing in an XML resource as follows:

    <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueTo="0"
        android:propertyName="alpha"/>

Note, in the XML version, that you cannot set the target object; this must be done in code after the resource is loaded:

    ObjectAnimator anim = (ObjectAnimator) AnimatorInflater.loadAnimator(context, resID);
    anim.setTarget(myObject);
    anim.start();

There is a hidden assumption here about properties and getter/setter functions that you have to understand before using ObjectAnimator: you must have a public "set" function on your object that corresponds to the property name and takes the appropriate type. Also, if you use only one value, as in the example above, you are asking the animation system to derive the starting value from the object, so you must also have a public "get" function which returns the appropriate type. For example, the class of myObject in the code above must have these two public functions in order for the animation to succeed:

    public void setAlpha(float value);
    public float getAlpha();

So by passing in a target object of some type and the name of some property foo supposedly on that object, you are implicitly declaring a contract that that object has at least a setFoo() function and possibly also a getFoo() function, both of which handle the type used in the animation declaration. If all of this is true, then the animation will be able to find those setter/getter functions on the object and set values during the animation. If the functions do not exist, then the animation will fail at runtime, since it will be unable to locate the functions it needs. (Note to users of ProGuard, or other code-stripping utilities: If your setter/getter functions are not used anywhere else in the code, make sure you tell the utility to leave the functions there, because otherwise they may get stripped out. The binding during animation creation is very loose and these utilities have no way of knowing that these functions will be required at runtime.)
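
Here is a minimal sketch of what satisfying that contract looks like; MyObject and myObjectInstance are hypothetical names standing in for whatever you want to animate:

    public class MyObject {
        private float mFoo;

        public void setFoo(float value) {
            mFoo = value;
            // ... trigger whatever redrawing or recalculation depends on foo
        }

        public float getFoo() {
            return mFoo;
        }
    }

    // Because both functions exist, a single-value animation can derive
    // its start value from getFoo() and animate foo to 100:
    ObjectAnimator.ofFloat(myObjectInstance, "foo", 100f).start();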

View properties

The observant reader, or at least one who has not yet browsed on to some other article, may have pinpointed a flaw in the system thus far. If the new animation framework revolves around animating properties, and if animations will be used to animate, to a large extent, View objects, then how can they be used against the View class, which exposes none of its properties through set/get functions?

Excellent question: you get to advance to the bonus round and keep reading.

The way it works is that we added new properties to the View class in Honeycomb. The old animation system transformed and faded View objects by just changing the way that they were drawn. This was actually functionality handled in the container of each View, because the View itself had no transform properties to manipulate. But now it does: we've added several properties to View to make it possible to animate Views directly, allowing you to not only transform the way a View looks, but to transform its actual location and orientation. Here are the new properties in View that you can set, get and animate directly:

  • translationX and translationY: These properties control where the View is located as a delta from its left and top coordinates which are set by its layout container. You can run a move animation on a button by animating these, like this: ObjectAnimator.ofFloat(view, "translationX", 0f, 100f);.
  • rotation, rotationX, and rotationY: These properties control 2D rotation (rotation) and 3D rotation around the pivot point (rotationX and rotationY).
  • scaleX and scaleY: These properties control the 2D scaling of a View around its pivot point.
  • pivotX and pivotY: These properties control the location of the pivot point, around which the rotation and scaling transforms occur. By default, the pivot point is the center of the object.
  • x and y: These are simple utility properties to describe the final location of the View in its container, as a sum of the left/top and translationX/translationY values.
  • alpha: This is my personal favorite property. No longer is it necessary to fade out an object by changing a value on its transform (a process which just didn't seem right). Instead, there is an actual alpha value on the View itself. This value is 1 (opaque) by default, with a value of 0 representing full transparency (i.e., it won't be visible). To fade a View out, you can do this: ObjectAnimator.ofFloat(view, "alpha", 0f);

Note that all of the "properties" described above are actually available in the form of set/get functions (e.g., setRotation() and getRotation() for the rotation property). This makes them both possible to access from the animation system and (probably more importantly) likely to do the right thing when changed. That is, you don't want to scale an object and have it just sit there because the system didn't know that it needed to redraw the object in its new orientation; each of the setter functions takes care to run the appropriate invalidation step to make the rendering work correctly.
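
For example, here is a brief sketch that animates two of these View properties at once. It uses PropertyValuesHolder (mentioned again at the end of this article) to run a move and a fade from a single animator; view stands in for any View you hold a reference to:

    PropertyValuesHolder pvhX = PropertyValuesHolder.ofFloat("translationX", 100f);
    PropertyValuesHolder pvhAlpha = PropertyValuesHolder.ofFloat("alpha", 0f);
    // One ObjectAnimator animating two properties on the same View in parallel.
    ObjectAnimator.ofPropertyValuesHolder(view, pvhX, pvhAlpha).start();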

AnimatorSet

This class, like the previous AnimationSet, exists to make it easier to choreograph multiple animations. Suppose you want several animations running in tandem, like you want to fade out several views, then slide in other ones while fading them in. You could do all of this with separate animations, either by manually starting the animations at the right times or by setting startDelays on the various delayed animations. Or you could use AnimatorSet to do all of that for you. AnimatorSet allows you to create animations that play together (playTogether(Animator...)), animations that play one after the other (playSequentially(Animator...)), or you can organically build up a set of animations that play together, sequentially, or with specified delays by calling the functions in the AnimatorSet.Builder class: with(), before(), and after(). For example, to fade out v1 and then slide in v2 while fading it, you could do something like this:

    ObjectAnimator fadeOut = ObjectAnimator.ofFloat(v1, "alpha", 0f);
    ObjectAnimator mover = ObjectAnimator.ofFloat(v2, "translationX", -500f, 0f);
    ObjectAnimator fadeIn = ObjectAnimator.ofFloat(v2, "alpha", 0f, 1f);
    AnimatorSet animSet = new AnimatorSet();
    animSet.play(mover).with(fadeIn).after(fadeOut);
    animSet.start();

As with ValueAnimator and ObjectAnimator, you can create AnimatorSet objects in XML resources as well.
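
As a rough sketch of what such a resource might look like, here is a hypothetical animator file that fades its target out and then back in sequentially; as with the objectAnimator example earlier, the target must be set in code after inflating:

    <set xmlns:android="http://schemas.android.com/apk/res/android"
            android:ordering="sequentially">
        <objectAnimator
            android:propertyName="alpha"
            android:valueTo="0"
            android:valueType="floatType"/>
        <objectAnimator
            android:propertyName="alpha"
            android:valueTo="1"
            android:valueType="floatType"/>
    </set>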

TypeEvaluator

I wanted to talk about just one more thing, and then I'll leave you alone to explore the code and play with the API demos. The last class I wanted to mention is TypeEvaluator. You may not use this class directly for most of your animations, but you should know that it's there in case you need it. As I said earlier, the system knows how to animate float and int values, but otherwise it needs some help knowing how to interpolate between the values you give it. For example, if you want to animate between the Point values in one of the examples above, how is the system supposed to know how to interpolate the values between the start and end points? Here's the answer: you tell it how to interpolate, using TypeEvaluator.

TypeEvaluator is a simple interface that you implement and that the system calls on each frame to help it calculate an animated value. It takes a floating point value representing the current elapsed fraction of the animation, along with the start and end values that you supplied when you created the animation, and returns the interpolated value between those two values at that fraction. For example, here's the built-in FloatEvaluator class used to calculate animated floating point values:

    public class FloatEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            float startFloat = ((Number) startValue).floatValue();
            return startFloat + fraction * (((Number) endValue).floatValue() - startFloat);
        }
    }

But how does it work with a more complex type? For an example of that, here is an implementation of an evaluator for the Point class, from our earlier example:

    public class PointEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            Point startPoint = (Point) startValue;
            Point endPoint = (Point) endValue;
            // Cast back to int, since Point stores integer coordinates.
            return new Point((int) (startPoint.x + fraction * (endPoint.x - startPoint.x)),
                    (int) (startPoint.y + fraction * (endPoint.y - startPoint.y)));
        }
    }

Basically, this evaluator (and probably any evaluator you would write) is just doing a simple linear interpolation between two values. In this case, each 'value' consists of two sub-values, so it is linearly interpolating between each of those.

You tell the animation system to use your evaluator by either calling the setEvaluator() method on ValueAnimator or by supplying it as an argument in the Object version of the factory method. To continue our earlier example animating Point values, you could use our new PointEvaluator class above to complete that code:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(new PointEvaluator(), p0, p1);

One of the ways that you might use this interface is through the ArgbEvaluator implementation, which is included in the Android SDK. If you animate a color property, you will probably either use this evaluator automatically (which is the case if you create an animator in an XML resource and supply colors as values) or you can set it manually on the animator as described in the previous section.
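
For example, to animate a View's background color from red to blue, you might do something like this (a sketch; myView stands in for any View reference). Without ArgbEvaluator, the int values would be interpolated as plain numbers and sweep through arbitrary intermediate colors; the evaluator interpolates each color channel sensibly instead:

    ObjectAnimator colorAnim = ObjectAnimator.ofInt(myView, "backgroundColor",
            0xFFFF0000, 0xFF0000FF); // red to blue
    colorAnim.setEvaluator(new ArgbEvaluator());
    colorAnim.start();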

But Wait, There's More!

There's so much more to the new animation system that I haven't gotten to. There's the repetition functionality, the listeners for animation lifecycle events, the ability to supply multiple values to the factory methods to get animations between more than just two endpoints, the ability to use the Keyframe class to specify a more complex time/value sequence, the use of PropertyValuesHolder to specify multiple properties to animate in parallel, the LayoutTransition class for automating simple layout animations, and so many other things. But I really have to stop writing soon and get back to working on the code. I'll try to post more articles in the future on some of these items, but also keep an eye on my blog at graphics-geek.blogspot.com for upcoming articles, tutorials, and videos on this and related topics. Until then, check out the API demos, read the overview of Property Animation posted with the 3.0 SDK, dive into the code, and just play with it.

The first tablets running Android 3.0 (“Honeycomb”) will be hitting the streets on Thursday Feb. 24th, and we’ve just posted the full SDK release. We encourage you to test your applications on the new platform, using a tablet-size AVD.

Developers who’ve followed the Android Framework’s guidelines and best practices will find their apps work well on Android 3.0. The purpose of this post is to provide reminders of, and links to, those best practices.

Moving Toward Honeycomb

There’s a comprehensive discussion of how to work with the new release in Optimizing Apps for Android 3.0. The discussion includes the use of the emulator; most developers don’t have an Android tablet yet, and should use the emulator to test and update their apps for Honeycomb.

While your existing apps should work well, developers also have the option to improve their apps’ look and feel on Android 3.0 by using Honeycomb features; for example, see The Android 3.0 Fragments API. We’ll have more on that in this space, but in the meantime we recommend reading Strategies for Honeycomb and Backwards Compatibility for advice on adding Honeycomb polish to existing apps.

Specifying Features

There have been reports of apps not showing up in Android Market on tablets. Usually, this is because your application manifest has something like this:

<uses-feature android:name="android.hardware.telephony" />

Many of the tablet devices aren’t phones, and thus Android Market assumes the app is not compatible. See the documentation of <uses-feature>. However, such an app’s use of the telephony APIs might well be optional, in which case it should be available on tablets. There’s a discussion of how to accomplish this in Future-Proofing Your App and The Five Steps to Future Hardware Happiness.
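
If your app’s use of telephony is indeed optional, the fix is typically to mark the feature as not required, along these lines:

<uses-feature android:name="android.hardware.telephony" android:required="false" />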

Rotation

The new environment is different from what we’re used to in two respects. First, you can hold the devices with any of the four sides up and Honeycomb manages the rotation properly. In previous versions, often only two of the four orientations were supported, and there are apps out there that relied on this in ways that will break them on Honeycomb. If you want to stay out of rotation trouble, One Screen Turn Deserves Another covers the issues.

The second big difference doesn’t have anything to do with software; it’s that a lot of people are going to hold these things horizontal (in “landscape mode”) nearly all the time. We’ve seen a few apps that have a buggy assumption that they’re starting out in portrait mode, and others that lock certain screens into portrait or landscape but really shouldn’t.

A Note for Game Developers

A tablet can probably provide a better game experience for your users than any handset can. Bigger is better. It’s going to cost you a little more work than it costs developers of business apps, because quite likely you’ll want to rework your graphical assets for the big screen.

There’s another issue that’s important to game developers: Texture Formats. Read about this in Game Development for Android: A Quick Primer, in the section labeled “Step Three: Carefully Design the Best Game Ever”.

We've also added a convenient way to filter applications in Android Market based on the texture formats they support; see the documentation of <supports-gl-texture> for more details.
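
For instance, an app that ships only ETC1-compressed textures might declare that as follows in its manifest (a sketch; the full list of supported format names is in the documentation):

<supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />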

Happy Coding

Once you’ve held one of the new tablets in your hands, you’ll want to have your app not just running on it (which it probably already does), but expanding minds on the expanded screen. Have fun!


We are pleased to announce that the full SDK for Android 3.0 is now available to developers. The APIs are final, and you can now develop apps targeting this new platform and publish them to Android Market. The new API level is 11.

For an overview of the new user and developer features, see the Android 3.0 Platform Highlights.

Together with the new platform, we are releasing updates to our SDK Tools (r10) and ADT Plugin for Eclipse (10.0.0). Key features include:

  • UI Builder improvements in the ADT Plugin:
    • New Palette with categories and rendering previews. (details)
    • More accurate rendering of layouts, more faithfully reflecting how the layout will look on devices, including rendering of the status and title bars to show the screen space actually available to applications.
    • Selection-sensitive action bars to manipulate View properties.
    • Zoom improvements (fit to view, persistent scale, keyboard access) (details).
    • Improved support for <merge> layouts, as well as layouts with gesture overlays.
  • Traceview integration for easier profiling from ADT. (details)
  • Tools for using the Renderscript graphics engine: the SDK tools now compile .rs files into Java programming language files and native bytecode.

To get started developing or testing applications on Android 3.0, visit the Android Developers site for information about the Android 3.0 platform, the SDK Tools, and the ADT Plugin.

[This post is by R. Jason Sams, an Android engineer who specializes in graphics, performance tuning, and software architecture. —Tim Bray]

Renderscript is a key new Honeycomb feature which we haven’t yet discussed in much detail. I will address this in two parts. This post will be a quick overview of Renderscript. A more detailed technical post with a simple example will be provided later.

Renderscript is a new API targeted at high-performance 3D rendering and compute operations. The goal of Renderscript is to bring a lower-level, higher-performance API to Android developers. The target audience is the set of developers looking to maximize the performance of their applications who are comfortable working closer to the metal to achieve this. It provides the developer three primary tools: a simple 3D rendering API on top of hardware acceleration, a developer-friendly compute API similar to CUDA, and a familiar language in C99.

Renderscript has been used in the creation of the new visually-rich YouTube and Books apps. It is the API used in the live wallpapers shipping with the first Honeycomb tablets.

The performance gain comes from executing native code on the device. However, unlike the existing NDK, this solution is cross-platform. The development language for Renderscript is C99 with extensions, which is compiled to a device-agnostic intermediate format during the development process and placed into the application package. When the app is run, the scripts are compiled to machine code and optimized on the device. This eliminates the problem of needing to target a specific machine architecture during the development process.

Renderscript is not intended to replace the existing high-level rendering APIs or languages on the platform. The target use is for performance-critical code segments where the needs exceed the abilities of the existing APIs.

You may find it interesting that nothing above talks about running code on CPUs vs. GPUs. The reason is that this decision is made on the device at runtime. Simple scripts will be able to run on the GPU as compute workloads when capable hardware is available. More complex scripts will run on the CPU(s). The CPU also serves as a fallback to ensure that scripts are always able to run even if a suitable GPU or other accelerator is not present. This is intended to be transparent to the developer. In general, simpler scripts will be able to run in more places in the future. For now we simply leverage the CPU resources and distribute the work across as many CPUs as are present in the device.


The video above, captured through an Android tablet’s HDMI out, is an example of Renderscript compute at work. (There’s a high-def version on YouTube.) In the video we show a simple brute force physics simulation of around 900 particles. The compute script runs each frame and automatically takes advantage of both cores. Once the physics simulation is done, a second graphics script does the rendering. In the video we push one of the larger balls to show the interaction. Then we tilt the tablet and let gravity do a little work. This shows the power of the dual A9s in the new Honeycomb tablet.

Renderscript Graphics provides a new runtime for continuously rendering scenes. This runtime sits on top of HW acceleration and uses the developers’ scripts to provide custom functionality to the controlling Dalvik code. This controlling code will send commands to it at a coarse level such as “turn the page” or “move the list”. The commands the two sides speak are determined by the scripts the developer provides. In this way it’s fully customizable. Early examples of Renderscript graphics were the live wallpapers and the 3D application launcher that shipped with Eclair.

With Honeycomb, we have migrated from GL ES 1.1 to 2.0 as the renderer for Renderscript. With this, we have added programmable shader support, 3D model loading, and much more efficient allocation management. The new compiler, based on LLVM, is several times more efficient than acc was during the Eclair-through-Gingerbread time frame. The most important change is that the Renderscript API and tools are now public.

The screenshot above was taken from one of our internal test apps. The application implements a simple scene-graph which demonstrates recursive script-to-script calling. The Androids are loaded from an A3D file created in Maya and translated from a Collada file. A3D is an on-device file format for storing Renderscript objects.

Later we will follow up with more technical information and sample code.

Several weeks ago we released Android 2.3, which introduced several new forms of communication for developers and users. One of those, Near Field Communications (NFC), let developers get started creating a new class of contactless, proximity-based applications for users.

NFC is an emerging technology that promises exciting new ways to use mobile devices, including ticketing, advertising, ratings, and even data exchange with other devices. We know there’s strong interest in including these capabilities in many applications, so we’re happy to announce an update to Android 2.3 that adds new NFC capabilities for developers. Some of the features include:

  • A comprehensive NFC reader/writer API that lets apps read and write to almost any standard NFC tag in use today (a short sketch follows this list).
  • Advanced Intent dispatching that gives apps more control over how/when they are launched when an NFC tag comes into range.
  • Some limited support for peer-to-peer connection with other NFC devices.
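
To give a flavor of the reader/writer API, here is a minimal, hedged sketch of reading an NDEF message from a tag delivered to an activity (using classes from android.nfc and android.nfc.tech); it assumes the activity is already registered to receive the tag intent:

    @Override
    protected void onNewIntent(Intent intent) {
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        Ndef ndef = Ndef.get(tag); // null if the tag is not NDEF-compatible
        if (ndef == null) return;
        try {
            ndef.connect();
            NdefMessage msg = ndef.getNdefMessage();
            // ... inspect msg.getRecords()
        } catch (IOException e) {
            // tag left the field mid-read
        } catch (FormatException e) {
            // tag data was malformed
        } finally {
            try {
                ndef.close();
            } catch (IOException ignored) {
            }
        }
    }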

We hope you’ll find these new capabilities useful and we’re looking forward to seeing the innovative apps that you will create using them.

Android 2.3.3 is a small feature release that includes a new API level, 10.
Going forward, we expect most devices shipping with an Android 2.3 platform to run Android 2.3.3 (or later). For an overview of the API changes, see the Android 2.3.3 Version Notes. The Android 2.3.3 SDK platform for development and testing is available through the Android SDK Manager.

[This post is by Dianne Hackborn, a Software Engineer who sits very near the exact center of everything Android. — Tim Bray]

An important goal for Android 3.0 is to make it easier for developers to write applications that can scale across a variety of screen sizes, beyond the facilities already available in the platform:

  • Since the beginning, Android’s UI framework has been designed around the use of layout managers, allowing UIs to be described in a way that will adjust to the space available. A common example is a ListView whose height changes depending on the size of the screen, which varies a bit between QVGA, HVGA, and WVGA aspect ratios.

  • Android 1.6 introduced a new concept of screen densities, making it easy for apps to scale between different screen resolutions when the screen is about the same physical size. Developers immediately started using this facility when higher-resolution screens were introduced, first on Droid and then on other phones.

  • Android 1.6 also made screen sizes accessible to developers, classifying them into buckets: “small” for QVGA aspect ratios, “normal” for HVGA and WVGA aspect ratios, and “large” for larger screens. Developers can use the resource system to select between different layouts based on the screen size.

The combination of layout managers and resource selection based on screen size goes a long way towards helping developers build scalable UIs for the variety of Android devices we want to enable. As a result, many existing handset applications Just Work under Honeycomb on full-size tablets, without special compatibility modes, with no changes required. However, as we move up into tablet-oriented UIs with 10-inch screens, many applications also benefit from a more radical UI adjustment than resources can easily provide by themselves.

Introducing the Fragment

Android 3.0 further helps applications adjust their interfaces with a new class called Fragment. A Fragment is a self-contained component with its own UI and lifecycle; it can be reused in different parts of an application’s user interface depending on the desired UI flow for a particular device or screen.

In some ways you can think of a Fragment as a mini-Activity, though it can’t run independently but must be hosted within an actual Activity. In fact the introduction of the Fragment API gave us the opportunity to address many of the pain points we have seen developers hit with Activities, so in Android 3.0 the utility of Fragment extends far beyond just adjusting for different screens:

  • Embedded Activities via ActivityGroup were a nice idea, but have always been difficult to deal with since Activity is designed to be an independent self-contained component instead of closely interacting with other activities. The Fragment API is a much better solution for this, and should be considered as a replacement for embedded activities.

  • Retaining data across Activity instances could be accomplished through Activity.onRetainNonConfigurationInstance(), but this is fairly klunky and non-obvious. Fragment replaces that mechanism by allowing you to retain an entire Fragment instance just by setting a flag.

  • A specialization of Fragment called DialogFragment makes it easy to show a Dialog that is managed as part of the Activity lifecycle. This replaces Activity’s “managed dialog” APIs.

  • Another specialization of Fragment called ListFragment makes it easy to show a list of data. This is similar to the existing ListActivity (with a few more features), but should reduce the common question about how to show a list with some other data.

  • The information about all fragments currently attached to an activity is saved for you by the framework in the activity’s saved instance state and restored for you when it restarts. This can greatly reduce the amount of state save and restore code you need to write yourself.

  • The framework has built-in support for managing a back-stack of Fragment objects, making it easy to provide intra-activity Back button behavior that integrates with the existing activity back stack. This state is also saved and restored for you automatically.

Getting started

To whet your appetite, here is a simple but complete example of implementing multiple UI flows using fragments. We first are going to design a landscape layout, containing a list of items on the left and details of the selected item on the right. This is the layout we want to achieve:

The code for this activity is not interesting; it just calls setContentView() with the given layout:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="horizontal"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

    <fragment class="com.example.android.apis.app.TitlesFragment"
            android:id="@+id/titles" android:layout_weight="1"
            android:layout_width="0px"
            android:layout_height="match_parent" />

    <FrameLayout android:id="@+id/details" android:layout_weight="1"
            android:layout_width="0px"
            android:layout_height="match_parent" />

</LinearLayout>

You can see here our first new feature: the <fragment> tag allows you to automatically instantiate and install a Fragment subclass into your view hierarchy. The fragment being implemented here derives from ListFragment, displaying and managing a list of items the user can select. The implementation below takes care of displaying the details of an item either in-place or as a separate activity, depending on the UI layout. Note how changes to fragment state (the currently shown details fragment) are retained across configuration changes for you by the framework.

public static class TitlesFragment extends ListFragment {
    boolean mDualPane;
    int mCurCheckPosition = 0;

    @Override
    public void onActivityCreated(Bundle savedState) {
        super.onActivityCreated(savedState);

        // Populate list with our static array of titles.
        setListAdapter(new ArrayAdapter<String>(getActivity(),
                R.layout.simple_list_item_checkable_1,
                Shakespeare.TITLES));

        // Check to see if we have a frame in which to embed the details
        // fragment directly in the containing UI.
        View detailsFrame = getActivity().findViewById(R.id.details);
        mDualPane = detailsFrame != null
                && detailsFrame.getVisibility() == View.VISIBLE;

        if (savedState != null) {
            // Restore last state for checked position.
            mCurCheckPosition = savedState.getInt("curChoice", 0);
        }

        if (mDualPane) {
            // In dual-pane mode, list view highlights selected item.
            getListView().setChoiceMode(ListView.CHOICE_MODE_SINGLE);
            // Make sure our UI is in the correct state.
            showDetails(mCurCheckPosition);
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putInt("curChoice", mCurCheckPosition);
    }

    @Override
    public void onListItemClick(ListView l, View v, int pos, long id) {
        showDetails(pos);
    }

    /**
     * Helper function to show the details of a selected item, either by
     * displaying a fragment in-place in the current UI, or starting a
     * whole new activity in which it is displayed.
     */
    void showDetails(int index) {
        mCurCheckPosition = index;

        if (mDualPane) {
            // We can display everything in-place with fragments.
            // Have the list highlight this item and show the data.
            getListView().setItemChecked(index, true);

            // Check what fragment is shown, replace if needed.
            DetailsFragment details = (DetailsFragment)
                    getFragmentManager().findFragmentById(R.id.details);
            if (details == null || details.getShownIndex() != index) {
                // Make new fragment to show this selection.
                details = DetailsFragment.newInstance(index);

                // Execute a transaction, replacing any existing
                // fragment with this one inside the frame.
                FragmentTransaction ft
                        = getFragmentManager().beginTransaction();
                ft.replace(R.id.details, details);
                ft.setTransition(
                        FragmentTransaction.TRANSIT_FRAGMENT_FADE);
                ft.commit();
            }

        } else {
            // Otherwise we need to launch a new activity to display
            // the dialog fragment with selected text.
            Intent intent = new Intent();
            intent.setClass(getActivity(), DetailsActivity.class);
            intent.putExtra("index", index);
            startActivity(intent);
        }
    }
}

For this first screen we need an implementation of DetailsFragment, which simply shows a TextView containing the text of the currently selected item.

public static class DetailsFragment extends Fragment {
    /**
     * Create a new instance of DetailsFragment, initialized to
     * show the text at 'index'.
     */
    public static DetailsFragment newInstance(int index) {
        DetailsFragment f = new DetailsFragment();

        // Supply index input as an argument.
        Bundle args = new Bundle();
        args.putInt("index", index);
        f.setArguments(args);

        return f;
    }

    public int getShownIndex() {
        return getArguments().getInt("index", 0);
    }

    @Override
    public View onCreateView(LayoutInflater inflater,
            ViewGroup container, Bundle savedInstanceState) {
        if (container == null) {
            // Currently in a layout without a container, so no
            // reason to create our view.
            return null;
        }

        ScrollView scroller = new ScrollView(getActivity());
        TextView text = new TextView(getActivity());
        int padding = (int) TypedValue.applyDimension(
                TypedValue.COMPLEX_UNIT_DIP,
                4, getActivity().getResources().getDisplayMetrics());
        text.setPadding(padding, padding, padding, padding);
        scroller.addView(text);
        text.setText(Shakespeare.DIALOGUE[getShownIndex()]);
        return scroller;
    }
}

It is now time to add another UI flow to our application. When in portrait orientation, there is not enough room to display the two fragments side-by-side, so instead we want to show only the list like this:

With the code shown so far, all we need to do here is introduce a new layout variation for portrait screens like so:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
    <fragment class="com.example.android.apis.app.TitlesFragment"
            android:id="@+id/titles"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
</FrameLayout>

The TitlesFragment will notice that it doesn’t have a container in which to show its details, so it will show only its list. When you tap on an item in the list, we then need to go to a separate activity in which the details are shown.

With the DetailsFragment already implemented, the implementation of the new activity is very simple because it can reuse the same DetailsFragment from above:

public static class DetailsActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        if (getResources().getConfiguration().orientation
                == Configuration.ORIENTATION_LANDSCAPE) {
            // If the screen is now in landscape mode, we can show the
            // dialog in-line so we don't need this activity.
            finish();
            return;
        }

        if (savedInstanceState == null) {
            // During initial setup, plug in the details fragment.
            DetailsFragment details = new DetailsFragment();
            details.setArguments(getIntent().getExtras());
            getSupportFragmentManager().beginTransaction().add(
                    android.R.id.content, details).commit();
        }
    }
}

Put that all together, and we have a complete working example of an application that fairly radically changes its UI flow based on the screen it is running on, and can even adjust it on demand as the screen configuration changes.

This illustrates just one way fragments can be used to adjust your UI. Depending on your application design, you may prefer other approaches. For example, you could put your entire application in one activity in which you change the fragment structure as its state changes; the fragment back stack can come in handy in this case.
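
As a minimal sketch of that single-activity pattern (R.id.content_frame and NextFragment are hypothetical names), each fragment swap can be pushed onto the back stack:

FragmentTransaction ft = getFragmentManager().beginTransaction();
ft.replace(R.id.content_frame, new NextFragment());
// Pressing Back will pop this transaction and restore the previous fragment.
ft.addToBackStack(null);
ft.commit();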

More information on the Fragment and FragmentManager APIs can be found in the Android 3.0 SDK documentation. Also be sure to look at the ApiDemos app under the Resources tab, which has a variety of Fragment demos covering their use for alternative UI flow, dialogs, lists, populating menus, retaining across activity instances, the back stack, and more.

Fragmentation for all!

For developers starting work on tablet-oriented applications designed for Android 3.0, the new Fragment API is useful for many design situations that arise from the larger screen. Reasonable use of fragments should also make it easier to adjust the resulting application’s UI to new devices in the future as needed -- for phones, TVs, or wherever Android appears.

However, the immediate need for many developers today is probably to design applications that they can provide for existing phones while also presenting an improved user interface on tablets. With Fragment only available in Android 3.0, its shorter-term utility is greatly diminished.

To address this, we plan to have the same fragment APIs (and the new LoaderManager as well) described here available as a static library for use with older versions of Android; we’re trying to go right back to 1.6. In fact, if you compare the code examples here to those in the Android 3.0 SDK, they are slightly different: this code is from an application using an early version of the static library fragment classes which is running, as you can see on the screenshots, on Android 2.3. Our goal is to make these APIs nearly identical, so you can start using them now and, at whatever point in the future you switch to Android 3.0 as your minimum version, move to the platform’s native implementation with few changes in your app.

We don’t have a firm date for when this library will be available, but it should be relatively soon. In the meantime, you can start developing with fragments on Android 3.0 to see how they work, and most of that effort should be transferable.

[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]


Following on last week’s announcement of the Android 3.0 Preview SDK, I’d like to share some more good news with you about three important new features on Android Market.

Android Market on the Web


Starting today, we have extended the Android Market client from mobile devices to the desktop. Anyone can now easily find and share applications from their favorite browser. Once users select an application they want, it will automatically be downloaded to their Android-powered devices over-the-air.

Android Market on the Web dramatically expands the discoverability of applications through a rich browsing experience, suggestion-guided searching, deep linking, social sharing, and other merchandising features.

We are releasing the initial version of Android Market on the Web in English and will be extending it to other languages in the weeks ahead.

If you have applications published on Android Market, we encourage you to visit the site and review how they are presented. If you need additional information about what assets you should provide, please visit Android Market Help Center.

You can access Android Market on the Web at:

http://market.android.com/

Buyer’s Currency

Android Market lets you sell applications to users in 32 buyer countries around the world. Today we’re introducing Buyer’s Currency to give you more control over how you price your products across those countries. This feature lets you price your applications differently in each market and improves the purchase experience for buyers by showing prices in their home currencies.

We’ll be rolling out Buyer’s Currency in stages, starting with developers in the U.S. and reaching developers in other countries shortly after. We anticipate it will take approximately four months for us to complete this process.

We encourage you to watch for the appearance of new Buyer’s Currency options in the Android Market publishing console and set prices as soon as possible.

In-app Billing

After months of hard work by the Android Market team, I am extremely pleased to announce the arrival of In-app Billing on Android Market. This new service gives developers more ways to monetize their applications through new billing models including try-and-buy, virtual goods, upgrades, and more.

The In-app Billing service manages billing transactions between apps and users, providing a consistent purchasing experience with familiar forms of payment across all apps. At the same time, it gives you full control over how your digital goods are purchased and tracked. You can let Android Market manage and track the purchases for you or you can integrate with your own back-end service to verify and track purchases in the way that's best for your app.

We’ll be launching In-app Billing in stages. Beginning today, we are providing detailed documentation and a sample application to help you get familiar with the service. Over the next few weeks we’ll be rolling out updates to the Android Market client that will enable you to test against the In-app Billing service. Before the end of this quarter, the service will be live for users, to enable you to start monetizing your applications with this new capability. For complete information about the rollout, see the release information in the In-app Billing documentation.

Helping developers merchandise and monetize their products is a top priority for the Android Market team. We will continue to work hard to make it the best marketplace for you to distribute your products. For now, we hope you’ll check out these new features to help you better deliver your products through Android Market.

Android 3.0 (Honeycomb) is a new version of the Android platform that is designed from the ground up for devices with larger screen sizes, particularly tablets. It introduces a new “holographic” UI theme and an interaction model that builds on the things people love about Android — multitasking, notifications, widgets, and others — and adds many new features as well.

Besides the user-facing features it offers, Android 3.0 is also specifically designed to give developers the tools and capabilities they need to create great applications for tablets and similar devices, together with the flexibility to adapt existing apps to the new UI while maintaining compatibility with earlier platform versions and other form-factors.

Today, we are releasing a preview of the Android 3.0 SDK, with non-final APIs and system image, to allow developers to start testing their existing applications on the tablet form-factor and begin getting familiar with the new UI patterns, APIs, and capabilities that will be available in Android 3.0.

Here are some of the highlights:

UI framework for creating great apps for larger screen devices: Developers can use new UI components, new themes, richer widgets and notifications, drag and drop, and other new features to create rich and engaging apps for users on larger screen devices.

High-performance 2D and 3D graphics: A new property-based animation framework lets developers add great visual effects to their apps (see the sketch after these highlights). A built-in GL renderer lets developers request hardware acceleration of common 2D rendering operations in their apps, across the entire app or only in specific activities or views. For adding rich 3D scenes, developers can take advantage of a new 3D graphics engine called Renderscript.

Support for multicore processor architectures: Android 3.0 is optimized to run on either single- or dual-core processors, so that applications run with the best possible performance.

Rich multimedia: New multimedia features such as HTTP Live streaming support, a pluggable DRM framework, and easy media file transfer through MTP/PTP, give developers new ways to bring rich content to users.

New types of connectivity: New APIs for Bluetooth A2DP and HSP let applications offer audio streaming and headset control. Support for Bluetooth insecure socket connection lets applications connect to simple devices that may not have a user interface.

Enhancements for enterprise: New administrative policies, such as for encrypted storage and password expiration, help enterprise administrators manage devices more effectively.
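For a taste of the property-based animation framework mentioned above, here is a minimal sketch that fades a view out while sliding it aside. The class name, view parameter, and duration are arbitrary choices for illustration:

import android.animation.AnimatorSet;
import android.animation.ObjectAnimator;
import android.view.View;

public class DismissAnimation {
    /** Fades a view out while sliding it to the right, using the new framework. */
    public static void dismiss(View card) {
        ObjectAnimator fade = ObjectAnimator.ofFloat(card, "alpha", 1f, 0f);
        ObjectAnimator slide = ObjectAnimator.ofFloat(card, "translationX", 0f, 200f);
        AnimatorSet set = new AnimatorSet();
        set.playTogether(fade, slide);
        set.setDuration(300); // milliseconds
        set.start();
    }
}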

For a complete overview of the new user and developer features, see the Android 3.0 Platform Highlights.

Additionally, we are releasing updates to our SDK Tools (r9), NDK (r5b), and ADT Plugin for Eclipse (9.0.0). Key features include:

  • UI Builder improvements in the ADT Plugin:
    • Improved drag-and-drop in the editor, with better support for included layouts.
    • In-editor preview of objects animated with the new animation framework.
    • Visualization of the UI based on any version of the platform, independent of the project target.
    • Improved rendering, with better support for custom views.

To find out how to get started developing or testing applications using the Android 3.0 Preview SDK, see the Preview SDK Introduction. Details about the changes in the latest versions of the tools are available on the SDK Tools, the ADT Plugin, and NDK pages on the site.

Note that applications developed with the Android 3.0 Platform Preview cannot be published on Android Market. We’ll be releasing a final SDK in the weeks ahead that you can use to build and publish applications for Android 3.0.

[This post is by Bruno Albuquerque, an engineer who works in Google’s office in Belo Horizonte, Brazil. —Tim Bray]

One of the things that I find most interesting and powerful about Android is the concept of broadcasts and their use through the BroadcastReceiver class (from now on, we will call implementations of this class “receivers”). As this document is about a very specific usage scenario for broadcasts, I will not go into detail about how they work in general, so I recommend reading the documentation about them in the Android developer site. For the purpose of this document, it is enough to know that broadcasts are generated whenever something interesting happens in the system (connectivity changes, for example) and you can register to be notified whenever one (or more) of those broadcasts are generated.

While developing Right Number, I noticed that some developers who create receivers for ordered broadcasts do not seem to be fully aware of the correct way to do it. This suggests that the documentation could be improved; in any case, things often still work (although more by chance than anything else).

Non-ordered vs. Ordered Broadcasts

In non-ordered mode, broadcasts are sent to all interested receivers “at the same time”. This basically means that one receiver cannot interfere in any way with what other receivers do, nor can it prevent other receivers from being executed. One example of such a broadcast is ACTION_BATTERY_LOW.

In ordered mode, broadcasts are sent to each receiver in order (controlled by the android:priority attribute of the intent-filter element in the manifest file that is related to your receiver), and one receiver is able to abort the broadcast so that receivers with a lower priority will not receive it (and thus never execute). An example of this type of broadcast (and the one we will be discussing in this document) is ACTION_NEW_OUTGOING_CALL.
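While this post's examples declare the receiver in the manifest, the same priority can be set in code when registering dynamically. A minimal sketch (the ReceiverSetup class and the priority value of 100 are arbitrary choices for illustration):

import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

public class ReceiverSetup {
    /** Registers an outgoing-call receiver; higher priorities run earlier. */
    public static void register(Context context, CallReceiver receiver) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_NEW_OUTGOING_CALL);
        filter.setPriority(100); // arbitrary positive priority for illustration
        // The app also needs the PROCESS_OUTGOING_CALLS permission in its manifest.
        context.registerReceiver(receiver, filter);
    }
}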

Ordered Broadcast Usage

As mentioned earlier in this document, the ACTION_NEW_OUTGOING_CALL broadcast is an ordered one. This broadcast is sent whenever the user tries to initiate a phone call. There are several reasons one might want to be notified about this, but we will focus on only two:

  • To be able to reject an outgoing call;

  • To be able to rewrite the number before it is dialed.

In the first case, an app may want to control which numbers can be dialed or at what time of day numbers can be dialed. Right Number does what is described in the second case, so it can be sure that a number is always dialed correctly no matter where in the world you are.

A naive BroadcastReceiver implementation would be something like this (note that you should associate this receiver with the ACTION_NEW_OUTGOING_CALL broadcast in the manifest file for your application):

public class CallReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // The original phone number is in the EXTRA_PHONE_NUMBER intent extra.
        String phoneNumber = intent.getStringExtra(Intent.EXTRA_PHONE_NUMBER);

        if (shouldCancel(phoneNumber)) {
            // Cancel the call.
            setResultData(null);
        } else {
            // Use the rewritten number as the result data.
            setResultData(reformatNumber(phoneNumber));
        }
    }
}

The receiver either cancels the broadcast (and the call) or reformats the number to be dialed. If this is the only receiver that is active for the ACTION_NEW_OUTGOING_CALL broadcast, this will work exactly as expected. The problem arises when, for example, a receiver with a higher priority runs before the one above and also changes the number: instead of looking at the previous receivers’ results, we are just using the original (unmodified) number!

Doing It Right

With the above in mind, here is how the code should have been written in the first place:

public class CallReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Try to read the phone number left by previous receivers.
        String phoneNumber = getResultData();

        if (phoneNumber == null) {
            // We could not find any previous data. Use the original phone number.
            phoneNumber = intent.getStringExtra(Intent.EXTRA_PHONE_NUMBER);
        }

        if (shouldCancel(phoneNumber)) {
            // Cancel the call.
            setResultData(null);
        } else {
            // Use the rewritten number as the result data.
            setResultData(reformatNumber(phoneNumber));
        }
    }
}

We first check whether there is any previous result data (which would have been set by a receiver with a higher priority), and only if we cannot find any do we fall back to the phone number in the EXTRA_PHONE_NUMBER intent extra.
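The result-data mechanics are easiest to see from the sending side. Any app can send its own ordered broadcast, seed the initial result data, and collect whatever the chain of receivers left behind. A minimal sketch, using a hypothetical action string:

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class OrderedBroadcastDemo {
    /** Sends an ordered broadcast and reads the final result data. */
    public static void send(Context context) {
        Intent intent = new Intent("com.example.ACTION_DEMO"); // hypothetical action
        context.sendOrderedBroadcast(intent,
                null, // no receiver permission required
                new BroadcastReceiver() {
                    @Override
                    public void onReceive(Context ctx, Intent i) {
                        // Runs last, after every registered receiver has had its turn.
                        String finalData = getResultData();
                    }
                },
                null, // deliver on the main thread
                Activity.RESULT_OK, // initial result code
                "initial data", // initial result data, visible to the first receiver
                null); // no initial extras
    }
}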

How Big Is The Problem?

We have actually observed phones shipping with a priority-0 receiver for the NEW_OUTGOING_CALL intent installed out of the box (this receiver is called last, after all others) that completely ignores previous result data. In effect, such a receiver disables any useful processing of ACTION_NEW_OUTGOING_CALL, other than canceling the call (which still works). The only workaround is to also run your receiver at priority 0, which works due to particularities of running two receivers at the same priority; but by doing that, you break one of the few explicit rules for processing outgoing calls:

“For consistency, any receiver whose purpose is to prohibit phone calls should have a priority of 0, to ensure it will see the final phone number to be dialed. Any receiver whose purpose is to rewrite phone numbers to be called should have a positive priority. Negative priorities are reserved for the system for this broadcast; using them may cause problems.”

Conclusion

There are programs out there that do not play well with others. If you know the developers of such programs, urge them to read this post and fix their code. This will make Android better for both developers and users.

Notes About Priorities

  • For the NEW_OUTGOING_CALL intent, priority 0 should only be used by receivers that want to reject calls. This ensures the receiver sees the changes made by all other receivers before deciding whether to reject the call.

  • Receivers that have the same priority will also be executed in order, but the order in this case is undefined.

  • Use non-negative priorities only. Negative ones are valid but will result in weird behavior most of the time.

[The first part of this post is by Reto Meier. —Tim Bray]

From c-base in Berlin to the Ice Bar in Stockholm, from four courses of pasta in Florence to beer and pretzels in Munich, and from balalaikas in Moscow to metal cage mind puzzles in Prague - one common theme was the enthusiasm and quality of the Android developers in attendance. You guys are epic.

For those of you who couldn't join us, we're in the middle of posting all the sessions we presented during this most recent world tour. Stand by for links.

Droidcon UK

We kicked off our conference season at Droidcon UK, an Android extravaganza consisting of a bar camp on day 1 and formal sessions on day 2. It was the perfect place for the Android developer relations team to get together and kick off three straight weeks of Google Developer Days, GTUG Hackathons, and Android Developer Labs.

Android Developer Labs

The first of our Android Developer Labs was a return to Berlin: home to c-base (a place we never got tired of) and the Beuth Hochschule für Technik Berlin. This all-day event cost me my voice, but attracted nearly 300 developers (including six teams who battled it out to win a Lego Mindstorms kit for the best app built on the day).

Next stop was Florence, which played host to our first Italian ADL after some fierce campaigning by local Android developers. 160 developers from all over Italy joined us in beautiful Florence, where the Firenze GTUG could not have been more welcoming. An afternoon spent with eager developers followed by an evening of real Italian pasta - what's not to love?

From the warmth of Florence to the snow of Stockholm, where we joined the Stockholm GTUG for a special Android-themed event at Bwin Games. After a brief introduction we split into six breakout sessions before the attendees got down to some serious hacking to decide who got to bring home the Mindstorms kit.

Google Developer Days

The Google Developer Days are always a highlight on my conference schedule, and this year's events were no exception. It's a unique opportunity for us to meet with a huge number of talented developers - over 3,000 in Europe alone. Each event featured a dedicated Android track with six sessions designed to help Android developers improve their skills.

It was our first time in Munich, where we played host to 1,200 developers from all over Germany. If there was any doubt we'd come to the right place, the hosting of the Blinkendroid Guinness World Record during the after-party soon dispelled it.

Moscow and Prague are always incredible places to visit. The enthusiasm of the nearly 2,500 people who attended is the reason we do events like these. You can watch the video for every Android session from the Prague event and check out the slides for each of the Russian sessions too.

GTUG Hackathons

With everyone in town for the GDDs, we wanted to make the most of it. Working closely with the local GTUGs, the Android and Chrome teams held all-day hackathon bootcamps in each city the day before the big event.

It was a smaller crowd in Moscow, but that just made the competition all the more fierce. So much so that we had to create a new Android app just for the purpose of measuring the relative volume of applause in order to choose a winner.

If a picture is worth a thousand words, this video of the Prague Hackathon in 85 seconds will describe the event far better than I ever could. What the video doesn't show is that the winners of "best app of the day" in Prague had never developed for Android before.

In each city we were blown away by the enthusiasm and skill on display. With so many talented, passionate developers working on Android, it's hard not to be excited by what we'll find on the Android Market next. In the meantime, keep coding; we hope to be in your part of the world soon.

On To South America

[Thanks, Reto. This is Tim again. The South American leg actually happened before the Eurotour, but Reto got his writing done first, so I'll follow up here.]

We did more or less the same set of things in South America immediately before Reto’s posse fanned out across Europe. Our events were in São Paulo, Buenos Aires, and Santiago; we were trying to teach people about Android and I hope we succeeded. On the other hand, I know that we learned lots of things. Here are a few of them:

  • Wherever we went, we saw strange (to us) new Android devices. Here’s a picture of a Brazilian flavor of the Samsung Galaxy S, which comes with a fold-out antenna and can get digital TV off the air. If you’re inside you might need to be near a window, but the picture quality is fantastic.

  • There’s a conventional wisdom about putting on free events: Of the people who register, only a certain percentage will show up. When it comes to Android events in South America, the certain-percentage part is wrong. As a result, we dealt with overcrowded rooms and overflow arrangements all over the place. I suppose this is a nice problem to have, but we still feel sorry about some of the people who ended up being overcrowded and overflowed.

  • Brazilians laugh at themselves, saying they’re always late. (Mind you, I’ve heard Indians and Jews and Irish people poke the same fun at themselves, so I suspect lateness may be part of the human condition). Anyhow, Brazilians are not late for Android events; when we showed up at the venue in the grey light of dawn to start setting up, they were already waiting outside.

  • I enjoyed doing the hands-on Android-101 workshops (I’ve included a picture of one), but I’m not sure Googlers need to be doing any more of those. Wherever you go, there’s now a community of savvy developers who can walk each other through the finer points of getting the SDK installed and working and getting “Hello World” running.

  • Brazil and Argentina and Chile aren’t really like each other. But each has its own scruffy-open-source-geek contingent that likes to get together, and Android events are a good opportunity. I felt totally at home drinking coffee with these people and talking about programming languages and screen densities and so on, even when we had to struggle our way across language barriers.

The people were so, so warm-hearted and welcoming and not shy in the slightest, and I can’t think about our tour without smiling. A big thank-you to all the South-American geeks and hackers and startup cowboys; we owe you a return visit.

[This post is by Chris Pruett, an outward-facing Androider who focuses on the world of games. —Tim Bray]

We released the first version of the Native Development Kit, a development toolchain for building shared libraries in C or C++ that can be used in conjunction with Android applications written in the Java programming language, way back in July of 2009. Since that initial release we’ve steadily improved support for native code; key features such as OpenGL ES support, debugging capabilities, multiple ABI support, and access to bitmaps in native code have arrived with each NDK revision. The result has been pretty awesome: we’ve seen huge growth in certain categories of performance-critical applications, particularly 3D games.

These types of applications are often impractical to implement in Dalvik, due to execution-speed requirements or, more commonly, because they are based on engines already developed in C or C++. Early on we noted a strong relationship between the awesomeness of the NDK and the awesomeness of the applications that it made possible; at the limit of this function is obviously infinite awesomeness (see graph, right).

With the latest version of the NDK we intend to further increase the awesomeness of your applications, this time by a pretty big margin. With NDK r5, we’re introducing new APIs that will allow you to do more from native code. In fact, with these new tools, applications targeted at Gingerbread or later can be implemented entirely in C++; you can now build an entire Android application without writing a single line of Java.

Of course, access to the regular Android API still requires Dalvik, and the VM is still present in native applications, operating behind the scenes. Should you need to do more than the NDK interfaces provide, you can always invoke Dalvik methods via JNI. But if you prefer to work exclusively in C++, the NDK r5 will let you build a main loop like this:

void android_main(struct android_app* state) {
    // Make sure glue isn't stripped.
    app_dummy();

    // Loop waiting for stuff to do.
    while (1) {
        // Read all pending events.
        int ident;
        int events;
        struct android_poll_source* source;

        // Read events and draw a frame of animation.
        if ((ident = ALooper_pollAll(0, NULL, &events,
                (void**)&source)) >= 0) {
            // Process this event.
            if (source != NULL) {
                source->process(state, source);
            }
        }
        // Draw a frame of animation.
        bringTheAwesome();
    }
}

(For a fully working example, see the native-activity sample in NDK/samples/native-activity and the NativeActivity documentation.)

In addition to fully native applications, the latest NDK lets you play sound from native code (via the OpenSL ES API, an open standard managed by Khronos, which also oversees OpenGL ES), handle common application events (life cycle, touch and key events, as well as sensors), control windows directly (including direct access to the window’s pixel buffer), manage EGL contexts, and read assets directly out of APK files. The latest NDK also comes with a prebuilt version of STLport, making it easier to bring STL-reliant applications to Android. Finally, r5 adds backwards-compatible support for RTTI, C++ exceptions, wchar_t, and includes improved debugging tools. Clearly, this release represents a large positive ∆awesome.

We worked hard to increase the utility of the NDK for this release because you guys, the developers who are actually out there making the awesome applications, told us you needed it. This release is specifically designed to help game developers continue to rock; with Gingerbread and the NDK r5, it should now be very easy to bring games written entirely in C and C++ to Android with minimal modification. We expect the APIs exposed by r5 to also benefit a wide range of media applications; access to a native sound buffer and the ability to write directly to window surfaces makes it much easier for applications implementing their own audio and video codecs to achieve maximum performance. In short, this release addresses many of the requests we’ve received over the last year since the first version of the NDK was announced.

We think this is pretty awesome and hope you do too.
