
Monday, 18 November 2013

ABOUT THE NEW ART FEATURE (DALVIK REPLACEMENT) ON KitKat

About ART, the Dalvik-replacement runtime that comes with Android 4.4 KitKat

It's fair to say that Android went through some chaotic years in the beginning. The pace of development was frantic as the operating system grew at an unprecedented rate. An as-yet undetermined future led to decisions that were made to conform to existing hardware and architectures, the available development tools, and the basic need to ship working code on tight deadlines. Now that the OS has matured, the Android team has been giving more attention to some of the components that haven't aged quite as well. One of the oldest pieces of the Android puzzle is the Dalvik runtime, the software responsible for making most of your apps run. That's why Google's developers have been working for over 2 years on ART, a replacement for Dalvik that promises faster and more efficient execution, better battery life, and a more fluid experience.

What Is ART?
ART, which stands for Android Runtime, handles app execution in a fundamentally different way from Dalvik. The current runtime relies on an interpreter, helped along by a Just-In-Time (JIT) compiler, to execute bytecode, a generic version of the original application code. In a manner of speaking, apps are only partially compiled by developers, and the resulting code must go through the runtime on a user's device each and every time it is run. The process involves a lot of overhead and isn't particularly efficient, but the mechanism makes it easy for apps to run on a variety of hardware and architectures. ART is set to change this process by pre-compiling that bytecode into machine language when apps are first installed, turning them into truly native apps. This process is called Ahead-Of-Time (AOT) compilation. By removing the need to interpret or JIT-compile code every time an app runs, startup times can be cut down immensely and ongoing execution becomes faster as well.
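To make the difference concrete, here's a tiny, self-contained Java sketch (my own illustration, not anything taken from ART or Dalvik) that times repeated passes over the same hot method. Under a JIT-based runtime the first pass or two pay the warm-up cost of interpreting and then compiling the loop, which is exactly the overhead AOT compilation moves to install time:

public class WarmupDemo {
    static double hotLoop(int iterations) {
        double acc = 0;
        for (int i = 1; i <= iterations; i++) {
            acc += Math.sqrt(i) / i;
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int pass = 1; pass <= 5; pass++) {
            long start = System.nanoTime();
            hotLoop(2000000);
            long micros = (System.nanoTime() - start) / 1000;
            // Under Dalvik the first pass is typically the slowest; with AOT-compiled
            // code the passes should land much closer together.
            System.out.println("pass " + pass + ": " + micros + " microseconds");
        }
    }
}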

At present, Google is treating ART as an experimental preview, something for developers and hardware partners to try out. Google's own introduction of ART clearly warns that changing the default runtime can risk breaking apps and causing system instability. ART may not be completely ready for prime time, but the Android team obviously feels like it should see the light of day. If you're interested in trying out ART for yourself, go to Settings -> Developer options -> Select runtime. Activating it requires a restart to switch from libdvm.so to libart.so, but be prepared to wait about 10 minutes on the first boot-up while your installed apps are prepared for the new runtime. Warning: Do not try this with the Paranoid Android (or other AOSP) build right now. There is an incompatibility with the current gapps package that causes rapid crashing, making the interface unusable.
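Once you've rebooted, you can double-check which runtime your apps are actually executing under. Google's ART documentation notes that ART reports a java.vm.version of 2.0.0 or higher, while Dalvik reports 1.x, so a small check like this (the class and method names are my own) works from inside any app:

public class RuntimeCheck {
    // ART reports "java.vm.version" as 2.0.0 or higher; Dalvik reports 1.x
    public static boolean isRunningOnArt() {
        String vmVersion = System.getProperty("java.vm.version");
        if (vmVersion == null) {
            return false;
        }
        int dot = vmVersion.indexOf('.');
        if (dot <= 0) {
            return false;
        }
        try {
            return Integer.parseInt(vmVersion.substring(0, dot)) >= 2;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isRunningOnArt() ? "Running on ART" : "Running on Dalvik");
    }
}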

How Much Better Is It?
For now, the potential gains in efficiency are difficult to gauge: the version of ART currently shipping with KitKat hasn't been extensively optimized, so it isn't representative of what will eventually be possible. Thus far, estimates and some benchmarks suggest that the new runtime is already capable of cutting execution time in half for most applications. This means that long-running, processor-intensive tasks will be able to finish faster, allowing the system to idle more often and for longer. Regular applications will also benefit from smoother animations and more instantaneous responses to touch and other sensor data. Additionally, now that the typical device contains a quad-core (or greater) processor, many situations will call for activating fewer cores, and it may be possible to make even better use of the lower-powered cores in ARM's big.LITTLE architecture. How much this improves battery life and performance will vary quite a bit based on usage scenarios and hardware, but the results could be substantial.

What Are The Compromises?
There are a couple of drawbacks to using AOT compilation, but they are negligible compared to the advantages. To begin with, fully compiled machine code will usually consume more storage space than bytecode, because each bytecode instruction typically expands into several machine-code instructions. Of course, the increase in size isn't particularly significant, usually no more than 10%-20%. That might sound like a lot when APKs can get pretty large, but the executable code only makes up a fraction of the size in most apps. For example, the latest Google+ APK with the new video editing features is 28.3 MB, but the code is only 6.9 MB - even a 20% increase there works out to roughly 1.4 MB of extra storage, about 5% of the app's total footprint. The other notable drawback comes in the form of a longer install time for apps - the side effect of performing the AOT compilation. How much longer? Well, it depends on the app; with small utilities you probably won't even notice, but more complex apps like Facebook and Google+ are going to keep you waiting. A few apps at a time probably won't bother you, but converting more than 100 apps when you first switch to ART is a serious test of patience. This isn't entirely bad, as it allows the AOT compiler to work a little harder and find more optimizations than the JIT compiler ever had the opportunity to look for. All in all, these are sacrifices I'm perfectly happy to make if they bring a more fluid experience and increased battery life.
Overall, ART sounds like a pretty amazing project, one that I hope to see as a regular part of Android sooner rather than later. The improvements are likely to be pretty amazing while the drawbacks should be virtually undetectable. There is a lot more than I could cover in just this post alone, including details on how it works, benchmarks, and a lot more. I'll be diving quite a bit deeper into ART over the next few days, so keep an eye out!





By now you've probably heard about ART and how it will improve the speed and performance of Android, but how does it actually perform today? The new Android Runtime promises to cut out a substantial amount of overhead by losing the baggage imposed by Dalvik, which sounds great, but it's still far from mature and hasn't been seriously optimized yet. I took to running a battery of benchmarks against it to find out if the new runtime could really deliver on these high expectations. ART is definitely showing some promise, but I have to warn you that you probably won't be impressed with the results you'll see here today.

Reality Check
Let's be honest, benchmarking apps tend to be inaccurate and unreliable, often giving wildly varying results even when run in precisely identical situations. However, they are the only option available for recording meaningful and measurable values on performance. Further, since most popular benchmarks are built on the NDK (Native Development Kit), they won't gain any benefit from running under ART. Despite these limitations, there are some interesting and unexpected results that help us learn a little more about the current state of performance.

How The Benchmarks Were Run
Each benchmark was run at least 4 times on a completely stock Nexus 5 (it isn't even rooted) with both Dalvik and ART. To ensure there was no interference from apps at startup, a minimum of 5 minutes was given after a reboot before any tests were run. In addition to the 6 benchmarking apps listed below, I also tried 2 browser benchmarks (SunSpider & BrowserMark) in Chrome, but neither displayed significantly different scores. So, let's get to the results.
Linpack for Android
One of the key factors in getting good test results is knowing that the tools are measuring the right thing. While many of the benchmark apps target the NDK, a few stick to the SDK. The first and most consistent among them is Linpack for Android, a port of the already popular benchmarking app used throughout numerous computing platforms. It produces a score by performing a series of calculations on floating point numbers. I think this is an obvious choice after reading the description, "This test is more a reflection of the state of the Android Dalvik Virtual Machine than of the floating point performance of the underlying processor." Thanks to ART, scores are 10%-14% higher than they would be with Dalvik. Not too shabby…
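For context, here is a rough sketch (my own toy loop, not Linpack's actual code) of the kind of pure-Java floating point work Linpack measures in MFLOPS - a daxpy-style loop that runs entirely on the runtime rather than in native code, which is exactly why this type of score moves when you switch to ART:

public class DaxpyBench {
    public static void main(String[] args) {
        int n = 1000000;
        int reps = 50;
        double[] x = new double[n];
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = i * 0.5;
            y[i] = i * 0.25;
        }
        double alpha = 1.5;
        long start = System.nanoTime();
        for (int rep = 0; rep < reps; rep++) {
            for (int i = 0; i < n; i++) {
                y[i] = alpha * x[i] + y[i]; // y = a*x + y, two floating point ops per element
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mflops = (2.0 * n * reps) / seconds / 1e6;
        System.out.println(String.format("%.3f s, ~%.0f MFLOPS", seconds, mflops));
    }
}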

Real Pi Benchmark
Calculating digits of Pi is another popular way of stressing a processor, and it's particularly suitable here because most methods stick to integer calculations and avoid floating-point math entirely. Along with Linpack, this gives us coverage of both basic mathematical operations. On top of that, Real Pi happens to use native code for the AGM+FFT formula, but uses Java for Machin's formula. On the native side, ART came out about 3.5% faster, probably due to interface optimizations rather than mathematical performance. More importantly, testing with the Java code turned out to be 12% faster. (link) Note: in this test, lower numbers are better.
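I can't reproduce Real Pi's source here, but for the curious, a minimal sketch of a pure-Java Machin's-formula calculation looks roughly like this (the class name, digit count, and structure are mine, not Real Pi's) - it's exactly the kind of BigDecimal-heavy SDK code where the runtime, not native libraries, does all the work:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MachinPi {
    // arctan(1/x) via its Taylor series: sum over k of (-1)^k / ((2k+1) * x^(2k+1))
    static BigDecimal arctanReciprocal(int x, int scale) {
        BigDecimal sum = BigDecimal.ZERO;
        BigDecimal xSquared = BigDecimal.valueOf((long) x * x);
        // power holds (1/x)^(2k+1), starting at 1/x
        BigDecimal power = BigDecimal.ONE.divide(BigDecimal.valueOf(x), scale, RoundingMode.HALF_EVEN);
        for (int k = 0; power.signum() != 0; k++) {
            BigDecimal term = power.divide(BigDecimal.valueOf(2L * k + 1), scale, RoundingMode.HALF_EVEN);
            sum = (k % 2 == 0) ? sum.add(term) : sum.subtract(term);
            power = power.divide(xSquared, scale, RoundingMode.HALF_EVEN);
        }
        return sum;
    }

    public static void main(String[] args) {
        int digits = 2000;
        int scale = digits + 10; // a few guard digits
        long start = System.nanoTime();
        // Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
        BigDecimal pi = arctanReciprocal(5, scale).multiply(BigDecimal.valueOf(16))
                .subtract(arctanReciprocal(239, scale).multiply(BigDecimal.valueOf(4)))
                .setScale(digits, RoundingMode.DOWN);
        long ms = (System.nanoTime() - start) / 1000000;
        System.out.println("first 50 digits: " + pi.setScale(50, RoundingMode.DOWN));
        System.out.println("computed " + digits + " digits in " + ms + " ms");
    }
}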

Quadrant Standard
The previous tests are highly specific to mathematical performance, so it's time to branch out to test more of the system. Both Linpack and Real Pi show some positive improvement with ART, but Quadrant gave a result that borders on the amazing, perhaps even too good. The CPU score is off the charts for ART, almost doubling that of Dalvik, which is substantially better than even the most optimistic estimates we've heard so far... While tests for I/O, 2D, and 3D rendering show fairly negligible differences, Dalvik does take an oddly high 9% advantage in the memory test.

3D Mark
I was leery of using a benchmarking app that clearly focuses on the NDK, as it theoretically shouldn't be affected very much by ART. However, as the tests were run, an interesting pattern emerged where the Dalvik runtime repeatedly held a slight advantage. It's difficult to attribute a reason for Dalvik to do better here, but I'm open to theories.

AnTuTu Benchmark
Breaking performance down even further, AnTuTu helps to expose a pattern. It's increasingly clear that ART is making significant strides with floating-point operations, but doesn't usually turn out huge gains for integers. A strong showing in "RAM Operation" also hints at better use of caching as opposed to just raw memory I/O. These high scores indicate areas where the Dalvik virtual machine was probably very expensive, causing more extensive overhead. The other results weren't particularly remarkable except for Storage I/O, which might suggest a couple of specific optimizations. One significantly low score appears in the UX category under Dalvik, but it's not clear exactly what AnTuTu is measuring there, so it may not be particularly relevant.

CF-Bench
For the ultimate in number production, Chainfire's own benchmark tool takes out a lot of the guesswork by performing tests built on both the SDK and NDK. Again, native code displays a small but curious advantage on Dalvik. Here we can see the integer calculations are swinging back towards Dalvik, as well. Mostly confirming the pattern, floating-point operations demonstrate a significant speed gain, this time in the 23%-33% range.

Other Interesting Measurements
Measuring the first boot after switching runtimes isn't your typical test, no doubt, but the time it takes is quite striking. I wanted to record just how long it took to complete both the App Optimization step and then the total time to actually reach the unlock screen. When I ran this test, I had 149 apps installed.

The Other Stuff
While numbers can be helpful, they don't tell the full story. Benchmarks usually push the hardware to work as hard as possible for a few seconds, then switch to a new test that does the same thing. Sadly, this ignores details that aren't easily measured. I don't have a good way to measure the smarter timing of memory management (especially garbage collection) or better handling of multiple threads. While I can't show numbers for these things, I can demonstrate them. The classic test for a browser simply requires flinging the page as fast as possible and watching it try to keep up. After stress testing Chrome for Android with the mobile version of David's gigantic HTC One review, it turns out that even the supercharged SoC of the Nexus 5 can't quite keep up while running on Dalvik… ART, on the other hand, never lost a pixel. Take a look for yourselves - videos below.

To be fair, switching to the desktop version and giving a single fling will easily send you into blank screen territory, but it's still obvious that the renderer catches up faster on ART than on Dalvik. When more optimizations are in place, maybe we won't be far off from flawless scrolling even in the desktop version. For another demonstration, a user by the name of spogbiper has posted his own side-by-side comparison with two Nexus 7s. The one running ART seems to be more responsive.

[YouTube videos embedded in the original post]
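While I couldn't put hard numbers on garbage collection above, one crude way to poke at it yourself is an allocation-churn loop like the sketch below (entirely my own, and the absolute numbers are meaningless - only the relative jitter between the two runtimes, or the GC lines it provokes in logcat, is interesting):

import java.util.ArrayList;
import java.util.List;

public class AllocationChurn {
    public static void main(String[] args) {
        List<byte[]> keep = new ArrayList<byte[]>();
        long worstGapMicros = 0;
        long last = System.nanoTime();
        for (int i = 0; i < 200000; i++) {
            byte[] garbage = new byte[2048];      // short-lived allocation, instant garbage
            if (i % 100 == 0) {
                keep.add(garbage);                // keep a little live data so the heap isn't trivial
            }
            if (keep.size() > 2000) {
                keep.clear();
            }
            long now = System.nanoTime();
            worstGapMicros = Math.max(worstGapMicros, (now - last) / 1000);
            last = now;
        }
        System.out.println("worst gap between iterations: " + worstGapMicros + " microseconds");
    }
}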



Summary And Conclusions
The numbers and the videos together paint a picture of where ART stands today. It will definitely make a difference, but its current incarnation just hasn't matured enough to deliver significant gains. Floating-point calculations and basic responsiveness are obviously reaping the benefits of the new runtime, but that's about it. There's little or no overall improvement for integer calculations, most regular code execution, or much of anything else. In fact, it looks like gamers would be better served by sticking to Dalvik, for now.
Why aren't the benchmarks blowing us away? If I were to make a guess, it's probably because the first goal in developing ART was to make sure it was functional and stable before the heavy optimizations came into effect. If that's the case, there is probably quite a bit of code for error-checking and logging just to ensure everything is operating as it should, which might even be responsible for more overhead than we had with Dalvik. Even in the places where ART doesn't outperform Dalvik, the numbers tend to remain reasonably close. As subsequent versions of the runtime emerge from Mountain View, we should expect to see the performance gap growing wider as ART pulls ahead.
Now for the real question: is it worth switching to ART right now? Google obviously isn't recommending it for regular users, and I tend to agree. While ART seems very solid and I feel like responsiveness is better - possibly just the placebo effect - there are still circumstances where it is unstable and causes apps to crash. If there is even a single instance where you have to switch back to Dalvik to get an app to run correctly, that inconvenience far outweighs the minimal performance gain you might have had. Once I've finished this series, I will probably stick to Dalvik for the remainder of KitKat, and I imagine most people will be better served by doing the same.

Introducing ART (from source.android.com)

ART is a new Android runtime being introduced experimentally in the 4.4 release. This is a preview of work in progress in KitKat that can be turned on in Settings > developer options. This is available for the purpose of obtaining early developer and partner feedback.

Important: Dalvik must remain the default runtime or you risk breaking your Android implementations and third-party applications.

Two runtimes are now available, the existing Dalvik runtime (libdvm.so) and ART (libart.so). A device can be built using either or both. (You can dual boot from Developer options if both are installed.)

The dalvikvm command line tool can run with either of them now. See runtime_common.mk. That is included from build/target/product/runtime_libdvm.mk or build/target/product/runtime_libart.mk or both.

A new PRODUCT_RUNTIMES variable controls which runtimes are included in a build. Include it within either build/target/product/core_minimal.mk or build/target/product/core_base.mk.

Add this to the device makefile to have both runtimes built and installed, with Dalvik as the default:
PRODUCT_RUNTIMES := runtime_libdvm_default
PRODUCT_RUNTIMES += runtime_libart

                                       
MANY GREEEETZ + STAY ADDICTED!!!!
-CALIBAN666-

