For my final CS148 project, I wrote a near-realtime ray tracer for Android. If you're running Android 4.0 (Ice Cream Sandwich) or higher, you can download the completed tech demo from the Google Play Store.
Using the Android NDK, I started out by implementing a simple ray tracing algorithm in native C++. Once the basic functionality was complete, I worked to maximize performance by introducing multi-threading, interlacing, acceleration structures, and adaptive sampling. Thanks to pthreads, the application can take advantage of powerful multi-core mobile processors. Interlacing increases the framerate by skipping half of the pixels on each frame; thanks to persistence of vision, this effectively doubles the framerate while adding only minor artifacting.
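The interlacing idea can be sketched roughly as follows: alternate frames trace only even or only odd rows, leaving the skipped rows holding their values from the previous frame. This is an illustrative sketch, not the project's actual code; names like `tracePixel` and `renderInterlaced` are hypothetical.

```cpp
#include <cstdint>
#include <vector>

struct Color { uint8_t r, g, b; };

// Stand-in for the real per-pixel ray trace.
Color tracePixel(int x, int y) {
    return Color{ (uint8_t)(x & 0xFF), (uint8_t)(y & 0xFF), 0 };
}

void renderInterlaced(std::vector<Color>& framebuffer,
                      int width, int height, int frameIndex) {
    // Even frames trace even rows, odd frames trace odd rows,
    // so each frame does roughly half the work. Untraced rows
    // simply keep last frame's pixels.
    int startRow = frameIndex & 1;
    for (int y = startRow; y < height; y += 2) {
        for (int x = 0; x < width; ++x) {
            framebuffer[y * width + x] = tracePixel(x, y);
        }
    }
}
```

Because the stale rows are only one frame old, the eye blends them with the fresh rows, which is why the artifacting stays mild on a mostly-static scene.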
My acceleration structure is currently extremely basic: it simply computes the screen-space bounding square around each sphere and rejects the viewing rays that fall outside it. While this test requires very few instructions per ray, it does nothing for reflection rays. Implementing a better acceleration structure is therefore a top priority for improving performance.
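A minimal sketch of such a bounding-square test, assuming a simple pinhole camera looking down -z: project the sphere's center, scale its radius by the same perspective factor, and reject any primary ray whose pixel falls outside the resulting square. All names here are illustrative, and the projected radius is an approximation (a real implementation would pad it slightly to stay conservative under perspective).

```cpp
struct Sphere { float cx, cy, cz, r; };

struct BoundingSquare { float minX, minY, maxX, maxY; };

BoundingSquare projectSphere(const Sphere& s, float focalLength) {
    // Perspective-project the center, then pad by the projected radius.
    // The sphere is assumed to be in front of the camera (cz < 0).
    float scale = focalLength / -s.cz;
    float px = s.cx * scale;
    float py = s.cy * scale;
    float pr = s.r * scale;   // approximate projected radius
    return BoundingSquare{ px - pr, py - pr, px + pr, py + pr };
}

bool mayHit(const BoundingSquare& b, float x, float y) {
    // Cheap rejection test run before the full quadratic
    // ray-sphere intersection.
    return x >= b.minX && x <= b.maxX && y >= b.minY && y <= b.maxY;
}
```

The key limitation is visible in the signature: `mayHit` takes a screen-space pixel position, so it only helps primary rays; reflection rays start from arbitrary points in the scene and still need the full intersection test.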
Adaptive sampling is an algorithm I wrote to avoid recalculating pixels that haven't changed. In a mostly-static scene, this greatly improves performance.
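One way such an algorithm might be structured (a hedged sketch, not the project's actual implementation): keep a per-pixel dirty flag, mark only the regions touched by objects that moved since the last frame, and retrace just those pixels.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical dirty-pixel tracker: only pixels flagged dirty are
// retraced on the next frame; everything else keeps its cached color.
struct DirtyMap {
    int width, height;
    std::vector<bool> dirty;

    DirtyMap(int w, int h) : width(w), height(h), dirty(w * h, true) {}

    // Mark a rectangle (e.g. a moved sphere's old and new bounding
    // squares) as needing recomputation.
    void markRect(int x0, int y0, int x1, int y1) {
        for (int y = std::max(0, y0); y <= std::min(height - 1, y1); ++y)
            for (int x = std::max(0, x0); x <= std::min(width - 1, x1); ++x)
                dirty[y * width + x] = true;
    }

    bool needsTrace(int x, int y) const { return dirty[y * width + x]; }

    // Called after a frame completes: everything is clean again.
    void clear() { dirty.assign(width * height, false); }
};
```

When nothing moves, no pixel is dirty and the frame is nearly free, which is why the gain is largest in mostly-static scenes.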
As a result of these optimizations, my raytracer runs at close to real-time on a multi-core Android device (between 14 and 20 interlaced FPS on my Nexus 7). At this speed, you can poke the spheres and watch them respond, all while the render calculation is occurring in the background.
There is plenty of room for improvement: I'd like to add a better acceleration structure, and I have a few other ideas for boosting performance. I'd also like to experiment with ray tracing on the GPU, since the application is currently entirely CPU-bound. Please let me know if you're interested in the code, or if you're interested in helping! Again, you can try out the application on the Play Store.