So what is the Visual Enhancement Engine?

This post is on VEE, the Visual Enhancement Engine for Image/Video Processing.

Displays are ubiquitous: we use them everywhere, under bright sunlight, in varying ambient light, and in no light at all.

To target one part of this issue, viewing in sunlight, transflective screens were adopted for the first generation. But what about the other lighting conditions? The PQ screens were very usable outdoors, but once indoors, in high, normal or low ambient light, everyone felt a bit compromised on color saturation. The more you use these devices, the more you realize that time spent in direct sunlight is not the major use case, and then this little compromise suddenly becomes uncomfortable! All you are left with is a near-monochrome display in direct sunlight and washed-out colors in ambient light. A good solution, but not good enough for how tablets are being used now.

Another solution to these problems was to measure the ambient light with an ambient light sensor and bump up the backlight brightness, and with it the display power. Unfortunately, doing so increases power consumption significantly, diminishing battery life.

To resolve this issue specifically, Adam II comes with a Visual Enhancement Engine on board.

This engine delivers a television-quality visual experience by adapting display data, in real time, to improve the viewing of videos under low backlight or bright ambient light. It enhances image and video quality by compressing the dynamic range to match the characteristics of the display, resulting in a better viewing experience.

The system is based on the Orthogonal Retina-Morphic Image Transform (ORMIT) algorithm. It is a sophisticated method of dynamic range compression which differs from conventional methods such as gamma correction in that it applies different tonal and color transformations to every pixel in an image. The algorithm implements a model of human perception, which results in a displayed image that retains detail, color and vitality even under different viewing conditions. ORMIT grew out of research into biological visual systems, with particular emphasis on human vision.
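
The contrast with gamma correction can be sketched in a few lines of Python. This is purely an illustrative toy, not ORMIT itself (which is proprietary): a global gamma curve applies one identical transformation everywhere, while the local operator below adapts each pixel to the brightness of its neighbourhood.

```python
# Illustrative sketch only, not ORMIT. It contrasts a global gamma curve
# (same mapping for every pixel) with a toy local operator that adapts
# each pixel to its neighbourhood, the key idea behind per-pixel
# dynamic range compression.

def gamma_correct(pixels, gamma=0.5):
    """Global tone curve: every pixel gets the identical transformation."""
    return [p ** gamma for p in pixels]

def local_compress(pixels, radius=1, strength=0.5):
    """Toy local operator: each pixel is scaled relative to the mean of
    its neighbourhood, so dark regions are lifted more than bright ones."""
    out = []
    for i, p in enumerate(pixels):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        local_mean = sum(pixels[lo:hi]) / (hi - lo)
        # Blend the pixel toward its value normalized by local brightness.
        out.append((1 - strength) * p + strength * p / (local_mean + 1e-6))
    return out

# Pixel intensities in [0, 1]: a dark region next to a bright one.
row = [0.05, 0.06, 0.04, 0.80, 0.85, 0.90]
print(gamma_correct(row))
print(local_compress(row))
```

Note how the local operator boosts the dark pixels proportionally far more than the bright ones, which a single global curve cannot do.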

Simply put, the display is tuned under most lighting conditions with different sorts of images, and the right set of parameters for the algorithm is acquired. When a device in the field then encounters a certain type of lighting condition and an image or video, it can quickly adjust the image properties and increase visual quality.

Here is how it works:

The OMAP reads data, an image or a video, from memory. The ambient light sensor sends measured light values to the OMAP (so it can control the DPO, to be covered in the next blog) and to the VEE. The OMAP outputs DSI, which must be converted to LVDS signals before the display can read the data. On Adam II, this DSI output is instead sent to the VEE, which performs its visual enhancement in real time and outputs the data as LVDS, which the screen can read directly. The VEE also takes the ambient light values as one of its parameters. On the extreme right you can see how a display might look with VEE off and with it on. This is a photoshopped image and doesn’t do justice to the actual performance; once ready, I will share videos comparing the best-known devices around. What is missing in the picture above is the DPO.

This Visual Enhancement Engine comes on board with a Display Power Optimizer (DPO), both of which are a part of a single brilliant package developed by one of our partners (who also holds the trademarks for VEE and DPO). In the next blog we will introduce our partner and what exactly DPO does, and most importantly what it means for the overall power optimizations and visual experience on Adam II.

Warm Regards

Rohan Shravan


The Screen and the Battery


This post took longer since the launch of another disruptive product from THE APPLE needed far more introspection than otherwise required.

It’s something we dwelled on while finalizing the specifications for the upcoming Adam. The option we had was 1920×1200 in 10 inches at the same thickness. The only thing to be handled was moving the LCD driving circuit to the main motherboard, which was also in favor of what we wanted to do, since we have a very special VEE (Visual Enhancement Engine) for controlling the display. But there were 2 problems: the power requirement, and the availability of apps that run at such a resolution.

Our specs call for the second Adam to charge in under 2.5 hours. With a 25Whr battery, that means 2A at 5V. Three important observations here. First, it can be charged from a normal laptop port drawing around 500mA (using a micro-USB cable). Second, any mobile charger with a micro-USB connector can charge it (unlike the proprietary chargers everyone needs to carry for most tablets, including Adam I). And third, it reduces the number of components. Micro-USB charging is also much closer to the EU’s charging specifications.

Bumping up the resolution would need a 45Whr battery; the new iPad has one. The hard trade-offs that follow: a heavier device (more battery = more weight), more heat and, worse, more charging time. A 10W charger would charge it in ideally 4.5 hours, and connected to a Mac (5V at 500mA) it would take 18-20 hours! All of this was a BIG no for us, but doesn’t seem to be so for new iPad users.
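
The arithmetic behind these figures is worth writing out. A minimal sketch, ignoring charging efficiency losses (so real-world times run somewhat longer):

```python
# Back-of-the-envelope charging math from the post: battery capacities
# and charger ratings are the figures quoted above; efficiency losses
# are ignored, so these are ideal (best-case) times.

def charge_time_hours(battery_wh, volts, amps):
    """Ideal charge time: capacity divided by charger power."""
    return battery_wh / (volts * amps)

# 25 Wh battery on a 5 V / 2 A (10 W) micro-USB charger.
print(charge_time_hours(25, 5, 2))    # 2.5 hours
# 45 Wh battery on the same 10 W charger.
print(charge_time_hours(45, 5, 2))    # 4.5 hours
# 45 Wh battery from a 5 V / 500 mA USB port.
print(charge_time_hours(45, 5, 0.5))  # 18.0 hours
```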

The other problem was the availability of applications. If you don’t have applications that make use of the resolution, looking at a beautiful homescreen alone won’t make sense. Just to give you an idea, even a 1080p movie on this screen needs to be scaled up!

A higher-resolution screen is on our roadmap, but not yet. The Android ecosystem is not ready, and instead of bumping up the battery to support a higher-res screen, we’d rather focus on cutting the display’s power draw even further so a higher-resolution screen can be supported.

In the next blog I will introduce the VEE and DPO (Display Power Optimizer) technologies, which are part of our next product. Implemented correctly, they can yield a 20-40% improvement in LCD power consumption.
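
To see what a 20-40% LCD saving buys in runtime, here is a quick sketch. The display’s 50% share of total system power is an assumed, illustrative figure, not a measured Adam II number:

```python
# Rough sketch of what an LCD power saving means for battery life.
# ASSUMPTION: the display draws 50% of total system power (illustrative
# figure only, not a measured Adam II value).

def runtime_gain(display_share, lcd_saving):
    """Factor by which battery life improves if the LCD's draw drops."""
    new_power = (1 - display_share) + display_share * (1 - lcd_saving)
    return 1 / new_power

for saving in (0.20, 0.40):
    print(f"{saving:.0%} LCD saving -> {runtime_gain(0.5, saving):.2f}x runtime")
```

Under that assumption, a 20% LCD saving stretches runtime by about 1.11x and a 40% saving by 1.25x.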


Rohan Shravan

The Green Android Planet


Warm Regards

Rohan Shravan

Cores and Mr. Amdahl

Hello Everyone,

This is in continuation of last week’s post: LINK.

Following the last post, we can all see a lot of references to 4 cores vs 2 cores. 2 cores does look underwhelming, right? Let me ask you another question: there is a bike with 2 tyres and a car with 4 tyres; which one will go faster? The answers to both questions are somewhat related.

What do you think Mr. Amdahl?

Amdahl's Law

“If for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm’s operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl’s law states that the maximum speedup of the parallelized version is 1/(1 – 0.12) = 1.136 times as fast as the non-parallelized implementation.” – WikiPedia
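
The general form of the law is easy to play with. A small sketch (the 4-core case below uses an assumed 50%-parallel workload as an illustration):

```python
# Amdahl's law from the quote above: with parallel fraction p and
# speedup s on that fraction, overall speedup = 1 / ((1 - p) + p / s).

def amdahl_speedup(p, s):
    """Overall speedup for parallel fraction p accelerated by factor s."""
    return 1 / ((1 - p) + p / s)

# The Wikipedia example: 12% parallelizable, run "arbitrarily quickly"
# (s -> infinity), so the ceiling is 1 / (1 - 0.12).
print(round(amdahl_speedup(0.12, 1e12), 3))  # 1.136
# Even 4 cores on an assumed 50%-parallel workload gain only:
print(round(amdahl_speedup(0.50, 4), 2))     # 1.6
```

The serial fraction, not the core count, sets the ceiling: that is exactly why "4 cores" on the box says so little by itself.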

Let’s check this image:

Image Source -

You can see the 4 cores here, but you can also see HD Video Decoder and Encoder blocks. Now let’s see this image:

Source - The Verge

Observe a few things here. We are comparing Tegra 2 (dual core) vs Tegra 3 (quad core) here, for power. We are focusing on video power saving, which should be handled by the HD video blocks. There are separate audio, image, HDMI, display and really awesome GPU blocks.

So where do the 4 cores actually help? Or, to be contextually correct, where have we added parallelism to use these 4 cores? Moreover, there are SIMD units (NEON) which accelerate suitably optimized code. There is a brilliant article on the “death of CPU scaling” which you must read. I think it is safe to say, in the current context, that adding more cores to the GPU makes much more sense until we see an OS or an API which lets developers use these cores.

If all the video, audio, imaging, graphics, etc. processing requirements are taken away from the CPU, it is now mostly responsible for the operating system’s demands (you are aware that ICS uses hardware acceleration for its user interface, which is again another block outside the CPU). Android must have an answer for this.

RenderScript is finally a great way of using the parallelism available as the number of cores increases (CUDA is still not available on embedded devices; Dalvik is closed source, so I can’t comment on it). RS does two tasks: compute and graphics. Graphics runs on the GPU and compute on the cores. So if you want to compute, you can use these cores. Current applications would be linear algebra, Fourier transforms, n-body problems, graph traversal, hidden Markov models (e.g. speech recognition), and finite-state machines. We’d love to see SoC manufacturers or the open source community come forward with APIs which developers, students, professionals and hobbyists can comprehend easily and actually use to improve the performance of their applications.
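
The data-parallel idea behind RS compute can be sketched in plain Python: a "kernel" that touches each element independently, dispatched across a worker pool. This is only an analogy, not RenderScript itself; the pool here is a thread pool, and the `kernel` function is made up for illustration.

```python
# RenderScript-style data parallelism in miniature: a per-element
# "kernel" with no dependencies between elements, so chunks can run on
# separate workers. Python sketch only; RenderScript would dispatch the
# kernel across CPU cores or the GPU.
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    """Per-element work: independent of every other element."""
    return x * x + 1

def parallel_map(data, workers=4):
    """Apply the kernel to every element using a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))

data = list(range(8))
# The parallel result must match a plain serial map.
assert parallel_map(data) == [kernel(x) for x in data]
print(parallel_map(data))  # [1, 2, 5, 10, 17, 26, 37, 50]
```

The listed applications (FFTs, n-body, graph traversal) all fit this shape: lots of independent per-element work with little serial glue, which is exactly where extra cores pay off under Amdahl’s law.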

We have no idea of the amount of parallelism available on Android, but yes, in marketing and on paper, 4 does look AWESOME! 🙂

We also wanted to cover some of the blocks on the OMAP. Here is the image again for reference:

OMAP 44XX Block Diagram

Let’s talk a bit on IVA today.

IVA stands for Image and Video Accelerator, and as expected it does a lot:

  • 1080P video
  • Slow Motion Camcorder
  • Real-time Transcoding up to 720p
  • Video Conferencing up to 720p

Other interesting components are:

  • Video DMA Processor
  • Shared L2 interface and Memory
  • Motion Estimation Acceleration Engine
  • Entropy coder/decoder
  • and much more. Check out the TI OMAP TRM for more information.

Why should we (hardware designers, programmers and interested users) know about the IVA? Because this is the part which decides which video formats we can play. As normal users, the most we know about a video is that it is mp4, avi, divx, etc. We let manufacturers (including us) claim 1080p support, and when we get the device we realize that there are multiple variants of these formats, technically called profiles: Constrained Baseline, Main, High, Simple, Advanced Simple and many more. Not all formats are free; some carry royalties (like MPEG) which in the end add to the overall cost of the device. Also, not all formats are supported out of the box, but SoC vendors share details on which are enabled and which OEMs should work on. This time we know which formats are already working and which we need to work on. We will share the full profile support and not just “1080p”, since this is very important for end usability; frankly, the user doesn’t give a damn, his video should “just work” 🙂
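
To make the profile point concrete: an H.264 stream declares its profile via a `profile_idc` value in its sequence parameter set, and a decoder that handles Baseline may still choke on High. The `profile_idc` values below come from the H.264 spec; the helper itself is just an illustrative lookup.

```python
# Why "1080p support" is underspecified: the same container (mp4, avi)
# can carry streams at very different H.264 profiles, identified by the
# profile_idc byte in the sequence parameter set. Values per the H.264
# spec; the helper is an illustrative sketch, not a real parser.
H264_PROFILES = {
    66: "Baseline",   # Constrained Baseline if constraint_set1_flag is set
    77: "Main",
    88: "Extended",
    100: "High",
    110: "High 10",
    122: "High 4:2:2",
}

def profile_name(profile_idc):
    """Map a profile_idc value to its human-readable profile name."""
    return H264_PROFILES.get(profile_idc, f"unknown ({profile_idc})")

print(profile_name(100))  # High
print(profile_name(77))   # Main
```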

I think it is becoming a long post, so let’s stop here. Next time we will cover the ISP and more.

Warm Regards

Rohan Shravan

The Infographs


Lately our designers have spent a bit of time on info-graphs. Info-graphs are graphic visual representations of information, presenting complex information quickly and clearly. While our study is more focused on how we can create an automated process, where the device can collect information and create these for a user, Dhyani created some art presenting his concepts. Check these out below:

This is the email distribution spread over countries. The idea was to figure out the locations of senders for a user and map them over the globe. P.S. This is just a concept, with no relation between the locations and email types.

This is the graph of a user’s activity spread over a period of time. Activities are divided into categories, and each circle represents a task/app/use-case. 

This is an impressive ball-graph plot. 

This is a conceptual drawing of how content is distributed over the internet. 

And this one is how brands fared over CES. CNET, PCWorld and Gizmodo were covered here. 

This is a continuous activity; we are now working toward a point where we can define drawing primitives and rules based on which these info-graphs can be auto-generated. It is going to be a long but fun project at Notion Ink.

Dhyani is the brain behind this project. In the comments you can post requests for more info-graphs and case scenarios and we should be able to build more for you. Your requests will definitely help the project. 

Warm Regards

Rohan Shravan



There were many questions on the main blog about the switch from Tegra to OMAP. I thought we should clarify this.

So which is better, Tegra or OMAP? NVIDIA will say Tegra, of course, and TI will say OMAP. Does that mean we should go by the benchmarks? Or maybe the sheer specifications of both SoCs? Isn’t OMAP’s memory bandwidth higher than Tegra 3’s and Snapdragon’s? But Tegra 3 is quad core, and even its GPU is updated. Then why does iPad 2 beat Tegra 3 by miles on GLBenchmark? We had a lot of similar questions when we wanted to opt for one. If you followed kernel developments you’d know that OMAP was definitely the next SoC supported by Google, so this decision had to be made on our end, and fast.

The answer came from a very experienced industry veteran (one of our 3 mentors), who said that unless as an OEM you can get 100% out of these chips, all benchmarks, specifications and latest developments are useless. So the answer wasn’t based on which chip can beat the other, but on which one we can leverage to the highest possible extent. And in this regard OMAP definitely beats any SoC out there w.r.t. documentation, number of use-cases modeled, white papers, reference documents and much more. Bangalore also hosts a lot of ex-TI professionals who helped build OMAP, so answers are not hard to find.

Unlike last time, when we banked on Tegra without fully utilizing its power, this time our focus is to offer TI the best product based on OMAP. TI is a very respectable firm and I believe Adam II will be a marvel in their portfolio.

Learning time!

Check out this link on TI’s site for more information on OMAP.

Chip Block Diagram Source: TI High Resolution LINK

A few important things we should read from this diagram:

  • Dual-channel LPDDR2 memory, which makes for easier, faster memory access and overall system efficiency (isn’t all of your OS in RAM?)
  • This diagram mentions POWERVR™ SGX540, but OMAP 4470 has SGX544
  • TWL6030 and TWL6040 are companion support chips, heavily optimized for lowering the power consumption on OMAP
  • WiLink™ 7.0 is a mobile wireless LAN chip: a single solution for WLAN, GPS, Bluetooth and FM. It supports Bluetooth 3.0 as well as the Bluetooth Low Energy profile, which is a core feature of Bluetooth 4.0. Link

A few other terms we should know, since we will be using them in the future:

  • Interfaces (in our context, the protocols hardware peripherals follow to talk to each other)
    • I2C: one of the best 2-wire interfaces, invented by Philips. It used to be slow, but now supports up to 3.4 Mbit/s. The OMAP has 4 of these. Nearly all sensors and touch screens support the I2C interface. Read more details here: LINK
    • CSI-2: this is the Camera Serial Interface. Check this LINK
    • SPI: the Serial Peripheral Interface, named by Motorola. Compared to I2C it needs more wires but supports much higher data rates, so it is the better choice when transfer speed matters.
    • McBSP: Multichannel Buffered Serial Port; supports DMA, full-duplex data transfer, and a lot of configurability. TI uses it in a lot of their products. Looking at the diagram, you can guess its importance in communicating with the WLAN or 3G/4G modules. More here
    • UART: Universal Asynchronous Receiver/Transmitter; translates data between parallel and serial forms, mostly used for the debug USB, WLAN module, NFC, etc.
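
The parallel-to-serial translation a UART performs can be sketched for the common 8N1 framing (one start bit, eight data bits, one stop bit). This is a simplified model, ignoring baud-rate timing and parity options:

```python
# The parallel <-> serial translation a UART performs, modeled for an
# 8N1 frame: start bit (0), eight data bits LSB-first, stop bit (1).
# Simplified sketch: real UARTs also handle baud timing and parity.

def uart_frame(byte):
    """Serialize one byte into the bit sequence put on the wire."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]

def uart_unframe(bits):
    """Recover the byte from a frame, checking start and stop bits."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = uart_frame(0x41)  # ASCII 'A'
print(frame)              # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert uart_unframe(frame) == 0x41
```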

That’s all for this week, next time we will cover Major Blocks (IVA, ISP and the mighty SGX) on the Chip.

Warm Regards

Rohan Shravan

Hello Everyone!

Hello Everyone!

Welcome to Designing Adam 2 WordPress Blog.

This is the development diary of Adam 2, so we can keep track of what we are doing and why we are doing it. We will keep our focus on development, and this blog will go a little deeper into the technical details, so get your notebooks ready!

Since we now have 2 blogs, one on Adam I and the other on the development of Adam II, we have dedicated information threads to follow for issues and updates!

Times ahead are exciting!

Warm Regards

Rohan Shravan