This post is about VEE, the Visual Enhancement Engine for image and video processing.

Displays are ubiquitous, and we use them everywhere: under bright sunlight, in varying ambient light, and in complete darkness.

To specifically target one part of this issue, viewing in sunlight, transflective screens were adopted for the first generation. What about the other lighting conditions? The Pixel Qi (PQ) screens were very usable outdoors, but once indoors, in high, normal or low lighting, everyone felt a bit compromised on color saturation. The more you use the device, the more you realize that time spent in direct sunlight is not the major use case, and then this little compromise suddenly becomes uncomfortable! All you are left with is a near-monochrome display in direct sunlight and a washed-out color feel in ambient light. A good solution, but not good enough for how tablets are used now.

Another solution to these problems was to measure the ambient light with an ambient light sensor and bump up the backlight brightness or display power. Unfortunately, doing so increases power consumption significantly, diminishing battery life.

To address exactly this issue, Adam II comes with a Visual Enhancement Engine on board.

This engine delivers a television-quality visual experience by adapting display data in real time, improving the ability to view videos under low backlight or in bright ambient light. It enhances image and video quality by compressing the dynamic range to match the characteristics of the display, resulting in a better viewing experience.
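
As a rough illustration of what "compressing the dynamic range to match the display" means (this is not the actual VEE algorithm, just a toy sketch in Python/NumPy), a simple tone curve can lift shadow detail so an image stays readable when the backlight is dim or bright ambient light washes out the dark end of the display's range:

```python
import numpy as np

def compress_dynamic_range(frame, strength=0.6):
    """Toy global tone-mapping curve (NOT the real VEE transform).

    frame    : float array in [0, 1], shape (H, W) or (H, W, 3)
    strength : 0 = identity, 1 = strong shadow lift
    """
    # A log-style curve boosts dark pixels much more than bright ones,
    # so shadow detail survives a dim backlight or strong ambient glare.
    k = 1.0 + 9.0 * strength                  # curve steepness
    return np.log1p(k * frame) / np.log1p(k)

# A dark-to-bright test ramp: shadows are lifted, highlights stay put.
ramp = np.linspace(0.0, 1.0, 5)
print(ramp)
print(compress_dynamic_range(ramp).round(2))
```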

The system is based on the Orthogonal Retina-Morphic Image Transform (ORMIT) algorithm. It is a sophisticated method of dynamic range compression which differs from conventional methods such as gamma correction in that it applies a different tonal and color transformation to every pixel in an image. The algorithm implements a model of human perception, which results in a displayed image that retains detail, color and vitality even under different viewing conditions. ORMIT was developed as a result of research into biological visual systems, with particular emphasis on the human one.
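
ORMIT itself is proprietary, but the distinction drawn above can be sketched: gamma correction applies one global curve to every pixel, whereas a retina-inspired (retinex-style) operator adapts each pixel to its local surroundings. The code below is a generic local tone-mapping example under that assumption, not the ORMIT math:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_correct(img, gamma=2.2):
    """Conventional approach: one global curve applied to every pixel."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

def local_tone_map(img, sigma=25, eps=1e-3):
    """Retinex-style sketch: each pixel is adapted to its local surround,
    so the tonal transform differs from pixel to pixel (NOT ORMIT itself)."""
    surround = gaussian_filter(img, sigma=sigma)   # local adaptation level
    out = img / (surround + eps)                   # divide by local mean
    return np.clip(out / out.max(), 0.0, 1.0)      # renormalise for display

# A frame with a dark left half and a bright right half.
img = np.hstack([np.full((64, 64), 0.05), np.full((64, 64), 0.8)])
print(gamma_correct(img)[32, [16, 112]])   # global: same curve everywhere
print(local_tone_map(img)[32, [16, 112]])  # local: dark half lifted relative to bright half
```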

Simply put, the display is tuned in advance under most lighting conditions and for different sorts of images, and the right set of parameters for the algorithm is acquired from that tuning. When a device in the field then encounters a particular lighting condition and an image or video, it can quickly adapt the image properties and improve visual quality.
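
One way to picture the pre-tuned parameters (the post doesn't describe how the tuning data is actually stored, so the names, values and lux thresholds below are purely hypothetical) is a small lookup table keyed by ambient-light ranges, from which a parameter set is picked at runtime:

```python
# Hypothetical illustration only: the real VEE parameter set and its
# lighting-condition bins are not public.
AMBIENT_PRESETS = [
    # (max_lux, preset)
    (50,           {"shadow_lift": 0.2, "saturation": 1.00}),  # dark room
    (500,          {"shadow_lift": 0.4, "saturation": 1.05}),  # indoor
    (10_000,       {"shadow_lift": 0.7, "saturation": 1.15}),  # bright indoor / shade
    (float("inf"), {"shadow_lift": 1.0, "saturation": 1.30}),  # direct sunlight
]

def pick_preset(ambient_lux):
    """Return the pre-tuned parameter set for the measured ambient light."""
    for max_lux, preset in AMBIENT_PRESETS:
        if ambient_lux <= max_lux:
            return preset

print(pick_preset(120))     # indoor preset
print(pick_preset(30_000))  # sunlight preset
```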

Here is how it works:

The OMAP reads data from memory; it can be an image or a video. The ambient light sensor sends measured light values to the OMAP (so it can control the DPO, to be covered in the next blog) and to the VEE. The OMAP has a DSI output, which has to be converted to LVDS signals before the display can read the data. On Adam II, this DSI output is instead sent to the VEE, which performs its visual enhancement in real time and outputs the data as LVDS, which the screen can now read. The VEE also takes the ambient light value as one of its parameters. On the extreme right you can see how a display might look with VEE off and with it on. This is a photoshopped image and doesn't do justice to the actual performance; once ready, I will share videos comparing Adam II with the best-known devices around. What is missing in the picture above is the DPO.
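
To summarise the flow in one place, here is a toy software model of the path described above: frame in, ambient-light value in, enhanced frame out. The real VEE is dedicated hardware sitting between DSI and LVDS; the function names and the lux-to-strength mapping here are made up for illustration:

```python
import numpy as np

def read_ambient_lux():
    """Stand-in for the ambient light sensor (hypothetical value)."""
    return 8_000.0

def vee_enhance(frame, ambient_lux):
    """Toy stand-in for the VEE stage between "DSI in" and "LVDS out":
    the brighter the surroundings, the stronger the shadow lift."""
    strength = min(ambient_lux / 10_000.0, 1.0)   # map lux into [0, 1]
    k = 1.0 + 9.0 * strength
    return np.log1p(k * frame) / np.log1p(k)

def display_pipeline(frame):
    lux = read_ambient_lux()             # sensor value goes to OMAP and VEE
    enhanced = vee_enhance(frame, lux)   # enhancement applied in real time
    return enhanced                      # would be driven out to the panel

frame = np.random.default_rng(0).random((4, 4))  # pretend decoded video frame
print(display_pipeline(frame).round(2))
```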

This Visual Enhancement Engine comes on board with a Display Power Optimizer (DPO), both of which are part of a single brilliant package developed by one of our partners (who also holds the trademarks for VEE and DPO). In the next blog we will introduce this partner, explain what exactly the DPO does, and, most importantly, what it means for the overall power optimization and visual experience on Adam II.

Warm Regards

Rohan Shravan
