In the consumer IoT camera market, there is an ongoing trend toward wider angles, fisheye, and even 360-degree fields of view at ever-higher resolutions. Figure 1 shows a small sample of IoT cameras on the market, including home security (Withings and D-Link), baby cams (Philips Avent), 360 cameras (Zmer One), and sports cameras (GoPro). The critical needs are excellent video compression to reduce bandwidth, low power, and the ability to derive multiple views from one video stream.
Automotive CVP Markets:
The automotive viewing camera market includes cameras for back-up, rear- and side-view mirror replacement, surround-view systems (usually four cameras), and driver monitoring. These applications require high-quality High Dynamic Range (HDR) image processing along with geometric processing to support wide-angle and multiple-view displays. Figure 2 illustrates the various viewing cameras that will increasingly be used in auto safety systems. It is critical to realize that, for safety reasons, the video viewed by the driver must be delivered in real time; at high speed, low latency becomes an absolutely essential requirement!
Critical Technologies for the Auto CVP Markets:
Geometric Processing: Geometric processing is a means of mathematically transforming an image in both position space (X, Y) and color space (R, G, B). A GPU or other large processor can perform these functions fast enough for low-speed applications, but not for multiple cameras at highway speed. High-speed applications need sub-frame-latency engines. GEO’s eWarp technology (Figure 4) provides a scalable, real-time (sub-frame latency) geometric processing engine that is precise, extremely flexible, and ultra-efficient in terms of power and size. The transformations range from simple scaling and rotation to highly non-linear transformations with thousands of parameters. It may also be desirable to derive multiple output video streams from a single wide-view video and display them in multiple windows, each window with a unique transformation, all processed with sub-frame latency.
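As a concrete illustration, the core operation behind this kind of geometric processing is an inverse-mapping remap: for each output pixel, look up where it came from in the input frame and sample it with interpolation. The NumPy sketch below is a toy software model of that idea, not GEO’s eWarp implementation; the names `warp`, `map_x`, and `map_y` are invented for this example.

```python
import numpy as np

def warp(image, map_x, map_y):
    """Inverse-map geometric warp: for each output pixel (y, x), sample
    the input at (map_y[y, x], map_x[y, x]) with bilinear interpolation.
    A hardware engine streams this operation at sub-frame latency."""
    h, w = image.shape[:2]
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)[..., None]  # horizontal fraction
    fy = np.clip(map_y - y0, 0.0, 1.0)[..., None]  # vertical fraction
    img = image[..., None] if image.ndim == 2 else image
    tl, tr = img[y0, x0], img[y0, x0 + 1]          # top-left, top-right
    bl, br = img[y0 + 1, x0], img[y0 + 1, x0 + 1]  # bottom neighbours
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    out = top * (1 - fy) + bot * fy
    return out[..., 0] if image.ndim == 2 else out

# Example transformation: rotate a frame 10 degrees about its centre by
# building the inverse coordinate maps (output -> input), then remapping.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w].astype(float)
cx, cy = w / 2, h / 2
theta = np.deg2rad(10)
map_x = np.cos(theta) * (xx - cx) + np.sin(theta) * (yy - cy) + cx
map_y = -np.sin(theta) * (xx - cx) + np.cos(theta) * (yy - cy) + cy
frame = np.random.rand(h, w)
rotated = warp(frame, map_x, map_y)
```

The same `warp` call, fed different coordinate maps, covers everything from simple rotation to a highly non-linear fisheye dewarp; only the maps change, which is what makes a single engine so flexible.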
Another highly desirable attribute is to implement automatic digital calibration and alignment of cameras on the factory floor, thereby further reducing costs.
Video Compression: Video compression is required only in CVP applications where video must be captured and stored, such as IoT. Live-view auto cameras do not require video compression. Down the road, however, continuously capturing video could help identify the party at fault in an accident, which may translate into lower insurance premiums.
Sensor Image Signal Processing: To turn raw image sensor data into pixels, an Image Signal Processor (ISP) is needed to provide advanced image processing such as automatic white balance (AWB), auto exposure (AE), advanced adaptive noise reduction and sharpening, and, most importantly, High Dynamic Range (HDR) rendering to ensure that very bright and very dark areas of a scene retain as much information as possible. It should be noted that automotive image sensors have much larger pixels, so most auto cameras have resolutions of about 1 megapixel or less, growing to 2 megapixels in newer cameras going forward (see Figure 6).
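To make one of these ISP stages concrete, here is a minimal NumPy sketch of the classic gray-world algorithm for automatic white balance. It assumes the average scene color is neutral gray and scales each channel to match; it is one textbook AWB approach, offered purely for illustration, and not necessarily what any particular automotive ISP implements.

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: assume the average scene
    colour is neutral grey and scale each channel so its mean matches
    the overall mean, removing a global colour cast."""
    rgb = rgb.astype(float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)  # mean R, G, B
    gains = channel_means.mean() / channel_means     # per-channel gain
    return np.clip(rgb * gains, 0.0, 255.0)

# A frame with a warm (reddish) cast: after correction the three
# channel means are equal, i.e. the cast has been neutralised.
frame = np.stack([np.full((4, 4), 180.0),   # R too high
                  np.full((4, 4), 120.0),   # G
                  np.full((4, 4), 90.0)],   # B too low
                 axis=-1)
balanced = gray_world_awb(frame)
```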
Vehicle Mirror Replacement with Ultra-Low Latency Cameras:
Replacing mirrors with cameras will have many economic and safety benefits. Let’s first look at side mirrors: these bulky motorized assemblies can reduce fuel efficiency by up to six percent at highway speeds, especially on larger vehicles. They are expensive to build and replace, are easily damaged, and still leave blind spots, requiring drivers to turn their heads before changing lanes. A mirror-replacement camera, with an associated display, solves both the drag and blind-spot problems at a fraction of the cost.
In the case of rear-view mirrors, the key problems solved arise when the rear-window view is limited or blocked, or, as in some vehicles, when there is no rear window at all. A rear-view camera mirror replacement can give the driver a better view of everything behind the vehicle, including adjacent lanes.
It is essential that cameras replacing mirrors provide real-time video with minimal latency. The view in a glass mirror is instantaneous, since the image is delivered at the speed of light. At 70 mph, a car travels ~0.1 foot per millisecond, so latency must be well below a single frame time at 60 FPS. Powerful software-based ECUs or GPUs can introduce latencies of up to 200 milliseconds, displacing objects by 20 feet or more, which is unsafe and unacceptable. Figure 7 illustrates some of the various views possible from a single wide-angle video stream, with sub-frame latency and a different transformation in each window.
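The latency arithmetic above can be checked directly. The helper name `displacement_ft` is invented for this worked example:

```python
def displacement_ft(speed_mph, latency_ms):
    """Distance a vehicle covers during a given processing latency."""
    ft_per_ms = speed_mph * 5280 / 3600 / 1000  # mph -> feet per millisecond
    return ft_per_ms * latency_ms

# At 70 mph, a 200 ms software pipeline lets the scene move ~20.5 feet
# before it reaches the display, while a sub-frame warp (under one
# 16.7 ms frame at 60 FPS) keeps the displacement under ~1.7 feet.
print(displacement_ft(70, 200))   # ~20.53
print(displacement_ft(70, 16.7))  # ~1.71
```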
Computer Vision and Artificial Intelligence:
On the road to autonomous vehicles, it is difficult to imagine a sensor on the car that does not benefit from machine-intelligent processing. Given the significant number of sensors and the high bandwidth of the data involved, especially in the case of cameras, the processing will need to move from a centralized approach to an architecture that supports processing on a sensor-by-sensor basis, also known as processing at the edge.
Increasingly, this means addressing object detection in the camera video processor to support “smart cameras” for applications such as object-aware back-up cameras, blind-spot support in mirror replacement, and automatic calibration and alignment of surround-view systems. Another upcoming application is using cameras to monitor drivers and warn them if they are drowsy or not paying attention. This driver-monitoring camera is also a key system component of semi-autonomous vehicles, ensuring the driver is paying attention when the system reverts control back to the driver.
Automotive Head-Up Display (HUD):
With all of the new features being added to vehicles today, the Human Machine Interface (HMI) within the cockpit has become a key area of focus in the automotive industry. One key technology that has shown promise is the HUD. A central challenge for HUDs that project video through the windshield is the quality and curvature of the windshield surface that the HUD must use for projection.
Real-time geometric processing is essential to improving image quality by automatically correcting these optical distortions. This not only improves image quality but also enables the use of lower-cost windshields and HUD mirrors. Looking even further out, HMI experts are exploring augmented-reality HUD applications, where objects on the road are highlighted with graphics in real time; these applications require the distortion correction to be applied in real time as well.
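One simple way to model such a correction is to pre-warp each frame with the inverse of the display optics’ distortion. The sketch below builds coordinate maps for a single-coefficient radial model, a common textbook simplification; it is an illustrative assumption, not GEO’s actual HUD pipeline, and `radial_premap` is a name invented for this example.

```python
import numpy as np

def radial_premap(w, h, k1):
    """Build inverse coordinate maps for a single-coefficient radial
    model x_d = x * (1 + k1 * r^2). Feeding these maps to a bilinear
    remap pre-distorts each frame so that an optical surface with the
    opposite distortion (e.g. a curved windshield) displays a
    geometrically correct image."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xn, yn = (xx - cx) / cx, (yy - cy) / cy   # normalised coordinates
    r2 = xn ** 2 + yn ** 2                    # squared radius from centre
    factor = 1.0 + k1 * r2
    return xn * factor * cx + cx, yn * factor * cy + cy

# k1 < 0 pulls the corners inward (barrel pre-distortion); the HUD
# optics then spread them back out, cancelling pincushion distortion.
map_x, map_y = radial_premap(1280, 720, -0.08)
```

In practice the coefficient (or a full per-pixel mesh) would be measured per windshield during factory calibration, which is why automatic digital calibration pairs naturally with this kind of correction.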
HUDs will become more popular as costs come down, since they allow drivers to keep their eyes on the road rather than looking down at a display. Next-generation HUDs now in development will present far more information, transforming the driver’s seat into an automotive cockpit. See Figure 8.
Auto Safety System Architectures:
Various entities are actively debating what the ultimate architecture of automotive safety camera systems will be. Some advocate using a super-computer (ECU) under the hood to handle everything. Perhaps a better approach is to have one or two powerful processors under the hood, one for safety functions and one for infotainment, with the various sub-systems (cameras, displays) having their own “smarts” built in. This would enable them to send pre-processed video to the ECUs, simplifying the ECUs’ tasks and greatly reducing latency. The advantages of having a processor for each camera include eliminating a single point of failure for the broader system, reducing system latency through local processing, increased flexibility for optioning different classes of vehicles with different numbers of cameras, and ensuring that the image and geometric processing are matched to each individual image sensor and lens during manufacture.
The Semiconductor Sales Process in the Auto Industry:
The auto industry supply chain is such that the auto OEMs (Toyota, VW, GM, etc.) source their camera modules from Tier-1 suppliers (Panasonic, Sony, Magna, Hyundai Mobis, Bosch, etc.). A chip supplier needs to market its devices both to the OEMs (to show the device’s unique benefits) and to the Tier-1s, which are the ultimate IC customers. Once the Tier-1s have evaluated the IC product and built camera prototypes, they present them to OEMs, who in turn spend considerable time evaluating and qualifying the product. Upon a successful evaluation, the cameras are assigned to a particular model and scheduled for production, typically starting 18 months or longer after that decision. From developing a new chip to ramping volume production can take 2-4 years, but there are some silver linings. Typical auto platforms are in production for 4-5 years with accurate and dependable volume forecasts. In addition, given the effort required to qualify a new camera, once a Tier-1 has the camera in production with one OEM, it typically becomes much easier to roll the same camera quickly across multiple new OEM product lines.
The auto camera IC market is a great market – once you get there! And we believe GEO is there!