Monday, March 11, 2013

I Can See You Now, in 10GbE - Machine Vision


A draft of the first half of this article was previously posted as Part 1 on 10Gbe.net. This weekend I revised the entire article, completed it, and have posted it below.

Who would think that something as fundamental as a video camera would ever need a 10GbE connection? Last week the tech who runs a well known wind tunnel in the US contacted me to buy some 10GbE cards. I asked why, and he said because that's the connector on the new cameras he was having installed. Cool. At that point the whole concept of Machine Vision Acceleration (MVA), and the role it might play in an emerging 10GbE market, became interesting. During my research for this piece I learned that the Machine Vision (MV) market was $2B in 2011, and that the dominant standard is GigE Vision. GigE Vision was developed by a consortium of 12 companies and published in 2006; today over 50 companies produce GigE Vision products.

Automated manufacturing lines rely on cameras to capture images for analysis and keep the lines moving. These digital images are then run through a number of recognition algorithms to determine whether the products pass or fail based on certain quality standards: color, shape, orientation, etc. They can also be used to guide industrial robots. Technological advances since the development of GigE Vision have further accelerated adoption within this new market. Today we have 2 megapixel cameras that capture 340+ images/second and actually do some image pre-processing within the camera. These cameras have a built-in FPGA (Field Programmable Gate Array logic chip) and significant memory. This approach enables one to write algorithms to pre-process the images, then load these algorithms into the camera to execute. This advanced processing trims down the data actually transmitted off the camera, while also distributing the processing load across all the cameras. It is not uncommon for a single server to process images from multiple cameras, so if each camera can handle some basic tasks, this can dramatically ease the load on the server. Even with this assistance these cameras still require a built-in 10Gb Ethernet interface for connection back to the servers that process the images in real time in order to properly direct the assembly line, robots, or security screeners.
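The pass/fail logic described above can be sketched in a few lines. This is a purely hypothetical example (synthetic "image" data, made-up brightness tolerances), not any camera vendor's actual algorithm, but it shows the shape of a check a camera's built-in processor might run before deciding whether to flag a part:

```python
# Hypothetical in-camera pass/fail check. The "image" is a flat list of
# 8-bit grayscale pixel values; the part passes if its mean brightness
# falls inside an expected window (tolerances are invented for illustration).

def passes_inspection(pixels, expected_mean=128, tolerance=20):
    """Return True if the image's mean brightness is within tolerance."""
    mean = sum(pixels) / len(pixels)
    return abs(mean - expected_mean) <= tolerance

good_part = [130] * 1000   # uniform, close to the expected brightness
bad_part = [30] * 1000     # far too dark: a missing or wrong component

print(passes_inspection(good_part))  # True
print(passes_inspection(bad_part))   # False
```

Running a cheap test like this in the camera means only the verdict, or the flagged images, need to travel back to the server.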

Recently we covered HD video streaming. These machine vision cameras capture 2MP images at 352 fps (frames per second). This is equivalent to between 6 and 15 uncompressed HD 1080p video streams (HD video runs between 24 and 60 fps). One example I found of H.264 (the most widely used codec) compression of a 1080p stream (1920x1080) required 3Mbps at 0.4 fps, or 7.5Mb per frame. Scaled up to 352 fps, that works out to roughly 2.6Gbps. Now you can see why these cameras have a built-in 10GbE interface. The latest cameras are labeled GigE Vision enabled devices, the emerging standard.
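A quick back-of-the-envelope calculation makes those numbers concrete. The 8-bit monochrome pixel format in the uncompressed case is my assumption (common in machine vision, but not stated above):

```python
# Bandwidth sanity check for a 2 MP camera at 352 fps.

FPS = 352
PIXELS = 2_000_000

# Compressed: the cited H.264 data point is 3 Mbps at 0.4 fps,
# i.e. 7.5 Mb per frame; scale that per-frame cost up to 352 fps.
mb_per_frame = 3 / 0.4                       # 7.5 Mb per frame
compressed_gbps = mb_per_frame * FPS / 1000  # ~2.64 Gbps
print(f"compressed:   {compressed_gbps:.2f} Gbps")

# Uncompressed, assuming 8-bit monochrome pixels (my assumption):
uncompressed_gbps = PIXELS * 8 * FPS / 1e9   # ~5.63 Gbps
print(f"uncompressed: {uncompressed_gbps:.2f} Gbps")
```

Either way, GigE's 1Gbps is nowhere near enough, and 10GbE is the natural fit.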

What networking magic could Myricom bring to MVA to improve performance? This is another market where bypassing the OS on the server side can make a dramatic improvement in overall solution performance. The processor on Myricom's network adapter in the server can detect a GigE Vision packet as it enters the server and place that image directly into the frame buffer of the GigE Vision enabled application running on the server. Normally, without MVA, the OS on the server would need two copies, and all the associated CPU cycles, to pass the data to the application. MVA reduces this to zero copies, with no host CPU involvement. The second bit of magic that MVA brings to the table is intelligent interrupt moderation. With 10GbE running at line rate, interrupt processing can seriously impact server performance. MVA utilizes adaptive interrupt moderation, known as coalescing, to reduce the impact on the server. With this technique MVA only notifies the host CPU when the image is complete and waiting in the user-space frame buffer.
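The payoff of coalescing is easy to see with a little arithmetic. The packet and frame sizes below are my assumptions (9000-byte jumbo-frame payloads, a ~2 MB image), used only to illustrate the idea of one notification per complete image versus one per packet:

```python
# Interrupts per image: per-packet notification vs. coalescing to a
# single notification once the whole frame is in the user-space buffer.
import math

IMAGE_BYTES = 2_000_000   # assumed ~2 MB for a 2 MP, 8-bit image
PACKET_BYTES = 9_000      # assumed jumbo-frame payload
FPS = 352

packets_per_image = math.ceil(IMAGE_BYTES / PACKET_BYTES)  # 223 packets
print(f"per-packet interrupts: {packets_per_image} per image")
print("coalesced interrupts:  1 per image")
print(f"interrupts/sec avoided: {(packets_per_image - 1) * FPS}")
```

Hundreds of interrupts per image, hundreds of images per second: without coalescing, the host would field tens of thousands of interrupts every second per camera.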

The above two reasons, zero memory copies and intelligent interrupt moderation, are MVA's opening and closing illusions, but every good magician has several other tricks to fill out their act. Let's face it, flowers from a wand still gets a laugh if properly timed. Another bit of sleight of hand is how MVA manages load balancing and traffic steering. Normally the standard Ethernet OS driver handles steering traffic to the application; MVA now handles this itself. MVA also supports multiple threads and devices, and the steering between them.
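A minimal sketch of that kind of steering, assuming a hypothetical setup where each camera stream is hashed to a fixed worker queue (none of these names come from Myricom's API):

```python
# Hypothetical flow steering: map each camera's stream ID to a fixed
# worker queue, so one thread always sees every packet of one stream
# and per-stream frame reassembly stays in order.
from queue import Queue

NUM_WORKERS = 4
worker_queues = [Queue() for _ in range(NUM_WORKERS)]

def steer(stream_id, packet):
    """Deliver a packet to the worker queue that owns this stream."""
    worker_queues[hash(stream_id) % NUM_WORKERS].put(packet)

# Packets from the same camera always land on the same queue.
steer("camera-7", b"frame-chunk-0")
steer("camera-7", b"frame-chunk-1")
```

The design point is affinity: keeping a stream pinned to one thread avoids cross-core locking and reordering, which matters when frames arrive as hundreds of packets each.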

So all this magic, what's the real payback, that Ta Da statement? How about a 25% reduction in host CPU utilization! That's what all the above magic produces on an Intel 8-core 2.93GHz system. Thanks to our Hawaiian surfing architect responsible for MVA for many of the details above.

Today Myricom's FastStack MVA is available as a technology preview. We're still learning about this market, and tightening up what we think will eventually make for an awesome product in this space. So the next time you're launching a Machine Vision project, please consider taking a look at the technology preview release of Myricom's FastStack MVA.

1 comment:

  1. Very cool. Nice to see concepts like OS bypass and interrupt coalescing applied to more common things like video capture. QA via video capture certainly is a growing technology. Looking forward to the new software!
