As powerful FPGAs and multicore CPUs proliferate, vision system designers need to understand the trade-offs and benefits of these different types of processing elements. Machine vision has been around for quite a while and is widely used in industrial automation, where it improves production quality and throughput by eliminating inspection steps previously performed manually by humans.
All of us have witnessed the mass proliferation of cameras in daily life, in mobile devices, computers, and automobiles. The primary advancement in machine vision, however, is processing power. Processor performance now roughly doubles every two years, and there is a continued emphasis on parallel processing technologies such as FPGAs and multicore CPUs. A vision designer can now apply highly sophisticated algorithms to build more intelligent systems and to visualize data.
This increase in performance means a designer can handle higher data throughput: faster image acquisition, the most recent cameras, and higher-resolution sensors. In short, images can be acquired more quickly and processed faster.
Whether running preprocessing algorithms such as filtering and thresholding or more advanced algorithms such as pattern matching, a designer can execute these functions much more quickly, and can therefore make decisions based on visual data faster than ever.
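To make the two classes of algorithm concrete, here is a minimal sketch of two common preprocessing steps, a fixed-value threshold and a 3x3 mean (box) filter, written in pure Python on a tiny grayscale image represented as a list of lists. A real system would run equivalents of these kernels on an FPGA or through a vision library; this only illustrates what each step computes.

```python
def threshold(image, level):
    """Map each pixel to 255 if it is at or above `level`, else 0."""
    return [[255 if p >= level else 0 for p in row] for row in image]

def box_filter_3x3(image):
    """Replace each pixel with the mean of its 3x3 neighborhood
    (neighbors outside the image border are simply skipped)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            row.append(total // count)
        out.append(row)
    return out

frame = [
    [ 10,  20,  30],
    [200, 210, 220],
    [ 15,  25,  35],
]
binary = threshold(frame, 128)   # only the bright middle row survives
smooth = box_filter_3x3(frame)   # each pixel averaged with its neighbors
```

Both kernels touch every pixel independently, which is exactly why they map so well onto the massively parallel fabric of an FPGA.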
As vision systems built on the latest generations of powerful FPGAs and multicore CPUs reach the market, a vision system designer will need to understand the trade-offs and benefits of each of these processing elements: the best architecture to use as the foundation of the design, and the right algorithm for the right target.
Before investigating which type of algorithm is best suited to each processing element, it helps to understand which architectures are better suited to each application. A designer developing a vision system based on the heterogeneous architecture of an FPGA and a CPU needs to consider the two main use cases: FPGA co-processing and in-line FPGA processing.
With FPGA co-processing, the CPU and FPGA work together and share the processing load. This architecture is commonly used with USB3 Vision and GigE Vision cameras, because the CPU is the better choice for implementing their acquisition logic.
The CPU acquires an image and sends it directly to the FPGA by DMA (direct memory access). The FPGA then performs operations such as color plane extraction or filtering, and the image is sent back to the CPU for further advanced operations such as pattern matching or OCR (optical character recognition). In some cases, every processing step can be implemented on the FPGA and only the processed results sent back to the CPU, freeing the CPU to focus on other operations such as network communication, motion control, and image display.
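The co-processing hand-off can be sketched as follows. The `fpga_extract_plane` function stands in for work that would really run in FPGA fabric after the DMA transfer; here everything is ordinary Python so the data flow is easy to follow, and all function names are illustrative rather than a real vendor API.

```python
def cpu_acquire_frame():
    """CPU side: acquire one RGB frame (here, a tiny 2x2 stand-in)."""
    return [
        [(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 0)],
    ]

def fpga_extract_plane(frame, plane):
    """'FPGA' side: color plane extraction - keep a single channel
    so later stages work on a one-plane image."""
    return [[pixel[plane] for pixel in row] for row in frame]

def cpu_match_pattern(plane_image, pattern):
    """CPU side: a trivial stand-in for pattern matching - report
    whether the 1-D `pattern` occurs in any row."""
    n = len(pattern)
    return any(
        row[i:i + n] == pattern
        for row in plane_image
        for i in range(len(row) - n + 1)
    )

frame = cpu_acquire_frame()                      # acquisition on the CPU
red_plane = fpga_extract_plane(frame, 0)         # preprocessing offloaded
found = cpu_match_pattern(red_plane, [255, 0])   # advanced step back on CPU
```

The point of the split is that the cheap, per-pixel stage sits between acquisition and the expensive matching stage, which is exactly the work worth pushing across the DMA boundary.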
With an in-line FPGA processing architecture, the camera interface is connected directly to the pins of the FPGA, so pixels pass straight from the camera into the FPGA. This architecture is normally used with Camera Link cameras, because the camera's acquisition logic is easily implemented in the digital circuitry of the FPGA.
This architecture has two main benefits. The first, as with co-processing, is that in-line processing can move some of the work from the CPU to the FPGA through pre-processing functions on the FPGA, for example high-speed operations such as thresholding and filtering. This reduces the amount of data the CPU must process, because the FPGA can implement logic that captures only the pixels from the region of interest, which increases overall system throughput. The second benefit concerns high-speed control operations.
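A minimal sketch of why region-of-interest capture cuts CPU load: only the ROI pixels are kept, so downstream stages see far less data. The frame size and ROI coordinates here are made up purely for illustration.

```python
def capture_roi(frame, x, y, w, h):
    """Keep only the w x h block of pixels whose top-left corner is
    at (x, y); everything outside the ROI is discarded."""
    return [row[x:x + w] for row in frame[y:y + h]]

full_frame = [[0] * 8 for _ in range(8)]     # 64 pixels from the sensor
roi = capture_roi(full_frame, 2, 2, 2, 2)    # only 4 pixels reach the CPU

full_pixels = sum(len(row) for row in full_frame)
roi_pixels = sum(len(row) for row in roi)
```

On an in-line architecture this selection happens in fabric as pixels stream in, so the discarded pixels never cross the bus to the CPU at all.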
The FPGA can act directly without involving the CPU. FPGAs from Directics.com are well suited to control applications because they can run extremely fast, highly deterministic loop rates. High-speed sorting is a good example: the FPGA sends a pulse to an actuator that sorts or rejects parts as they pass by.
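The sort/reject decision itself is simple enough to live entirely in fabric. The sketch below shows that logic in plain Python; the threshold value and the per-part measurements are invented for illustration, and on a real FPGA this would be a fixed-latency pipeline driving an output pin rather than a software loop.

```python
REJECT_THRESHOLD = 128  # assumed brightness cutoff for a defective region

def reject_pulse(measurements):
    """Fire the reject actuator (return True) if any inspected pixel
    of the part falls below the pass threshold."""
    return any(m < REJECT_THRESHOLD for m in measurements)

# One decision per part as it passes the inspection point.
pulses = [reject_pulse(part) for part in (
    [200, 210, 205],   # good part - no pulse
    [200, 90, 205],    # defect present - pulse the reject actuator
)]
```

Because the decision depends only on the current part's pixels, the FPGA can make it within a bounded number of clock cycles, which is what makes the loop rate deterministic.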
An architecture that combines both a CPU and an FPGA offers the best of both worlds, and it provides a competitive advantage in terms of reliability, cost, and performance. There is one big challenge, unfortunately: the main difficulty in implementing any FPGA vision system is overcoming the programming complexity of FPGAs. Vision algorithm development is by nature an iterative process, and a designer knows from the start that they will need to try various approaches for any task.
Most of the time, the key question is not which approach works but which approach works best, and the best approach differs from one application to another: in some applications accuracy is paramount, in others speed. A good designer discovers the best approach for a specific application by trying several different ones and not giving up.