Session 3 - Image & Video Processing
Time: Tuesday, 2019-04-09, 15:30 – 17:00
Room: Wilhelm-Köhler-Saal, S1|03/283
Session chair: Roger Woods
HiFlipVX: an Open Source High-Level Synthesis FPGA Library for Image Processing
Lester Kalms, Ariel Podlubne, Diana Göhringer
The field of computer vision has grown over the past years, as it is now applied in many different domains. Additionally, its applications have become more complex and power demanding. On the one hand, standards and libraries such as OpenCV and OpenVX have been proposed to ease development. On the other hand, FPGAs have proven to be energy efficient for image processing. The trend in recent years has been towards High-Level Synthesis (HLS) to ease their programmability. We present HiFlipVX, a highly optimized, parametrizable and streaming-capable open-source HLS library for FPGAs. Due to its structure, it is straightforward to use and simple to extend with new functions. Furthermore, it is easily portable, as it is based on the OpenVX standard. HiFlipVX also adds features such as auto-vectorization. As the evaluation shows, the library achieves efficient resource utilization and significant scalability, also in comparison to the reference implementation (xfOpenCV).
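To illustrate the kind of parametrizable, vectorized streaming function the abstract describes, the sketch below shows a pixel-wise absolute-difference kernel in plain C++. The function name, template parameters and structure are illustrative assumptions, not the HiFlipVX API; in an HLS flow, pragmas would pipeline the outer loop and unroll the inner vector lanes so that VEC_NUM pixels are processed per clock cycle.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of a streaming, auto-vectorizable pixel-wise function
// (names are illustrative, not the library's API). VEC_NUM pixels are consumed
// per loop iteration; an HLS tool would pipeline the outer loop (II=1) and
// fully unroll the inner lane loop.
template <typename T, int VEC_NUM, int PIXELS>
void img_absdiff(const T *in1, const T *in2, T *out) {
    for (int i = 0; i < PIXELS / VEC_NUM; ++i) {   // one iteration per vector word
        for (int v = 0; v < VEC_NUM; ++v) {        // parallel lanes in hardware
            const int p = i * VEC_NUM + v;
            out[p] = (in1[p] > in2[p]) ? T(in1[p] - in2[p]) : T(in2[p] - in1[p]);
        }
    }
}
```

Compile-time template parameters like these are what make such a library parametrizable: image size, data type and vectorization degree are fixed at synthesis time, so the tool can size the datapath exactly.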
Real-time FPGA implementation of connected component labelling for a 4K video stream
Piotr Ciarach, Marcin Kowalczyk, Dominika Przewlocka, Tomasz Kryjak
We present a hardware implementation in reconfigurable logic of a single-pass connected component labelling (CCL) and connected component analysis (CCA) module. The design supports a video stream in 4-pixels-per-clock format (4 ppc) and allows real-time processing of a 4K/UHD video stream (3840 x 2160 pixels) at 60 frames per second. We discuss the applied modifications and simplifications and their impact on the algorithm's performance. We verified the proposed module in an exemplary application – skin colour area segmentation – on the ZCU 102 evaluation board with a Xilinx Zynq UltraScale+ MPSoC device.
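A software sketch of the single-pass CCL/CCA idea may help: labels are assigned in one raster scan, label equivalences are merged with a union-find structure, and per-component statistics (here just pixel area) are accumulated on the fly, so no second relabelling pass over the image is needed. This is a generic 4-connectivity sketch under assumed names, not the paper's 4 ppc hardware design.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Union-find over provisional labels (path-halving find, merge to smaller root).
struct Equiv {
    std::vector<int> parent;
    int find(int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    }
    void merge(int a, int b) {
        a = find(a); b = find(b);
        if (a != b) parent[std::max(a, b)] = std::min(a, b);
    }
};

// Single raster scan over a binary image: assign labels from the left/up
// neighbours, record equivalences, and accumulate component areas as we go.
// Returns the area of each final (root) component.
std::vector<int> label_areas(const std::vector<uint8_t> &img, int w, int h) {
    Equiv eq;
    std::vector<int> labels(w * h, -1), area;
    int next = 0;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (!img[y * w + x]) continue;
            int left = (x > 0) ? labels[y * w + x - 1] : -1;
            int up   = (y > 0) ? labels[(y - 1) * w + x] : -1;
            int l;
            if (left < 0 && up < 0) {            // new component
                l = next++;
                eq.parent.push_back(l);
                area.push_back(0);
            } else if (left >= 0 && up >= 0) {   // two labels meet: merge
                eq.merge(left, up);
                l = eq.find(left);
            } else {
                l = (left >= 0) ? left : up;
            }
            labels[y * w + x] = l;
            ++area[eq.find(l)];                  // stats updated in the same pass
        }
    }
    // Fold provisional-label areas into their final roots.
    std::vector<int> folded(next, 0);
    for (int i = 0; i < next; ++i) folded[eq.find(i)] += area[i];
    std::vector<int> roots;
    for (int i = 0; i < next; ++i)
        if (eq.find(i) == i) roots.push_back(folded[i]);
    return roots;
}
```

In a 4 ppc hardware version the same bookkeeping must be resolved for four pixels per clock, which is where the simplifications the abstract mentions come in.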
A Scalable FPGA-based Architecture for Depth Estimation in SLAM
Konstantinos Boikos, Christos-Savvas Bouganis
The current state of the art of Simultaneous Localisation and Mapping (SLAM) on low-power embedded systems is limited to sparse localisation and mapping with low-resolution results, in the name of efficiency. Meanwhile, research in this field has produced many advances in information-rich processing and semantic understanding, which come with high computational requirements for real-time processing. This work provides a solution to bridge this gap, in the form of a scalable SLAM-specific architecture for depth estimation in direct semi-dense SLAM. Targeting an off-the-shelf FPGA-SoC, this accelerator architecture achieves a rate of more than 60 mapped frames per second at a resolution of 640x480, with performance on par with a highly-optimised parallel implementation on a high-end desktop CPU at an order of magnitude lower power consumption. Furthermore, the developed architecture is combined with our previous work on tracking to form the first complete accelerator for semi-dense SLAM on FPGAs, establishing the state of the art in the area of embedded low-power systems.