The Next Wave of AI Applications Requires Edge Computing
Currently, 95% of AI workloads operate in central data centers, with only 5% at the edge. By 2028, this distribution is predicted to equalize, with roughly 50% of AI tasks occurring at the edge. This shift reflects the increasing affordability of Edge AI technologies, along with new possibilities for real-time AI use cases enabled by low-latency processing.
While cloud-based deployments are effective for large-scale centralized processing, they fall short in meeting the unique demands of many edge applications. A primary reason is higher latency compared to edge computing, which makes it challenging to support workloads that require real-time processing. Additionally, cloud solutions pose data privacy and regulatory issues because they transmit and store data on remote servers, which can increase the risk of data breaches and make compliance with strict data protection regulations more challenging. Low power consumption is also crucial for edge applications such as autonomous vehicles and remote video security systems, which must operate continuously and cannot afford frequent battery replacements. Efficient power usage extends device life and ensures reliability in challenging environments.
Discrete GPU cards, which have traditionally been used for AI workloads due to their high computational performance, are suboptimal for many edge applications because of their large form factor, cost, and high power requirements. Edge AI deployments often occur in diverse and challenging environments and need a special class of hardware that is compact, efficient, and specifically designed for AI workloads.
The Combination of Intel CPUs and Hailo NPUs Enables Low-cost Edge AI Processing that Outperforms GPUs Per Dollar and Per Watt
The rise of purpose-built AI accelerators reflects a shift towards specialized hardware optimized for AI workloads. These accelerators offer low power consumption compared to discrete GPU cards, making them ideal for deployment in devices at the edge. Hailo has developed top-performing AI accelerators that are purpose-built for edge computing.
Hailo’s NPU AI accelerators offer exceptional performance and energy efficiency in a variety of form factors, making them suitable for a diverse range of applications. Engineered for durability and requiring only passive cooling, Hailo’s NPUs operate reliably in tough industrial conditions, accommodating varying temperatures and physical demands.
Hailo’s NPU works as a co-processor with Intel CPUs. In this arrangement, the Intel CPU handles general-purpose computing tasks, orchestrates system operations, and manages overall application logic, while delegating compute-intensive AI-specific tasks to the Hailo NPU. This combination enables highly efficient compute at a fraction of the cost of a discrete GPU. In this co-processor arrangement, irrespective of which Intel CPU serves as the system host, Hailo’s NPU outperforms leading discrete GPU cards in AI performance: Hailo-8 M.2 and PCIe cards deliver up to 2x better cost efficiency (FPS/$) and 5x better performance per watt (FPS/W) than NVIDIA GPU PCIe cards.
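The division of labor described above can be sketched in a few lines of Python. This is an illustrative stub, not the Hailo runtime API: the NPUDevice class and its infer method are hypothetical stand-ins for the accelerator, and in a real deployment the inference call would go through Hailo's own software stack.

```python
# Illustrative sketch of the CPU-host / NPU-coprocessor split.
# NPUDevice is a hypothetical stub, not the real HailoRT API.

class NPUDevice:
    """Stand-in for an AI accelerator such as a Hailo NPU."""
    def infer(self, pixels):
        # A real NPU would execute a compiled neural network here;
        # the stub returns canned detection results for illustration.
        return [{"label": "person", "score": 0.93},
                {"label": "car", "score": 0.31}]

def process_frame(raw_pixels, npu):
    # CPU: general-purpose preprocessing (normalize 0-255 bytes to floats).
    normalized = [p / 255.0 for p in raw_pixels]
    # NPU: the compute-intensive inference step is offloaded to the accelerator.
    detections = npu.infer(normalized)
    # CPU: application logic filters and acts on the accelerator's output.
    return [d for d in detections if d["score"] >= 0.5]

hits = process_frame([0, 128, 255], NPUDevice())
print(hits)  # only the high-confidence detection survives the filter
```

The key point the sketch captures is that the host CPU never runs the neural network itself; it prepares inputs, dispatches them, and applies business logic to the results, which is what keeps overall system cost and power low.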
Better Cost Efficiency (FPS/$)
Performance Per Watt (FPS/W)
AI at the Edge is Unlocking Possibilities Across Industries
Safety + Security
Industrial Automation
Healthcare + Medical
Automotive
Computing
Retail
Agriculture
Plug and Play AI Solutions from an Ecosystem of Industry Leaders
Velasea and Intel are driving technology transformation in the industry by collaborating with Hailo to build a robust ecosystem that enables video management systems (VMS) vendors to easily integrate AI into their edge solutions. Recently, the two companies orchestrated the development of a range of end-to-end solutions for AI-enabled video management systems, including hardware and software integration, in cooperation with several partners.
System Hardware: Velasea
CPU: Intel
NPU: Hailo
Video Analytics: CVEDIA
How it Works
CVEDIA software seamlessly integrates with Hailo and Intel hardware to meet customers’ specific application needs. For its AI models, including object detection and segmentation, CVEDIA leverages the OpenVINO toolkit to optimize performance across available system hardware, according to customer requirements. With CVEDIA’s software, customers gain rich insights and unlock real-time decision-making, empowering businesses to improve overall efficiency and productivity across operational settings.
As an OEM appliance builder and value-added reseller, Velasea customizes white-label solutions for customers across industries. Through its partnership with Hailo, Intel, and CVEDIA, Velasea helps users deploy new systems and even upgrade existing legacy systems with cutting-edge AI capabilities. “The Hailo-8 AI accelerator combines VMS and AI functionality on the same Network Video Recorder (NVR). This significantly reduces hardware costs — by up to 70% — by eliminating the need for a dedicated GPU server, as in traditional deployments. I believe this streamlined design will accelerate the adoption of AI in the security market by making AI solutions more accessible,” says Tom Larson, President of Velasea.
This whitepaper by Intel Corporation originally appeared on intel.com