  • Accelerated Chip Engine
  • Highly Scalable and Versatile
  • 10ms Latency
  • 90% Theoretical Peak Performance

A highly scalable Custom AI Streaming Accelerator (CAISA) architecture, compatible with a wide range of deep learning algorithms

High computing performance with a high energy-efficiency ratio and low latency

Brings computing capability to edge devices while maximizing the use of hardware resources

CAISA

1. Underlying Parameterization
2. Expandable Multi-Level Parallelism

  • Data Parallelism
  • Filter Parallelism
  • Channel Parallelism
  • Layer Parallelism
  • Accelerator Engine Parallelism
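As an illustration only (not the actual CAISA datapath), the first three parallelism levels can be mapped onto the loop nest of a naive convolution; the `conv2d` function and its loop-to-parallelism mapping below are hypothetical:

```python
import numpy as np

def conv2d(inputs, filters):
    """Naive 2D convolution (valid padding, stride 1).

    Each loop level corresponds to a dimension a streaming
    accelerator could unroll in hardware (illustrative mapping,
    not the actual CAISA design):
      b -> data parallelism (independent input samples)
      f -> filter parallelism (independent output channels)
      c -> channel parallelism (input-channel partial sums)
    Layer and accelerator-engine parallelism operate above this
    kernel: whole layers / whole engines run concurrently.
    """
    B, C, H, W = inputs.shape
    F, _, K, _ = filters.shape
    out = np.zeros((B, F, H - K + 1, W - K + 1))
    for b in range(B):          # data parallelism
        for f in range(F):      # filter parallelism
            for c in range(C):  # channel parallelism
                for y in range(H - K + 1):
                    for x in range(W - K + 1):
                        out[b, f, y, x] += np.sum(
                            inputs[b, c, y:y + K, x:x + K] * filters[f, c]
                        )
    return out
```

In hardware, each such loop can be fully or partially unrolled into parallel compute units, so the degree of parallelism at every level becomes a design-time parameter.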

3. 3D Convolutional Neural Network Architecture with Scalable Dimensions

  • Expanded Memory Architecture to Support High-Dimensional Data Parallelism
  • Extended Data Accumulation Unit to Support Parallel Computing Core Accumulation
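To sketch what an extended data accumulation unit does, the hypothetical `adder_tree` function below combines the partial sums produced by parallel computing cores; the pairwise (adder-tree) structure is a common hardware pattern and an assumption here, not a statement of the actual CAISA datapath:

```python
import numpy as np

def adder_tree(partials):
    """Combine per-core partial sums into the final result.

    A hardware accumulator typically reduces pairwise (an adder
    tree), finishing in log2(N) stages rather than N-1 sequential
    additions. Names and structure are illustrative assumptions.
    """
    vals = list(partials)
    while len(vals) > 1:
        # One adder-tree stage: sum neighbouring pairs in parallel.
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

# Each of 8 channel-parallel cores contributes one partial output map.
partials = [np.full((2, 2), float(i)) for i in range(8)]
result = adder_tree(partials)  # elementwise total of all partial maps
```

Because the reduction is associative, the same unit can accumulate partial sums regardless of how many parallel cores feed it, which is what makes the dimension count scalable.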

Chips Based on CAISA Architecture

  • Nebula Accelerator
  • Rainman Accelerator

Application Scenarios

The CAISA architecture offers high-performance computing for AIoT applications.

  • Image recognition
  • Semantic segmentation
  • Time series analysis
  • Speech and semantic understanding