Companies Developing CIM/PIM Architectures
Compute-in-Memory (CIM) and Processing-in-Memory (PIM) are widely seen as the true architectural leap beyond GPUs, because they attack the single biggest bottleneck in AI: moving data from memory to compute.
Below is a clear, executive-level breakdown of what CIM/PIM really are, why they matter, and where the technology stands today.
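To make the bottleneck concrete, here is a back-of-envelope comparison. The per-operation energy figures are rough, order-of-magnitude numbers from the architecture literature (Horowitz, ISSCC 2014), not measurements of any specific chip:

```python
# Rough energy model: why data movement, not arithmetic, dominates AI workloads.
# Figures are order-of-magnitude estimates, not specific to any real device.
PJ_PER_MAC = 4.0        # ~pJ for a 32-bit floating-point multiply-add
PJ_PER_DRAM_READ = 640  # ~pJ to fetch a 32-bit word from off-chip DRAM

# A GEMV that streams its weight matrix from DRAM does one DRAM read per MAC.
n_macs = 1_000_000
compute_energy = n_macs * PJ_PER_MAC
movement_energy = n_macs * PJ_PER_DRAM_READ

print(f"compute : {compute_energy / 1e6:.1f} uJ")
print(f"movement: {movement_energy / 1e6:.1f} uJ")
print(f"data movement costs ~{movement_energy / compute_energy:.0f}x the arithmetic")
```

Even this crude model puts data movement at roughly 160x the cost of the math itself, which is why architectures that avoid the DRAM round trip can win on energy.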
CIM — Compute Inside Memory Cells
Compute is done within the memory array itself (often analog).
Examples:
- An SRAM/DRAM/Flash cell performs a partial MAC operation
- Ohm's law + Kirchhoff's law are used to perform vector-matrix operations
This is extremely energy-efficient and parallel.
CIM = full fusion of compute + memory.
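A minimal sketch of that physics, assuming an idealized, noise-free crossbar: weights are stored as cell conductances, inputs arrive as wordline voltages, each cell contributes a current I = G·V (Ohm's law), and the currents summing on a shared bitline (Kirchhoff's current law) deliver a full dot product in one step. All values below are illustrative, not any vendor's design:

```python
import numpy as np

# Idealized analog crossbar: one memory cell per (row, col), with the weight
# stored as a conductance in siemens. Inputs are applied as wordline voltages.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductance matrix (weights), S
V = rng.uniform(0.0, 0.2, size=4)          # input voltages, V

# Ohm's law per cell:            I_cell   = G[i, j] * V[i]
# Kirchhoff's law per bitline j: I_out[j] = sum_i G[i, j] * V[i]
I_out = V @ G                              # whole matrix-vector product at once

# The same result computed digitally, MAC by MAC, for comparison:
I_check = np.array([sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])
assert np.allclose(I_out, I_check)
print("bitline currents (A):", I_out)
```

Because every row-column pair contributes its current simultaneously, the entire matrix-vector product settles in roughly one analog time constant; that is where CIM's parallelism and efficiency come from.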
PIM — Compute Near Memory
Compute is done next to memory using small accelerator blocks.
Examples:
- DRAM with integrated ALUs
- HBM stacks with a logic layer (HBM-PIM)
- Samsung's AXDIMM and HBM-PIM
- Near-memory FPGA tiles
PIM = memory with local compute to reduce data movement.
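To see why local compute pays off, consider offloading a simple reduction. In the toy model below, the host path streams every byte across the memory bus, while a hypothetical per-bank ALU (the numbers and interface are made up for illustration) reduces its shard locally and ships back one partial sum per bank:

```python
# Toy traffic model: host-side reduction vs. near-memory (PIM) reduction.
# All names and numbers are illustrative, not any real DIMM's interface.
N_ELEMENTS = 64 * 1024 * 1024   # 64M float32 values spread across banks
BYTES_PER_ELEM = 4
N_BANKS = 16

# Host path: every element crosses the memory bus to the CPU/GPU.
host_bus_bytes = N_ELEMENTS * BYTES_PER_ELEM

# PIM path: each bank's ALU sums its shard locally, then sends one partial
# sum; the host only adds N_BANKS values together.
pim_bus_bytes = N_BANKS * BYTES_PER_ELEM

print(f"host traffic: {host_bus_bytes / 2**20:.0f} MiB")
print(f"PIM traffic : {pim_bus_bytes} bytes")
print(f"bus traffic reduced by ~{host_bus_bytes / pim_bus_bytes:,.0f}x")
```

Real workloads see smaller gains because not every operator reduces this cleanly, but the direction is the same: compute follows the data instead of the other way around.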
Commercial / Near-Commercial Leaders
Samsung
- AXDIMM: PIM on DDR4/5 modules
- HBM-PIM: stacked DRAM with a logic layer
Best positioned for mainstream adoption.
SK Hynix
- HBM3E PIM prototypes
- Dataflow-style logic layers
Intel / Micron
- Exploring near-data processing (NDP)
- Not as far along as Samsung
Startups (Most Innovative)
Mythic AI
- Analog compute-in-memory (flash-based) for edge inference
- Excellent efficiency, but analog noise makes accuracy a challenge (see the sketch after this list)
Rain AI
- Next-gen analog CIM tile arrays
- Model weights stored directly in analog arrays
- Very promising for low-power LLM inference
MemryX
- Near-memory compute for edge AI
- Simplified dataflow architecture
Cerebras (partially CIM-like)
- Wafer-scale engine with distributed local memory
- Not CIM, but memory-centric and post-GPU
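The accuracy challenge noted for Mythic AI above comes from physics: stored conductances drift, bitline currents are noisy, and an ADC quantizes the result. A minimal sketch of that effect, with entirely made-up noise magnitudes:

```python
import numpy as np

# Toy model of analog-CIM error sources: weight (conductance) noise plus
# output quantization by an ADC. Noise levels are illustrative only.
rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)

exact = W @ x                                            # ideal digital result

W_noisy = W * (1 + 0.02 * rng.standard_normal(W.shape))  # ~2% conductance error
analog = W_noisy @ x

# 8-bit ADC on the bitline outputs
lo, hi = analog.min(), analog.max()
codes = np.round((analog - lo) / (hi - lo) * 255)
quantized = codes / 255 * (hi - lo) + lo

rel_err = np.linalg.norm(quantized - exact) / np.linalg.norm(exact)
print(f"relative error vs digital: {rel_err:.1%}")
```

Even a few percent of conductance error shows up directly in the output, which is why analog CIM vendors invest heavily in calibration and noise-aware retraining.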
