A compute-in-memory neural-network inference accelerator based on resistive random-access memory simultaneously improves energy efficiency, flexibility and accuracy compared with existing hardware by co-optimizing across all levels of the design hierarchy.
- Weier Wan
- Rajkumar Kubendran
- Gert Cauwenberghs