DEEP ON SILICON
Where algorithms meet chip architectures
Oh no, no… we don’t do chip layout or touch nanoscale transistors. Our partners and clients know that territory far better than we do.
Our goal is to figure out what would limit the implementation of a particular AI algorithm: computation, communication, or (power) consumption?
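That question — computation or communication? — is the one a roofline-style analysis answers. Here is a minimal sketch of the idea; the chip numbers and the function name are hypothetical, chosen only for illustration (power is modeled crudely, as a cap already folded into peak FLOP/s):

```python
def classify_kernel(flops, bytes_moved, peak_flops, mem_bandwidth):
    """Roofline-style check: is a kernel bound by computation or by
    communication (memory traffic)? All inputs are per-kernel totals."""
    intensity = flops / bytes_moved        # FLOPs per byte of memory traffic
    ridge = peak_flops / mem_bandwidth     # intensity where the two roofs meet
    attainable = min(peak_flops, intensity * mem_bandwidth)
    bound = "computation" if intensity >= ridge else "communication"
    return bound, attainable

# Hypothetical chip: 10 TFLOP/s peak, 500 GB/s memory bandwidth.
bound, perf = classify_kernel(flops=2e9, bytes_moved=4e8,
                              peak_flops=10e12, mem_bandwidth=500e9)
print(bound, perf)  # intensity 5 < ridge 20, so the bandwidth roof applies
```

A kernel whose arithmetic intensity sits left of the ridge point will never reach peak compute no matter how the cores are scheduled, which is precisely the kind of limit we try to expose before an algorithm ever reaches silicon.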
We think of ourselves as architecture-friendly algorithm designers, highly tuned to the challenges of real-time execution of AI algorithms. Some of us were around when fractal compression went nowhere because it took a couple of hours to process a couple of minutes of video.
We work at the highest abstraction level to find the right architecture for the right algorithm. Our job is a balancing act. It’s codesign.
Codesign means developing computational algorithms for specific processor architectures, and shaping architectures for specific algorithms. Our goal is to end the isolation between algorithm design and architecture design. The results serve as blueprints for processor architects and chip developers.
The abstraction level we put to use is an algorithmic framework for understanding how AI algorithms behave on emerging processors. The framework aims to quantify how large-scale changes to an architecture would affect an AI algorithm's accuracy, latency, scalability, chip size, and power consumption. And in reverse, given an architecture, it helps figure out which AI algorithms might be the best match.
As algorithm designers, the panoply of architectural parameters we address includes: the number of cores per processor; the clock frequency of each core; aggregate on-chip cache capacity; memory bandwidth; on-chip network bandwidth; and off-chip network link bandwidth.