Deep neural networks have emerged as a key approach for solving a wide range of complex problems. To provide high performance and energy efficiency for this class of compute- and memory-intensive applications, many DNN accelerators have been proposed in recent years. In order to systematically evaluate arbitrary DNN accelerator designs, we need an infrastructure that is able to:

1. Flexibly describe a wide range of architectures. Unlike traditional processors, which share a similar architecture and differ mainly in microarchitecture, DNN accelerators vary significantly from one design to another. The traditional approach of describing a design with a fixed set of architectural components therefore becomes infeasible for DNN accelerators. Since describing the architecture is the first step of any architecture evaluation, it is important for the infrastructure to have the flexibility to describe a wide range of DNN architecture designs.

2. Find optimal mappings for a wide range of workloads onto the architecture. Unlike traditional architectures, which have an ISA that allows a workload to be represented as a single compiled program, each DNN accelerator uniquely exposes many configurable hardware settings and requires the designer to decide how to schedule operations and move data for each workload, i.e., to find a mapping for each workload. Since different mappings result in widely varying performance and energy efficiency, and different workloads have different optimal mappings, finding optimal mappings is essential for evaluating a DNN architecture.

3. Accurately predict energy for a range of accelerator designs. Since accelerators are designed for different applications (e.g., sparse DNNs vs.
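To make the first requirement concrete, a flexible architecture description can be expressed as a component tree rather than a fixed set of fields. The sketch below is a hypothetical, minimal example of such a description; all field names (`memory_hierarchy`, `pe_rows`, etc.) are illustrative assumptions, not the schema of any particular tool.

```python
# A hypothetical component-tree description of a toy DNN accelerator.
# The point is that arbitrary hierarchies can be expressed, rather than
# filling in a fixed set of architectural components.
accelerator = {
    "name": "toy_dnn_accelerator",
    "memory_hierarchy": [
        {"level": "DRAM",          "size_KB": None, "word_bits": 16},
        {"level": "global_buffer", "size_KB": 128,  "word_bits": 16},
        {"level": "pe_scratchpad", "size_KB": 0.5,  "word_bits": 16},
    ],
    "compute": {"pe_rows": 16, "pe_cols": 16, "macs_per_pe": 1},
}

def total_pes(arch):
    """Derive the processing-element count from the compute description."""
    c = arch["compute"]
    return c["pe_rows"] * c["pe_cols"]

print(total_pes(accelerator))  # 256
```

A description like this can be extended with new levels or attributes without changing the evaluation code that walks the tree, which is what makes it suitable for architectures that vary significantly from one design to another.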
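The second requirement, mapping search, can be illustrated with a toy example. The sketch below enumerates loop tilings (mappings) of a matrix multiply and scores each with a deliberately simplified data-movement model; the function names and the cost model are illustrative assumptions, not any real tool's API. Even in this tiny space, the best and worst mappings differ by an order of magnitude in DRAM traffic while computing the same result.

```python
# Toy mapping search for C[M][N] += A[M][K] * B[K][N] on a hypothetical
# accelerator. A "mapping" here is a choice of tile sizes (tm, tn, tk).
M, N, K = 64, 64, 64

def dram_accesses(tm, tn, tk):
    """Simplified cost model (an assumption for illustration): each A tile
    is re-fetched once per N-tile, each B tile once per M-tile, and C is
    written once."""
    a = (M * K) * (N // tn)  # A re-fetched for every tile of N
    b = (K * N) * (M // tm)  # B re-fetched for every tile of M
    c = M * N                # C written once
    return a + b + c

# Enumerate a small mapping space: tile sizes that evenly divide the loops.
tiles = [t for t in (4, 8, 16, 32, 64) if M % t == 0]
mappings = [(tm, tn, tk) for tm in tiles for tn in tiles for tk in tiles]

best = min(mappings, key=lambda m: dram_accesses(*m))
worst = max(mappings, key=lambda m: dram_accesses(*m))
print("best:", best, "->", dram_accesses(*best), "DRAM words")
print("worst:", worst, "->", dram_accesses(*worst), "DRAM words")
```

Real mapping spaces are vastly larger (loop orders, spatial/temporal splits, multi-level tiling), which is why automated search over mappings, rather than a single hand-written schedule, is needed for a fair comparison across accelerator designs.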