CNN Inference and Training with Orders-of-Magnitude Less Energy Costs: Some (Almost) Free Lunch to Enjoy
Convolutional neural networks (CNNs) are increasingly deployed on resource-constrained edge devices, and many efforts have been made toward efficient CNN inference. While recognizing the importance of efficient inference, we are curious about a question beyond it: how can we enable more energy-efficient, on-device training of CNNs? This talk will introduce our recent efforts to address this question. First, building on our past work on efficient CNN inference, we propose to reduce the energy cost of CNN training algorithms by dropping unnecessary computations at several levels (data-level, layer-level, and bit-level). We achieve aggressive energy savings of 80%–90% with negligible accuracy loss, both when training state-of-the-art ResNets on CIFAR datasets from scratch and when fine-tuning or adapting pre-trained models. Next, from a hardware-algorithm co-design viewpoint, we propose to trade higher-cost memory storage/access for lower-cost computation, an approach applicable to improving the energy efficiency of both training and inference. This is achieved by enforcing a special weight re-parameterization through structured matrix decomposition, and it is shown to further boost energy efficiency as measured on real devices.
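To give a flavor of the data-level dropping idea, the minimal sketch below randomly skips whole mini-batches during a training epoch, so only a fraction of batches incur forward/backward computation. This is an illustrative assumption of how data-level skipping might be wired up, not the speaker's actual algorithm; the function names (`train_with_minibatch_dropping`, `train_step`) and the fixed keep probability are hypothetical.

```python
import random


def train_with_minibatch_dropping(batches, train_step, keep_prob=0.5, seed=0):
    """Data-level computation dropping (illustrative sketch only).

    Each mini-batch is kept with probability `keep_prob`; skipped batches
    cost no forward/backward computation. A seeded RNG makes runs
    reproducible. `train_step` is a hypothetical callback that performs
    one optimization step on a kept batch.
    """
    rng = random.Random(seed)
    processed = 0
    for batch in batches:
        if rng.random() < keep_prob:
            train_step(batch)  # only kept batches pay the compute cost
            processed += 1
    return processed
```

With `keep_prob=0.5`, roughly half the batches are processed, halving the per-epoch compute in this toy model; in practice such skipping would be combined with the layer-level and bit-level savings mentioned in the abstract.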
Prof. Zhangyang Wang
Assistant Professor, Texas A&M University. November 22, 2019, 11:45 AM, EB2 1230.
Dr. Zhangyang (Atlas) Wang has been an Assistant Professor of Computer Science and Engineering at Texas A&M University since 2017. From 2012 to 2016, he was a Ph.D. student in the Electrical and Computer Engineering (ECE) Department at the University of Illinois at Urbana-Champaign (UIUC), working with Professor Thomas S. Huang. He was a research intern with Microsoft Research (2015), Adobe Research (2014), and the US Army Research Lab (2013). Dr. Wang is broadly interested in machine learning, computer vision, optimization, and their interdisciplinary applications. His latest interests focus on addressing the automation, robustness, efficiency, and privacy issues of deep learning. He has published over 80 papers (NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, etc.), 2 books, and 1 chapter; has been granted 3 patents; and has received over 20 research awards and scholarships (including winning challenges at ICCV'19, CVPR'18, and ECCV'18). Dr. Wang regularly serves as a tutorial speaker, workshop organizer, guest editor, area chair, session chair, and TPC member at leading conferences and journals. His research has been extensively supported by federal, industrial, and university grants. More can be found at https://www.atlaswang.com/
The Department of Electrical and Computer Engineering hosts a regularly scheduled seminar series featuring preeminent and leading researchers in the US and around the world, to help promote North Carolina as a center of innovation and knowledge and to safeguard its place as a leader in research.