|Organization|University of Minnesota, Twin Cities|
|Location|136 Monteith Research Center (MRC)|
|Start Date|May 8, 2017 10:15 AM|
|End Date|May 8, 2017 11:45 AM|
For many important and emerging applications, including the internet of things, smart sensors, health monitors, and wearable electronics, energy efficiency is of utmost importance. These applications rely on low-power microcontrollers and microprocessors that are already the most widely used type of processor in production today and are projected to increase their market dominance in the near future. In the low-power embedded systems used by these applications, energy efficiency is the primary factor that determines critical system characteristics such as size, weight, cost, reliability, and lifetime. Although application-specific integrated circuits (ASICs) offer higher energy efficiency, low-power general-purpose processors (GPPs) are the preferred solution for many such applications, due to the evolving nature of these applications and the high costs of custom IC design. Unfortunately, conventional power reduction techniques for GPPs reduce power by sacrificing performance. As such, their impact is limited to the point where performance degradation becomes unacceptable. This talk describes novel approaches to application-specific power management that push the limits of power reduction for GPPs without reducing performance. These power management techniques are based on novel hardware-software co-analysis that can identify the maximal set of hardware resources that an application can use during execution, irrespective of application inputs. Any power expended by resources that an application can never use can be eliminated, bringing the power consumption of a GPP running the application closer to that of an ASIC. Since resources that an application does not use do not contribute to application performance, power is reduced with no performance cost.
New opportunities for application-specific power management enabled by hardware-software co-analysis include application-specific timing analysis, power gating, peak power management, processor customization, and thermal management.
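The core idea of the co-analysis above can be sketched in miniature: traverse an application's control-flow graph, take the union of hardware resources that any reachable basic block could exercise (regardless of which branch directions inputs select), and treat everything else as safe to power-gate. This is a hypothetical illustration, not the talk's actual analysis; the resource names and the toy CFG are invented for the example.

```python
# Illustrative sketch of input-independent hardware-software co-analysis.
# All names (ALL_UNITS, CFG, block labels) are hypothetical.

ALL_UNITS = {"alu", "mul", "div", "fpu", "load_store", "branch"}

# Toy CFG: block -> (successor blocks, functional units its instructions use)
CFG = {
    "entry": (["loop"],         {"alu", "load_store"}),
    "loop":  (["body", "exit"], {"branch", "alu"}),
    "body":  (["loop"],         {"mul", "load_store"}),
    "exit":  ([],               {"alu"}),
}

def reachable_units(cfg, entry="entry"):
    """Union of units over all blocks reachable from entry.

    Because every successor is explored, the result holds for any
    input-dependent path the application might actually take.
    """
    seen, used, stack = set(), set(), [entry]
    while stack:
        block = stack.pop()
        if block in seen:
            continue
        seen.add(block)
        succs, units = cfg[block]
        used |= units
        stack.extend(succs)
    return used

used = reachable_units(CFG)
gateable = ALL_UNITS - used   # units this application can never exercise
print(sorted(gateable))       # prints ['div', 'fpu']
```

In this toy example the divider and floating-point unit never appear in any reachable block, so their power can be eliminated with no performance cost, mirroring the guarantee described in the abstract.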
John Sartori received a B.S. degree in electrical engineering, computer science, and mathematics from the University of North Dakota, Grand Forks, and M.S. and Ph.D. degrees in electrical and computer engineering from the University of Illinois at Urbana-Champaign. He is currently an assistant professor of Electrical and Computer Engineering at the University of Minnesota, Twin Cities. His research interests include computer architecture, computer-aided design, embedded systems, and algorithm development, with a focus on energy-efficient computing, high-performance computing, stochastic computing, and application-aware design and architecture methodologies. John's research has been recognized with best paper awards and an NSF CAREER award and has been the subject of several keynote talks and invited plenary lectures. His work has been chosen as a cover feature by popular media outlets such as BBC News and HPCWire, and has also been covered extensively by scientific press outlets such as IEEE Spectrum, IEEE Micro, and Engineering and Technology Magazine. John is also passionate about teaching and has developed popular courses on scalable high-performance computing and the internet of things at the University of Minnesota. Outside of his academic endeavors, John enjoys outdoor activities in the balmy Minnesota weather, playing music, and studying and discussing philosophy.