Talk Title 1: A Cross-Layer Perspective for Energy-Efficient Processing---From beyond-CMOS Devices to Deep Learning
Speaker: Prof. X. Sharon Hu
Affiliation: University of Notre Dame, USA
X. Sharon Hu is a professor in the Department of Computer Science and Engineering at the University of Notre Dame, USA. Her research interests include low-power system design, circuit and architecture design with emerging technologies, hardware/software co-design, and real-time embedded systems. She has published more than 300 papers in these areas. Her recognitions include Best Paper Awards from the Design Automation Conference (DAC) and from the International Symposium on Low Power Electronics and Design, as well as the NSF CAREER Award. She has participated in several large industry- and government-sponsored center-level projects and is a theme leader in an NSF/SRC E2CDA project. She is the General Chair of DAC 2018 and was the TPC Chair of DAC 2015. She has also served as an Associate Editor for IEEE Transactions on VLSI, ACM Transactions on Design Automation of Electronic Systems, and other journals, and is an Associate Editor of ACM Transactions on Cyber-Physical Systems. X. Sharon Hu is a Fellow of the IEEE.
As Moore's Law-based device scaling and the accompanying performance scaling trends slow down, there is increasing interest in new technologies and computational models for faster and more energy-efficient information processing. Meanwhile, there is growing evidence that, with respect to traditional Boolean circuits and von Neumann processors, it will be challenging for beyond-CMOS devices to compete with CMOS technology. Exploiting the unique characteristics of emerging devices, especially in the context of alternative circuit and architectural paradigms, has the potential to offer orders-of-magnitude improvements in power, performance, and capability. To take full advantage of beyond-CMOS devices, cross-layer efforts spanning devices, circuits, architectures, and algorithms are indispensable.
In this context, this talk will examine energy-efficient neural network accelerators for embedded applications. Several deep neural network accelerator designs based on cross-layer efforts spanning alternative device technologies, circuit styles, and architectures will be highlighted, and application-level benchmarking studies will be presented. The discussions will demonstrate that cross-layer efforts can indeed yield orders-of-magnitude gains toward extreme-scale energy-efficient processing.
Talk Title 2: Intelligent Computing, Big Data, and Modern Medicine and Healthcare
Speaker: Prof. Danny Ziyi Chen
Affiliation: University of Notre Dame, USA
Dr. Danny Ziyi Chen (陈子仪) entered Wuhan University in 1978. He received B.S. degrees in Computer Science and in Mathematics from the University of San Francisco, California, USA in 1985, and M.S. and Ph.D. degrees in Computer Science from Purdue University, West Lafayette, Indiana, USA in 1988 and 1992, respectively. He has been on the faculty of the Department of Computer Science and Engineering at the University of Notre Dame, Indiana, USA since 1992, and is currently a Professor with tenure. Dr. Chen's main research interests are in computational biomedicine, biomedical imaging, computational geometry, algorithms and data structures, machine learning, data mining, and VLSI. He has published over 130 journal papers and over 210 peer-reviewed conference papers in these areas, and holds 5 US patents for technology development in computer science and engineering and biomedical applications. He received the CAREER Award of the US National Science Foundation (NSF) in 1996, a Laureate Award in the 2011 Computerworld Honors Program for developing “Arc-Modulated Radiation Therapy” (a new radiation cancer treatment approach), and the 2017 PNAS Cozzarelli Prize of the US National Academy of Sciences. He is a Fellow of the IEEE and a Distinguished Scientist of the ACM.
Computer technology plays a crucial role in modern medicine, healthcare, and the life sciences, especially in medical imaging, human genome study, clinical diagnosis and prognosis, treatment planning and optimization, treatment response evaluation and monitoring, and medical data management and analysis. As computer technology rapidly evolves, computer science solutions will inevitably become an integral part of modern medicine and healthcare. Computational research and applications on modeling, formulating, solving, and analyzing core problems in medicine and healthcare are not only critical but indeed indispensable.
Recently emerging deep learning (DL) techniques have achieved remarkably high-quality results for many computer vision tasks, such as image classification, object detection, and semantic segmentation, largely outperforming traditional image processing methods. In this talk, we first discuss some development trends in the area of intelligent medicine and healthcare. We then present new DL-based approaches for solving a set of medical imaging problems, such as segmentation and analysis of glial cells, analysis of the relations between glial cells and brain tumors, segmentation of neuron cells, and new training strategies for deep learning using sparsely annotated medical image data. We develop new deep learning models, based on fully convolutional networks (FCN), recurrent neural networks (RNN), and active learning, to effectively tackle the target medical imaging problems. For example, we combine FCN and RNN for 3D biomedical image segmentation, and we propose a new complete bipartite network model for neuron cell segmentation.

Further, we show that simply applying DL techniques alone is often insufficient for solving medical imaging problems; hence, we construct other new methods to complement and work with DL techniques. For example, we devise a new cell-cutting method based on k-terminal cut in geometric graphs, which complements the voxel-level segmentation of FCN to produce object-level segmentation of 3D glial cells. We also show how to combine a set of FCNs with an approximation algorithm for the maximum k-set cover problem to form a new training strategy that requires significantly less annotation data. A key point is that DL is often used as one main step in our approaches, complemented by other main steps. We also present experimental data and results to illustrate the practical applications of our new DL approaches.
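As background for the maximum k-set cover step mentioned in the abstract: that problem admits a classic greedy algorithm with a (1 − 1/e) approximation guarantee, which fits naturally with selecting a small annotation budget. The sketch below is illustrative only, not the speaker's actual method; the function name `greedy_max_k_cover` and the coverage model (each candidate image "covers" the unlabeled samples it represents) are assumptions made for this example.

```python
def greedy_max_k_cover(sets, k):
    """Greedy (1 - 1/e)-approximation for maximum k-set cover.

    sets: dict mapping a candidate's name to the set of elements it covers
          (hypothetically, an image and the similar unlabeled images it represents)
    k:    number of candidates to select (e.g., an annotation budget)
    """
    covered, chosen = set(), []
    remaining = dict(sets)
    for _ in range(k):
        # Pick the candidate covering the most not-yet-covered elements.
        best = max(remaining, key=lambda n: len(remaining[n] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break  # no candidate adds new coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

In an annotation-selection setting, the k chosen candidates would be sent to experts for labeling, so that a small labeled set still "covers" most of the data's variability.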