University of Granada, Spain
Google Scholar: https://scholar.google.com/citations?user=HULIk-QAAAAJ&hl=en
Title: Federated Learning for Preserving Data Privacy
Federated Learning (FL) is a machine learning setting where multiple entities (clients) collaborate in solving a machine learning problem under the coordination of a central server or service provider. Each client’s raw data is stored locally and is never exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.
This presentation introduces the context of federated learning and why it is needed, analyzes its key elements, and gives attention to software libraries, communication attacks, and current lines of study (Keynote slides can be downloaded at the bottom of the page).
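The aggregation step described above can be sketched with a minimal federated-averaging (FedAvg-style) routine. This is an illustration of the idea that only model updates, never raw data, leave the clients; the plain-list representation of weights and the example figures are assumptions for illustration, not any specific library's API.

```python
# Minimal sketch of federated averaging: the server combines client
# updates, weighting each client by the number of local training
# examples it used. Raw client data never reaches the server.

def federated_average(client_updates):
    """Weighted average of client model updates.

    client_updates: list of (weights, n_examples) pairs, where
    weights is a list of floats of equal length across clients.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    aggregated = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            aggregated[i] += w * (n / total)
    return aggregated

# Three hypothetical clients send updates with their local data counts.
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 100), ([5.0, 6.0], 200)]
print(federated_average(updates))  # → [3.5, 4.5]
```

In practice the "weights" would be full model parameters or gradients, and secure-aggregation or differential-privacy mechanisms may be layered on top, but the server-side arithmetic follows this weighted-average pattern.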
Francisco Herrera received his M.Sc. in Mathematics in 1988 and Ph.D. in Mathematics in 1991, both from the University of Granada, Spain. He is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada and Director of the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI). He is an academician of the Royal Academy of Engineering (Spain). In the seventh edition of Guide2Research's ranking of the top 1000 scientists in the field of computer science and electronics in 2021, he is ranked 19th in the world and number 1 in Spain (with more than 103,000 citations on Google Scholar and an H-index of 155).
He has supervised 51 Ph.D. students and published more than 500 journal papers. He has been named a Highly Cited Researcher (in the fields of Computer Science and Engineering, 2014 to present, Clarivate Analytics). He currently serves as Editor-in-Chief of the international journal "Information Fusion" (Elsevier) and is an editorial board member of a dozen journals.
His current research interests include, among others, computational intelligence, information fusion and decision making, and data science (including data preprocessing, prediction, non-standard classification problems, and big data).
Eindhoven University of Technology, Netherlands
Google Scholar: https://scholar.google.co.uk/citations?user=HhDsD9UAAAAJ&hl=en
Title: Learning how to learn with OpenML
Machine learning aims to perform tasks better based on data and automatically gained experience. Ironically, doing it well often requires a lot of tacit human experience and starting from scratch. What if we could automatically collect experience on how to learn across a wide range of tasks, on a global scale, and spanning many lifetimes? OpenML is an open-source platform for doing exactly this. It allows anyone (and anything) to share machine learning datasets, models, and reproducible experiments. It is integrated into popular machine learning tools to allow easy sharing of models and experiments. It organizes all of this online with rich metadata, and enables anyone to reuse and build on them in novel and unexpected ways. This fosters a budding ecosystem of automated processes that can learn from all shared information on how to build the best machine learning models faster and better over time. We welcome all of you to become a part of it (Keynote slides can be downloaded at the bottom of the page).
Biosketch: Joaquin Vanschoren is an assistant professor at the Eindhoven University of Technology (TU/e). His research focuses on the automation of machine learning (AutoML) and meta-learning. He co-authored and co-edited the book 'Automated Machine Learning: Methods, Systems, Challenges', published over 100 articles on these topics, and received an Amazon Research Award, an Azure Research Award, and the Dutch Data Prize. He founded and leads OpenML.org, an open science platform for machine learning, and is a founding member of the European AI associations ELLIS and CLAIRE. He has been a tutorial speaker at NeurIPS and AAAI and has given more than 20 invited talks. He is datasets and benchmarks chair at NeurIPS 2021 and co-organized the AutoML and Meta-Learning workshop series at NeurIPS and ICML from 2013 to 2021.
University of Illinois at Urbana-Champaign, USA
Google Scholar: https://scholar.google.com/citations?user=A_A_LrsAAAAJ&hl=en
Title: Design of Reconfigurable Computing Systems for Accelerating Smart IoT Applications
Many new IoT (Internet of Things) applications are driven by the fast creation, adaptation, and enhancement of various types of Deep Neural Networks (DNNs). DNNs are computation intensive. Without efficient hardware implementations of DNNs, these promising IoT applications will not be practically realizable. In this talk, we will analyze several challenges facing the AI and IoT community for mapping DNNs to hardware accelerators. In particular, we will evaluate the role of FPGAs in accelerating DNNs for both cloud and edge computing. We will present a series of effective design techniques for implementing DNNs on FPGAs with high performance, energy efficiency, and adaptability. These include automated DNN/FPGA co-design, smart reuse of configurable DNN IPs, smart pipeline scheduling, Winograd techniques, and DNN quantization. The design flows developed based on the proposed techniques, such as DNNBuilder, have been adopted by industry (e.g., IBM and Xilinx). The new DNN models produced, such as SkyNet, have won championships in the competitive DAC System Design Contest for low-power object detection (Keynote slides can be downloaded at the bottom of the page).
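One of the general techniques listed above, DNN quantization, can be sketched as a symmetric linear mapping of float weights onto a signed 8-bit integer grid. The max-abs scale choice and the 8-bit width here are illustrative assumptions, not the specific scheme used in DNNBuilder or SkyNet.

```python
# Hedged sketch of symmetric uniform quantization to int8: float
# weights are mapped to integer codes plus a single scale factor,
# which is what lets an FPGA accelerator compute in cheap integer
# arithmetic instead of floating point.

def quantize_int8(weights):
    """Map float weights to int8 codes plus a scale for dequantization."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0            # step size of the int8 grid
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from integer codes."""
    return [c * scale for c in codes]

w = [0.5, -1.27, 0.0, 1.27]
codes, scale = quantize_int8(w)
print(codes)   # → [50, -127, 0, 127]
```

Each dequantized weight differs from the original by at most half a grid step, which is the accuracy/efficiency trade-off that hardware-aware quantization schemes tune.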
Biosketch: Dr. Deming Chen obtained his B.S. in computer science from the University of Pittsburgh, Pennsylvania, in 1995, and his M.S. and Ph.D. in computer science from the University of California, Los Angeles, in 2001 and 2005, respectively. He joined the ECE department of the University of Illinois at Urbana-Champaign in 2005. His current research interests include reconfigurable computing, machine learning and cognitive computing, hybrid cloud, system-level and high-level synthesis, and hardware security. He has given more than 120 invited talks sharing these research results worldwide. He has received 9 Best Paper Awards, a few Best Poster Awards, and numerous other research and service related awards. He is the Donald Willett Faculty Scholar and the Abel Bliss Professor of the Grainger College of Engineering, an IEEE Fellow, an ACM Distinguished Speaker, and the Editor-in-Chief of ACM Transactions on Reconfigurable Technology and Systems (TRETS).