Invited Keynotes

Marco Aldinucci

Full Professor at the Computer Science Department, University of Torino, Italy
Director of the HPC Key Technologies and Tools national laboratory of the National Interuniversity Consortium for Informatics (CINI)

Tools for the HPC-AI convergence: the StreamFlow workflow system and its applications for COVID-19

Abstract: Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few make it possible to execute an entire workflow across different environments, e.g. Kubernetes and HPC batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with a declarative description of potentially complex execution environments and makes it possible to execute workflows on multiple sites that do not share a common data space. StreamFlow supports both task and data parallelism and enables the reproducible and scalable execution of workflows, such as AI pipelines, in hybrid cloud-HPC environments. As a running example, we use the novel “universal COVID-19 pipeline”, which explores the whole optimisation space of training different DNNs to classify COVID-19 lung lesions.
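
To make the idea of pairing a workflow graph with a declarative environment description more concrete, here is a minimal illustrative sketch in Python. The step names, deployment names, and dictionary keys are invented for illustration and do not reflect the actual StreamFlow configuration schema; they only convey the notion of binding each workflow step to a separately described execution site.

```python
# Illustrative sketch only: a hypothetical pairing of workflow steps with
# declaratively described execution environments, in the spirit of the
# approach described in the abstract. Names and keys are invented and do
# not reflect the actual StreamFlow configuration schema.

hybrid_workflow = {
    "workflow": {
        "steps": ["preprocess", "train_dnn", "evaluate"],
    },
    "deployments": {
        # A Kubernetes-based cloud environment for data preparation.
        "cloud_k8s": {"type": "kubernetes", "config": {"namespace": "covid-pipeline"}},
        # An HPC batch cluster (e.g. Slurm) for DNN training.
        "hpc_slurm": {"type": "slurm", "config": {"partition": "gpu", "nodes": 4}},
    },
    "bindings": [
        # Each step is bound to the environment best suited to run it; the
        # runtime is then responsible for staging data between sites that
        # do not share a common data space.
        {"step": "preprocess", "target": "cloud_k8s"},
        {"step": "train_dnn", "target": "hpc_slurm"},
        {"step": "evaluate", "target": "cloud_k8s"},
    ],
}

if __name__ == "__main__":
    for binding in hybrid_workflow["bindings"]:
        print(f"{binding['step']} -> {binding['target']}")
```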

Bill McColl

Director of the Future Computing Systems Lab, Huawei Zurich Research Center

Heterogeneous Hyperscale Computing

Abstract: New heterogeneous hyperscale clusters will provide the compute power required to drive future innovation in HPC, AI, Cloud and Big Data, through massive, flexible and independently scalable pools of resources (CPUs, accelerators, storage). In this talk I will outline some of the research challenges in developing a theoretical foundation to guide the design, analysis and development of new architectures for heterogeneous hyperscale computing. This foundation needs to encompass not only the hardware of such architectures, but also their software, algorithms and applications. A central challenge is to develop a new bridging model for heterogeneous parallel architectures that can play a unifying role analogous to the one the von Neumann model has played as a model and framework for sequential computing. I will conclude by outlining such a model.
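
For readers unfamiliar with the term, a bridging model pairs an abstract machine with a simple cost function that both hardware designers and algorithm designers can target. As background only (this is the classic BSP model for homogeneous parallel machines, not the heterogeneous model to be outlined in the talk), a bridging model's cost function typically looks like the following; the symbols follow the standard BSP convention and are not taken from the abstract.

```latex
% Standard BSP cost function, shown as a background example of what a
% bridging model provides; not the new heterogeneous model of the talk.
% w_s : maximum local computation by any processor in superstep s
% h_s : maximum number of words sent or received by any processor in superstep s
% g   : per-word communication cost (inverse bandwidth)
% l   : barrier synchronisation cost
T_{\text{total}} = \sum_{s=1}^{S} \left( w_s + g \cdot h_s + l \right)
```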