Abstracts
of Keynote Speakers

Ewa Deelman
USC Information Sciences Institute, USA

Science Impact of Sustained Cyberinfrastructure: The Pegasus Example

Abstract:

The recent multi-messenger observation triggered by LIGO and VIRGO's first detection of gravitational waves produced by colliding neutron stars is a clear display of the increasing impact of dependable research cyberinfrastructure (CI) on scientific discovery. Today's cyberinfrastructure (hardware, software, and workforce) underpins the entire scientific workflow, from data collection at instruments, through complex analysis, to simulation, visualization, and analytics. Besides supporting large facilities such as LIGO, it also provides the educational and research platform for numerous projects, individual researchers, and students.
The effort to develop, sustain, and maintain dependable CI is thus a key element in the modern scientific ecosystem. However, CI is not developed in a vacuum; rather, it benefits from advances in Computer Science and provides a unique laboratory for Computer Science research. Grounded in the challenging and ever-increasing needs of a multitude of scientific applications, it is continuously enhanced and driven to innovate.
The Pegasus project is an example of a cyberinfrastructure effort that enables LIGO and other communities to accomplish their scientific goals. It delivers robust automation capabilities to researchers at the Southern California Earthquake Center (SCEC) studying seismic phenomena, to astronomers seeking to understand the structure of the universe, to materials scientists developing new drug delivery methods, and to students seeking to understand human population migration. An example of societal impact is SCEC's use of Pegasus to generate the world's first physics-based probabilistic seismic hazard map, which provides insight into why earthquakes in the Los Angeles basin can be so destructive. This information can inform civil engineering practices in the area.
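To make the idea of workflow automation concrete, here is a deliberately generic Python sketch (hypothetical job names; this is not Pegasus code) of the core planning problem every workflow system solves: given jobs and the data dependencies between them, compute an execution order in which each job runs only after all of its inputs exist.

from collections import deque

# A toy "diamond" workflow: job name -> jobs it depends on.
workflow = {
    "preprocess": [],
    "analyze_left": ["preprocess"],
    "analyze_right": ["preprocess"],
    "combine": ["analyze_left", "analyze_right"],
}

def plan(workflow):
    """Return an execution order respecting all dependencies
    (Kahn's topological sort)."""
    indegree = {job: len(deps) for job, deps in workflow.items()}
    children = {job: [] for job in workflow}
    for job, deps in workflow.items():
        for dep in deps:
            children[dep].append(job)
    ready = deque(job for job, d in indegree.items() if d == 0)
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for child in children[job]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(workflow):
        raise ValueError("cycle detected: not a valid workflow")
    return order

print(plan(workflow))  # ['preprocess', 'analyze_left', 'analyze_right', 'combine']

A production system such as Pegasus layers resource provisioning, data staging, and failure recovery on top of this core ordering problem.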
This talk describes the challenges of developing and sustaining cyberinfrastructure capabilities that have an impact on scientific discovery and that innovate in the changing cyberinfrastructure landscape.

Ewa Deelman received her PhD in Computer Science from the Rensselaer Polytechnic Institute in 1998. Following a postdoc at the UCLA Computer Science Department, she joined the University of Southern California's Information Sciences Institute (ISI) in 2000, where she serves as a Research Director and leads the Science Automation Technologies group. She is also a Research Professor at the USC Computer Science Department and an IEEE Fellow. The USC/ISI Science Automation Technologies group explores the interplay between automation and the management of scientific workflows, including resource provisioning and data management. She pioneered workflow planning for computations executing in distributed environments. Her group has led the design and development of the Pegasus Workflow Management software and conducts research in job scheduling and resource provisioning in distributed systems, workflow performance modeling, provenance capture, and the use of cloud platforms for science. She is the founder of the annual Workshop on Workflows in Support of Large-Scale Science (WORKS), which is held in conjunction with the SC conference. In 2015, Dr. Deelman received the HPDC Achievement Award for her contributions to the area of scientific workflows.


Piotr Gawron
Institute of Theoretical and Applied Informatics, PAN, Poland

A Landscape of Quantum Machine Learning

Abstract:

Quantum machine learning is a new, actively growing field of science. In the past two years we have observed more and more implementations of quantum computers being deployed. These quantum computers have major limitations: they are small in scale, very noisy, and controllable only to a limited extent. Researchers are therefore looking for useful applications of these imperfect quantum computers. One such application might be employing quantum computing to perform machine learning tasks. The talk will present a landscape of recent ideas in quantum machine learning.
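As a minimal illustration of one such idea (a toy example invented for this text, not taken from the talk), the following Python code simulates a one-qubit variational classifier with plain NumPy: a feature is encoded as a rotation angle, a trainable rotation follows, and the predicted class is read off the Z-expectation value.

import numpy as np

def ry(theta):
    """Single-qubit Pauli-Y rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Encode feature x as RY(x), apply trainable RY(theta),
    and return the Z-expectation of the resulting state."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # start in |0>
    return state[0] ** 2 - state[1] ** 2              # <Z> = P(0) - P(1)

# Toy data: angles near 0 belong to class +1, angles near pi to class -1.
xs = np.array([0.1, 0.2, 3.0, 2.9])
ys = np.array([1.0, 1.0, -1.0, -1.0])

theta, lr, eps = 0.5, 0.1, 1e-5
for _ in range(200):                      # gradient descent on squared error
    grad = 0.0
    for x, y in zip(xs, ys):
        lp = (predict(x, theta + eps) - y) ** 2
        lm = (predict(x, theta - eps) - y) ** 2
        grad += (lp - lm) / (2 * eps)     # numerical per-sample gradient
    theta -= lr * grad / len(xs)

print("trained theta:", round(theta, 3))
print("predictions:", [round(predict(x, theta), 2) for x in xs])

On real hardware the expectation value would be estimated from repeated noisy measurements, which is exactly where the limitations mentioned above come into play.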


Ladislav Hluchy
Slovak Academy of Sciences, Slovakia

From Scalable, Semantically-based Distributed Computing to Exascale Computing

Abstract:

Complex applications, or rather complex use cases, have been a great motivation for developing new distributed systems. So were the demands of developing a decision support system for flood prevention and protection, integrating the most advanced techniques in data processing and management (EU FP5 IST RTD project: datA fusioN for Flood Analysis and decision Support, IST-1999-11676, 2000-2003). At that time Remote Procedure Call technology, the predecessor of grid technology, was used. II SAS successfully participated in this project, and it has to be said that this was a historic moment for our motivation towards grid technology activities. The first such grid project was the EU FP5 IST RTD project Development of Grid Environment for Interactive Applications (CrossGrid, IST-2001-32243, 2002-2005), whose aim was to develop, implement, and exploit new Grid components for interactive computing and data-intensive applications such as simulation and visualization for surgical procedures, flooding crises, team decision support systems, distributed data analysis in high-energy physics, and air pollution combined with weather forecasting. In the EU FP6 IST RTD project Knowledge-based Workflow System for Grid Applications (FP6-511385 [call IST-2002-2.3.2.8, STREP], 2004-2007), we first proposed to use the term semantic grid, with a focus on selected complex problems. From scalable, semantically-based computing we have gradually moved to exascale computing within the FP7 and Horizon 2020 programmes. In my presentation I will touch on current research activities in exascale computing and the processing of big data (extreme data sets), mainly in the following projects:
- Integrating and managing services for the European Open Science Cloud (H2020-777536 [call H2020-EINFRA-2016-2017], 1.1.2018-31.12.2020);
- Designing and Enabling E-infrastructures for intensive Processing in a Hybrid DataCloud (H2020-777435 [call H2020-EINFRA-2016-2017], 1.11.2017-30.4.2020);
- PROviding Computing solutions for ExaScale ChallengeS (H2020-777533 [call H2020-EINFRA-2016-2017], 1.11.2017-31.10.2020);
- the EU FP7 project Advanced Data Mining and Integration Research for Europe (FP7-215024 [call FP7-ICT-2007-1], 2008-2011);
- the EDA project European Urban Simulation for Asymmetric Scenarios (A-0938-RT-GC, 2010-2012), which required HPC.


Piotr Homola
Institute of Nuclear Physics PAN, Poland

Cosmic-Ray Extremely Distributed Observatory: novel astrophysical potential and beyond

Abstract:

The Cosmic-Ray Extremely Distributed Observatory (CREDO) project uses the hunt for particle cascades from deep space as a vehicle for a unique ‘bottom-up’ approach to scientific research. By engaging the non-specialist public of all ages as ‘citizen scientists’, we create an opportunity for lifelong learning for individuals as well as for cooperation and the sharing of common educational tools amongst institutions. The discoveries of these citizen scientists will feed directly into a pioneering new area of scientific research oriented towards Cosmic Ray Ensembles (CRE). The detection (or non-detection) of such particle groups promises to open up a new method for exploring our universe. The opportunities this would create for cross-disciplinary research are significant and beneficial for individuals, networks of institutions, and the global communities of both professional scientists and science enthusiasts.
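To give a flavour of what detecting a Cosmic Ray Ensemble could involve, here is a deliberately simplified Python sketch (synthetic data and invented thresholds; not CREDO's actual pipeline) that scans time-ordered detections from many stations for groups arriving within a short coincidence window.

from collections import namedtuple

Hit = namedtuple("Hit", "time station")

# Synthetic detections: three stations fire almost simultaneously at t = 1 s.
hits = sorted([
    Hit(0.0000, "A"), Hit(0.4000, "B"),
    Hit(1.0000, "C"), Hit(1.0002, "D"), Hit(1.0004, "E"),
], key=lambda h: h.time)

WINDOW = 0.001     # coincidence window in seconds (illustrative)
MIN_STATIONS = 3   # minimum distinct stations for a candidate

i = 0
for j in range(len(hits)):
    while hits[j].time - hits[i].time > WINDOW:
        i += 1                     # slide the window's left edge forward
    stations = {h.station for h in hits[i:j + 1]}
    if len(stations) >= MIN_STATIONS:
        print("candidate ensemble:", sorted(stations), "around t =", hits[j].time)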


Zbigniew Nawrat
Foundation of Cardiac Surgery Development, Zabrze, Poland and Medical University of Silesia, Katowice, Poland

AI & Robots - Why Robin Heart is a Telemanipulator

Abstract:

A robot consists of mechanical units, actuators, sensors, effectors, and a control unit. The control system allows the robot's effector to perform specific tasks. The most challenging tasks are medical ones, especially surgery.
The Robin Heart surgical robot project, conducted by the Foundation of Cardiac Surgery Development, has entered, after a series of animal and teleoperation experiments, the phase of preparing the first clinical robot model. The Robin Heart system includes a planning system, a training system, an experts' program, as well as telemanipulators and automatic surgical tools. The Robin Heart robot family, created since 2000, consists of more than 10 models and prototypes. The biggest robot, Robin Heart mc2, replaces three people at the operating table: the first and second surgeons, and the assistant holding the visual tracking system.
In recent years, Tele Robin Heart, with a new modular multi-tool platform, and a modern console with an excellent haptic system, named Stiff Flop Robin Console, have been created and tested. Robin Heart PortVisionAble, a lightweight, mobile robot dedicated to controlling the endoscope during minimally invasive surgery and offered with a telemedical system, is being prepared for first clinical use.
Medical robots improve quality and create an opportunity to introduce new standards, because between diagnosis and decision, and between decision and action, there is a computer control system. The role of today's robots is not to replace the surgeon, but to expand the surgeon's capabilities in a controlled way and to improve ergonomics.
In parallel with robotics, artificial intelligence is developing very dynamically. Their combination and synergy will result in autonomous, collaborating robots. In the lecture I will try to explain why Robin Heart is (today) a telemanipulator. Will it become an independent robot? When? Why?
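As a purely illustrative sketch of what 'telemanipulator' means in control terms (all constants invented; this is not Robin Heart's control code), the following Python loop maps the surgeon's hand motion onto tool motion with motion scaling and tremor filtering; the surgeon, not an algorithm, decides every move.

import math

SCALE = 0.2   # 5:1 motion scaling for precision (illustrative)
ALPHA = 0.3   # low-pass filter coefficient for tremor reduction

def control_step(hand_delta, prev_filtered):
    """One master-slave cycle: filter, then scale, the surgeon's motion."""
    filtered = ALPHA * hand_delta + (1 - ALPHA) * prev_filtered
    return SCALE * filtered, filtered

prev = 0.0
for t in range(5):
    hand = 1.0 + 0.3 * math.sin(10 * t)   # intended motion plus tremor (mm)
    tool, prev = control_step(hand, prev)
    print(f"step {t}: hand {hand:+.2f} mm -> tool {tool:+.3f} mm")

An autonomous robot would replace the 'hand' input with decisions computed from sensor data, which is precisely the step the lecture asks about.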

Zbigniew Nawrat is a theoretical physicist by profession who has devoted his whole professional activity to medicine (doctorate on the artificial heart, habilitation on medical robotics). He is an author of pioneering research in the artificial organ domain (the Polish ventricular assist device POLVAD was first clinically applied in 1993) and in medical robotics (Robin Heart). He is also a pioneer of the application of computer simulation and virtual space technology to the surgical planning process. Zbigniew Nawrat works at the Medical University of Silesia, where he teaches his own lectures, unique in Poland: "Artificial Organs" and "Robotics in Healthcare", as well as at the Foundation of Cardiac Surgery Development as the director of the Institute of Heart Prostheses. He founded the International Society for Medical Robotics in 2010 and has served as its president ever since. He created and edits the journal Medical Robotics Reports. Hobby: poet, composer, musician (ZAWRATU).


Irena Roterman
Jagiellonian University - Medical College, Poland

Computer Analysis of the Protein Misfolding Process

Abstract:

The process of protein folding remains unknown despite the long history of research oriented towards predicting 3D structure on the basis of the amino acid sequence. This process occurs permanently in living organisms, delivering the correct structures that ensure the biological activity of the final products, which are correctly folded proteins. However, misfolded proteins identified in human organisms have been recognized as the source of pathological phenomena of a neurodegenerative character. These misfolded proteins turn out to help the analysis of correctly folded proteins, delivering criteria that distinguish the misfolding from the folding process.
The misfolded protein causing Parkinson's disease, amyloid tau, is the object of this analysis; it appears to satisfy all the criteria of the typical, standard amyloid form. The criteria have been defined using the fuzzy oil drop model to characterize correctly folded proteins.
The 3D Gauss function, introduced to describe the structure of the centric hydrophobic core in globular proteins, allows the identification of local deformations (which appear to be related to the biological function of the particular protein) and global deformations (identified in misfolded proteins, especially in amyloids). The Kullback-Leibler divergence entropy (a comparison of the expected versus the observed hydrophobicity distribution in a protein) is able to express quantitatively the degree and location of these deformations. This analysis also allows the identification of a globally disordered hydrophobicity distribution, which has been identified in amyloids.
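To make these quantities concrete, here is a minimal Python sketch (synthetic coordinates and hydrophobicity values; a toy illustration, not the authors' implementation) that evaluates a 3D Gauss 'expected' hydrophobicity profile at residue positions, normalizes both profiles, and computes the Kullback-Leibler divergence between the observed and expected distributions.

import numpy as np

def expected_profile(coords, sigma):
    """Theoretical hydrophobicity: a 3D Gauss function centred on the
    molecule, evaluated at each residue position and normalized."""
    center = coords.mean(axis=0)
    d2 = ((coords - center) ** 2).sum(axis=1)
    t = np.exp(-d2 / (2.0 * sigma ** 2))
    return t / t.sum()

def kl_divergence(observed, theoretical):
    """D_KL(O || T) for two normalized hydrophobicity profiles (bits)."""
    o = observed / observed.sum()
    return float(np.sum(o * np.log2(o / theoretical)))

# Synthetic example: five residues, the hydrophobic ones near the centre.
rng = np.random.default_rng(0)
coords = rng.normal(size=(5, 3)) * 5.0          # toy residue coordinates
observed = np.array([0.9, 0.7, 0.3, 0.2, 0.1])  # toy hydrophobicity values

t = expected_profile(coords, sigma=5.0)
print("D_KL(O||T) =", round(kl_divergence(observed, t), 3))

A large divergence signals that the observed hydrophobicity deviates strongly from the centric-core ideal, which is the kind of global deformation the abstract associates with amyloids.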


Lubos Kolar
Hewlett Packard Enterprise

News and Trends in HPC & AI Platforms

Abstract:

This presentation will cover the evolution of technologies and the positioning of HPE in the HPC market. We will talk about the latest technologies and platform architectures used in HPC and AI workflows, as well as several very promising technologies that will become available in the near future, opening up the vision of the path to exascale computing.


Georgios Kardaras
Intel Corporation

From Edge to Cloud, How Intel Enables Workload Optimization

Abstract:

To address the compute- and labor-intensive challenges of the innovation cycle, Intel accelerators can be chosen to match the target workload. Intel FPGAs, in particular, can be dynamically tailored to a specific workload, ensuring optimized performance for the task at hand. This results in improved performance per watt, low latency, and predictable performance. The Acceleration Stack for Intel Xeon CPU with FPGAs is a robust collection of software, firmware, and tools, designed and distributed by Intel to make it easier to develop and deploy Intel FPGAs for workload optimization in the data center. The Acceleration Stack provides optimized and simplified hardware interfaces and software APIs, saving developers time so they can focus on the unique value-add of their solutions.


Organizers:
AGH University of Science and Technology
ACC Cyfronet AGH
Department of Computer Science AGH