Visitors
Professor Dimitrios S. Nikolopoulos
School of Electronics, Electrical Engineering and Computer Science
Queen's University of Belfast, UK
When: Nov 12, 2015, 10:30 AM
Where: E & CS Auditorium, First Floor
What: New Approaches to Energy-Efficient and Resilient HPC
Email: d.nikolopoulos@qub.ac.uk
Homepage: http://www.cs.qub.ac.uk/~D.Nikolopoulos/
ABSTRACT
This talk explores new and unconventional directions towards improving the energy efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators, non-volatile main memory, workload auto-scaling, and structured approximate computing. Our research in these areas has achieved significant gains in energy efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.
Professor Baruch Barry Lieber
Department of Neurosurgery
Stony Brook University
When: Nov. 6, 2015, 10:30 AM
Where: E & CS Auditorium, First Floor
What: Flow Diverters to Cure Cerebral Aneurysms: a Case Study - From Concept to Clinical
Email: Baruch.Lieber@stonybrookmedicine.edu
Homepage: http://neuro.stonybrookmedicine.edu/about/faculty/lieber
ABSTRACT
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery) and navigated through a catheter into the cerebral arteries, where it is delivered into the artery harboring the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices, and prediction of their long-term curative effect, which usually occurs over a period of months, can be significantly helped by computer modeling and simulation of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.
Professor Marek Behr
Chair for Computational Analysis of Technical Systems
RWTH Aachen University
Schinkelstr. 2, 52062 Aachen, Germany
When: July 31, 2015, 10:30 AM
Where: E & CS Auditorium, First Floor
What: Enhanced Surface Definition in Moving-Boundary Flow Simulation
Email: behr@cats.rwth-aachen.de
Homepage: http://www.cats.rwth-aachen.de
ABSTRACT
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass-conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of the Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.
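To make the appeal of NURBS boundaries concrete, here is a minimal, self-contained Python sketch (illustrative only; `bspline_basis` and `nurbs_point` are names chosen here, not NEFEM code): with control weights (1, 1/√2, 1), a quadratic NURBS curve represents a quarter circle exactly, a curved boundary that no polynomial element edge can reproduce without error.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    """Evaluate C(u) = sum_i N_{i,p}(u) w_i P_i / sum_i N_{i,p}(u) w_i."""
    n = len(ctrl)
    N = np.array([bspline_basis(i, p, u, knots) for i in range(n)])
    wN = N * weights
    return (wN @ ctrl) / wN.sum()

# Quarter circle as a quadratic NURBS: exact with weights (1, 1/sqrt(2), 1).
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
knots = [0, 0, 0, 1, 1, 1]
for u in np.linspace(0.0, 1.0, 5, endpoint=False):
    x, y = nurbs_point(u, ctrl, weights, knots, p=2)
    # every sampled point lies exactly on the unit circle
    print(f"u={u:.2f}  ({x:.4f}, {y:.4f})  radius={np.hypot(x, y):.6f}")
```

In NEFEM this exact geometry is used not only for visualization but inside the element integrals along curved boundary edges, which is where the additional solution accuracy comes from.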
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free-surface could already be shown [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either the method of Rothe or the method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of the advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, "Numerical shape optimization as an approach to extrusion die design", Finite Elements in Analysis and Design, 61, 35–43 (2012).
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, "NURBS-Enhanced Finite Element Method (NEFEM)", International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, "On the Usage of NURBS as Interface Representation in Free-Surface Flows", International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).
[4] M. Behr, "Simplex Space-Time Meshes in Finite Element Simulations", International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).
Professor Christos Antonopoulos
When: June 25, 2015, 10:30 AM
Where: E & CS Auditorium, First Floor
What: Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing
Email: cda@inf.uth.gr
Homepage: http://www.inf.uth.gr/~cda
ABSTRACT
A major obstacle on the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing also faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns. Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations or, in a more aggressive setting, even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner; otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning and re-engineering all layers of the system stack, from programming models down to hardware. We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by GR and EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.
Bio: Christos D. Antonopoulos is Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001) and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high-performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers and has been awarded two best-paper awards. He has been actively involved in several research projects in both the EU and the USA.
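One of the simplest forms of approximating computations is loop perforation: skip a fraction of loop iterations and accept a bounded loss in result quality in exchange for proportionally less work. The following Python sketch is illustrative only and is not code from the Centaurus or SCoRPiO projects:

```python
import random

def perforated_mean(data, skip_factor):
    """Approximate the mean by visiting only every skip_factor-th element,
    doing roughly 1/skip_factor of the work (loop perforation)."""
    sample = data[::skip_factor]
    return sum(sample) / len(sample)

random.seed(0)
data = [random.gauss(100.0, 10.0) for _ in range(100_000)]

exact = sum(data) / len(data)
for skip in (1, 4, 16):
    approx = perforated_mean(data, skip)
    rel_err = abs(approx - exact) / abs(exact)
    print(f"skip={skip:2d}: work=1/{skip}, relative error={rel_err:.2e}")
```

For a mean over well-behaved data the error stays small even at aggressive skip factors; for other computations the quality degradation can be severe or unbounded, which is exactly why the talk argues for disciplined, quality-aware control rather than ad hoc approximation.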