<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://crtc.cs.odu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jbest</id>
		<title>crtc.cs.odu.edu - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://crtc.cs.odu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jbest"/>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/Special:Contributions/Jbest"/>
		<updated>2026-05-05T01:20:30Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.29.1</generator>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5252</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5252"/>
				<updated>2020-03-10T19:03:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* HPC Wahab Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners such as Harvard Medical School and NASA Langley Research Center in the US, the Center for Computational Engineering Science (CCES) at RWTH Aachen University in Germany, and the Neurosurgical Department of Huashan Hospital, Shanghai Medical College, Fudan University in China. This research is funded mainly by government agencies such as the National Science Foundation, the National Institutes of Health, and NASA, and by philanthropic organizations such as the John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image-guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world, with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC uses this lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation featuring dual six-core Intel Xeon X5690 processors (12 cores in total). Each processor runs at 3.46 GHz with a 12 MB cache and a 6.4 GT/s QPI speed, and the system supports up to 96 GB of DDR3 ECC SDRAM (6X8GB) at 1333 MHz. The system is augmented by an NVIDIA Quadro 6000; with 6 GB of memory, this card provides high-end graphics capability. The PIs also have command of an IBM server funded by an NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburgh Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from local high schools have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access to its equipment, the CRTC gives interested high school students an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach and education programs. This high-quality, interactive system is especially engaging for high school students drawn to multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility uses 3,200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration, on raised flooring with minimized obstruction to facilitate optimized airflow. Monitoring software used in the operations center includes the SolarWinds Orion network performance monitor and the Nagios infrastructure monitoring application. The IT Operations Center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is composed of a meshed 10 Gigabit Ethernet backbone supporting voice, data, and video, with switched 10 Gbps connections to the servers and 1 Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with 10 Gigabit connectivity for key buildings on campus; ongoing upgrades will bring 10 Gbps data speeds to the entire campus. ITS currently provides a variety of Internet services, including a 1 Gbps connection to Cox Communications and a 2 Gbps connection to Cogent. Connections to Internet2 and Cogent run over a private DWDM regional optical network infrastructure, with redundant 10 Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named E-LITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture, built to support several types of computational research workloads. The cluster consists of 158 compute nodes with 6,320 computational cores in total, using Intel “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node); each compute node has 384 GB of RAM. The cluster also includes 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100 Gbps EDR InfiniBand interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab has dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide networked filesystem for home directories and long-term research data. The cluster also contains 45 TB of storage blocks that can be provisioned for user data in the virtual environment; the relative proportion of these resources can be adjusted to the needs of the research community. A brief usage sketch follows the specification table below.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Wahab cluster as of March 2020:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Node Type&lt;br /&gt;
! Total Available Nodes&lt;br /&gt;
! Maximum Slots (Cores) per Node&lt;br /&gt;
! Additional Resources&lt;br /&gt;
! Memory (RAM) per Node&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 158&lt;br /&gt;
| 40&lt;br /&gt;
| none&lt;br /&gt;
| 384 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 18&lt;br /&gt;
| 28–32&lt;br /&gt;
| 4 × NVIDIA V100 GPUs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://docs.hpc.odu.edu/#wahab-hpc-cluster here].&lt;br /&gt;
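&lt;br /&gt;
To make the parallel layout above concrete, here is a minimal mpi4py sketch that reports how the ranks of an MPI job are placed across compute nodes. It assumes a Python environment with mpi4py is available on the cluster and that the script is launched under an MPI launcher such as mpirun; the file name is a hypothetical placeholder.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# placement_check.py -- report how MPI ranks are placed on compute nodes.&lt;br /&gt;
# Assumes mpi4py is installed and the script is launched with an MPI launcher.&lt;br /&gt;
from mpi4py import MPI&lt;br /&gt;
&lt;br /&gt;
comm = MPI.COMM_WORLD&lt;br /&gt;
rank = comm.Get_rank()           # rank id of this process&lt;br /&gt;
size = comm.Get_size()           # total number of ranks in the job&lt;br /&gt;
node = MPI.Get_processor_name()  # hostname of the node running this rank&lt;br /&gt;
&lt;br /&gt;
# Gather (rank, node) pairs on rank 0 and summarize the ranks per node.&lt;br /&gt;
pairs = comm.gather((rank, node), root=0)&lt;br /&gt;
if rank == 0:&lt;br /&gt;
    per_node = {}&lt;br /&gt;
    for r, n in pairs:&lt;br /&gt;
        per_node.setdefault(n, []).append(r)&lt;br /&gt;
    print(size, &amp;quot;ranks across&amp;quot;, len(per_node), &amp;quot;node(s)&amp;quot;)&lt;br /&gt;
    for n in sorted(per_node):&lt;br /&gt;
        print(&amp;quot; &amp;quot;, n, &amp;quot;ranks:&amp;quot;, sorted(per_node[n]))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Launched with 80 ranks across two 40-core nodes, for example, the summary should show 40 ranks on each node.&lt;br /&gt;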
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
The Turing cluster has been the primary shared high-performance computing (HPC) cluster on campus since 2013. Turing is based on 64-bit Intel Xeon microprocessor architectures; each node has up to 32 cores and at least 128 GB of memory. As of May 2017, the Turing cluster has 6,300 cores available to researchers for computational needs. Researchers have access to several high-memory nodes (512–768 GB), nodes with Intel Xeon Phi co-processors, and nodes with NVIDIA graphics processing units (GPUs) of varying generations: K40, K80, P100, and the state-of-the-art V100 (Volta), for a total of 33 GPUs; a number of nodes with NVIDIA Tesla M2090 GPUs are also integrated into the cluster to support computation that requires graphics processors. An FDR-based (56 Gbps) InfiniBand fabric provides the high-speed network for inter-node communication; mass storage is integrated into the cluster at 20 Gbps, and scratch space is likewise accessible over the FDR InfiniBand fabric. The cluster has redundant head nodes and dedicated login nodes for increased reliability. EMC’s Isilon storage (1.3 PB total capacity) serves as the home and long-term mass research data storage, and a 180 TB Lustre high-speed parallel filesystem is provided for scratch space. The University supports parallel computing using MPI and OpenMP on compute nodes with shared memory and symmetric multiprocessing (a shared-memory usage sketch follows the specification table below). The Turing cluster is primarily used by faculty members conducting research with software such as ANSYS, COMSOL, R, Mathematica, and MATLAB, among others.&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2020:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Node Type&lt;br /&gt;
! Total Available Nodes&lt;br /&gt;
! Maximum Slots (Cores) per Node&lt;br /&gt;
! Additional Resources&lt;br /&gt;
! Memory (RAM) per Node&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16–32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28–32&lt;br /&gt;
| NVIDIA K40, K80, P100, or V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512–768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://docs.hpc.odu.edu/#hardware-resources here].&lt;br /&gt;
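&lt;br /&gt;
The MPI and OpenMP support mentioned above targets compiled C, C++, and Fortran codes; as a Python-level stand-in for the same shared-memory idea on a single SMP node, the sketch below splits a computation across one node’s cores with the standard multiprocessing module. The script and its numbers are illustrative only.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# smp_sketch.py -- shared-memory parallelism on one SMP node.&lt;br /&gt;
# OpenMP is a C/C++/Fortran API; this Python stand-in uses the standard&lt;br /&gt;
# multiprocessing module to spread a computation across the node cores.&lt;br /&gt;
import multiprocessing as mp&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
def partial_sum(bounds):&lt;br /&gt;
    lo, hi = bounds&lt;br /&gt;
    return sum(i * i for i in range(lo, hi))&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    n = 10_000_000&lt;br /&gt;
    workers = os.cpu_count() or 1  # one worker per core on the node&lt;br /&gt;
    step = n // workers&lt;br /&gt;
    bounds = [(i * step, (i + 1) * step) for i in range(workers)]&lt;br /&gt;
    bounds[-1] = (bounds[-1][0], n)  # last chunk absorbs the remainder&lt;br /&gt;
    with mp.Pool(workers) as pool:&lt;br /&gt;
        total = sum(pool.map(partial_sum, bounds))&lt;br /&gt;
    print(total)  # sum of squares below n, computed in parallel&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
On a 32-core Turing node this would launch 32 workers; the same decomposition carries over to OpenMP threads in compiled codes.&lt;br /&gt;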
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment, providing home and mass storage for HPC with a total capacity of over 1 PB. The platform provides scale-out NAS storage that delivers increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three 30-ton HVAC units deployed in an N+1 redundant configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units operate with a raised-floor arrangement with perforated tiles in the cold aisles, which allows for superior environmental control and keeps the data center at the desired temperature. Because optimized chiller performance is critical for environmental control, the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, fourteen above-rack cooling units complement the main HVAC units; these units take no additional rack space, drawing hot air from the equipment racks and hot aisles and dissipating conditioned cold air down the cold aisles. This provides energy-efficient cooling with zero floor-space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state disk (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster, as in the sketch below.&lt;br /&gt;
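&lt;br /&gt;
As an illustration of the kind of analytics job the cluster supports, here is a minimal PySpark word-count sketch. It assumes PySpark is available in the user’s environment; the application name and HDFS input path are hypothetical placeholders.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# spark_wordcount.py -- minimal PySpark word-count sketch.&lt;br /&gt;
# Assumes PySpark is available; the input path is a placeholder.&lt;br /&gt;
from pyspark.sql import SparkSession&lt;br /&gt;
&lt;br /&gt;
spark = SparkSession.builder.appName(&amp;quot;wordcount-sketch&amp;quot;).getOrCreate()&lt;br /&gt;
# Read the text file into an RDD of plain strings, one per line.&lt;br /&gt;
lines = spark.read.text(&amp;quot;hdfs:///user/example/input.txt&amp;quot;).rdd.map(lambda row: row[0])&lt;br /&gt;
# Classic word count: split into words, pair each with 1, sum per word.&lt;br /&gt;
counts = (lines.flatMap(lambda line: line.split())&lt;br /&gt;
               .map(lambda word: (word, 1))&lt;br /&gt;
               .reduceByKey(lambda a, b: a + b))&lt;br /&gt;
for word, n in counts.take(10):  # print a small sample of the results&lt;br /&gt;
    print(word, n)&lt;br /&gt;
spark.stop()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The same session object can also run SQL queries or DataFrame pipelines against larger datasets on the cluster.&lt;br /&gt;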
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University’s network communication infrastructure is designed using state-of-the-art networking and switching hardware platforms. The campus infrastructure backbone is fully redundant and capable of 10 Gbps data rates between all distribution modules, and the data center infrastructure is designed to operate at 40 Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies. Our 100% wireless coverage allows users to take advantage of secure 802.11a/b/g/n connections from either of our buildings. VPN access is available for remote users to access services on our network. All departmental telephone communication is provided via VoIP Avaya phone systems.&lt;br /&gt;
&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers. On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) that gives users remote access to our software, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research and faculty projects. For Unix and Linux users we support Solaris, Ubuntu, and Red Hat Enterprise Linux (RHEL) distributions. Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux and Unix based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed QDR InfiniBand interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512 GB of memory.&lt;br /&gt;
Storage for a majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each data center. This design allows for replication of storage across the network to ensure high availability. These systems provide a combined total of 100 TB of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both Windows and *nix environments. We also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite for up to two years on tape.&lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure. Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10 Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10 Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to 100 Gbps capability and to establish a 100 Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. Nexus 7000 platforms are Cisco Systems’ next-generation switching platforms designed for the data center, providing virtualized hardware, in-service upgrades, higher 10 Gbps and 40 Gbps density, and higher performance and reliability; they can also integrate 100 Gbps interfaces in the data center infrastructure as needed. The deployed Cisco Nexus platforms include the 7000 and 5000 series, which provide a higher-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
Data Center UPS. The HPC and network infrastructure are backed by an uninterruptible power supply (UPS) system rated at 375 kW. This unit provides the considerable capacity needed while switching between commercial electrical power and the dedicated building power generator. The current UPS system uses high-performance insulated-gate bipolar transistors to provide larger power capability, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery; course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to ensure the network infrastructure provides the following features: (i) Communities of interest (virtual networks). This allows us to create network-based user communities that share the same functions and communication/application needs, accomplished using MPLS technology. (ii) High-performance and redundant security infrastructure. Security is an essential part of any network infrastructure; users must be able to perform all their needed tasks on the network while the best possible security protection is in place. (iii) Flexibility to provision independent network infrastructures. This allows us to create smaller, independent logical networks on the existing physical infrastructure, a great benefit for a research institution of ODU’s stature that lets us work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5251</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5251"/>
				<updated>2020-03-10T19:02:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* HPC Turing Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners like Harvard Medical School and NASA Langley Research Center in US and Center for Computational Engineering Science (CCES) RWTH Aachen University in Germany and Neurosurgical Department of Huashan Hospital Shanghai Medical College, Fudan University in China. This research is mainly funded from government agencies like ational Science Foundation, National Institute of Health and NASA and philanthropic organizations like John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation, featuring a Dual Six Core Intel Xeon Processor X5690 (total of 12 cores). The processor has a clock speed of 3.46GHz, a cache of 12MB, and QPI speed of 6.4GT/s. The processor also supports up to 96GB of DDR3 ECC SDRAM (6X8GB) at 1333MHz. The system is augmented by the nVIDIA Quadro 6000. With 6 GB of memory, this device provides stunning graphic capabilities. The PIs also have command of an IBM server funded from a NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburg Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access of its equipment to interested high school students, the CRTC provides them with an exceptional introduction to computer science work and research, without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multi-media.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in alternating hot and cold aisle configuration. The data center facility is on raised flooring with minimized obstruction to help facilitate optimized air flow.  Some of the monitoring software’s being utilized in the operations center are Solarwinds ORION network performance software and Nagios Infrastructure monitoring application. The IT Operations center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network infrastructure switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is currently comprised of a meshed Ten Gigabit Ethernet backbone supporting voice, data and video with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key building on campus. Ongoing upgrades to Inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including 1Gbps connection to Cox communication, 2Gbps connection to Cogent. Connections to Internet2 and Cogent are over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project named ELITE (Eastern Lightwave Internetworking Technology Enterprise) provides access not only to the commodity Internet but gateways to other national networks to include the Energy Science Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on OpenStack architecture to support several types of computational research workloads. The Wahab cluster consists of 158 compute nodes and 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node). Each compute node has 384 GB of RAM, and 18 accelerator compute nodes, each of which is equipped with four NVIDIA’s V100 graphical processing units (GPU). A 100Gbps EDR Infiniband high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with a dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The Wahab cluster also contains 45 TB of storage blocks that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Wahab cluster as of March, 2020:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Login&lt;br /&gt;
| 2&lt;br /&gt;
| 20&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 158&lt;br /&gt;
| 40&lt;br /&gt;
| none&lt;br /&gt;
| 384 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 18&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia V100 GPU&lt;br /&gt;
| 128 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
The Turing cluster has been the primary shared high-performance computing (HPC) cluster on campus since 2013. Turing is based on 64-bit Intel Xeon microprocessor architectures, and each node has up to 32 cores and at least 128 GB of memory. As of May 2017, Turing cluster has 6300 cores available to researchers for computational needs. Researchers have access to several high memory nodes (512–768 GB), nodes with NVIDIA graphical processing units (GPUs) of varying generation: K40, K80, P100, as well as the state-of-the-art V100 (Volta). There is a total of 33 GPUs in Turing. FDR-based (56 Gbps) Infiniband fabric provides the high-speed network for the cluster’s inter-communication. Turing cluster has redundant head nodes for increased reliability and a dedicated login node. EMC’s Isilon storage (1.3 PB total capacity) serves as the home and long-term mass research data storage. In addition, a 180 TB Lustre high-speed parallel filesystem is provided for scratch space. The University supports research computing with parallel computing using MPI and OpenMP protocols on compute cluster architectures with shared memory and symmetric multiprocessing compute nodes. Researchers have access to high memory nodes and nodes with Xeon Phi co-processors. FDR based infiniband infrastructure provides the communication path for the cluster inter communication. Mass storage is integrated in this cluster at 20Gbps and scratch space is accessible over FDR based infiniband infrastructure. Turing cluster has redundant head nodes and login nodes for increased reliability. The Turing cluster is primarily used by faculty members who are conducting research using software such as Ansys, Comsol, R, Mathematics, and Matlab among other software’s. Integrated in Turing cluster is a number of GPU nodes with NVidia Tesla M2090 GPU’s, to help facilitate computation that requires graphic processors.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
Below are the specifications of the Turing cluster as of March, 2020:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://docs.hpc.odu.edu/#hardware-resources here]&lt;br /&gt;
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. The storage environment provides home and mass storage for the HPC environment with a total capacity of over 1 PB. The storage platform provides scale out NAS storage that delivers increased performance for file based data applications and workflows. In addition EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus data enterprise shares. EMC’s VNX platform is a tiered, scalable storage environment for file, block and object storage. This storage solution is deployed in the enterprise data center with the associated controller, disk, network and power redundancy.&lt;br /&gt;
&lt;br /&gt;
Data Center HVAC Solution consist of has three (3) 30 Ton HVAC units deployed in an N+1 redundancy deployment. Racks of server and computational hardware are arranged in alternating hot and cold aisle configuration. The HVAC units are deployed on a raised floor arrangement with perforated tiles in the cold aisles which allows for superior environmental controls and maintaining the data center at the desired and optimal temperature levels. Optimized performance of chillers in data center is critical for environment control and for this reason the main data center has a 45 Ton chiller installed to facilitate ventilation and air conditioning. In addition ITS has an additional fourteen (14) above the rack cooling units complement the main HVAC units. These above the rack cooling units do not take any additional rack space in the data center. These units are designed to draw hot air from the computational equipment racks and hot aisles and then dissipate conditioned cold air down the cold aisle. This solution provides for an energy efficient cooling solution with zero floor space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated for big data analytics. Each of the six data nodes is equipped with 1.3 TB solid-state disk (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark are available for research uses on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University network communication infrastructure is designed using the state of the art networking and switching hardware platforms. The campus infrastructure backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies.  Our 100% wireless coverage allows users to take advantage of A, B, G, and N secure connections from either of our buildings. VPN access is available for remote users to access services on our network.  All departmental telephone communication is provided via VoIP Avaya phone systems.&lt;br /&gt;
&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers.  On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) where users can have access to our software remotely, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research/faculty projects. For Unix and Linux users we support Solaris, Ubuntu and Red Hat Enterprise Linux (RHEL) distributions.  Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux and Unix based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered to users in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed Infiniband QDR interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512Gb of memory. &lt;br /&gt;
Storage for a majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each datacenter. This design allows for replication of storage across the network to ensure high availability.  These systems provide a combined total of 100Tb of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both windows and *nix environments.  We also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite for up to two years on tape. &lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions to include the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). E-LITE infrastructure is designed in a physical ring around the Hampton Roads area providing protected 10Gbps connectivity between the member sites and other national networks like MARIA, Energy Science Network and Internet2. E-LITE network and connectivity to MARIA is being redesigned to upgrade the local DWDM ring to be 100Gbps capable as well as establishment of 100Gbps connection to Internet2.  Old Dominion University recently completed a major upgrade on the core server distribution to integrate Nexus 7000 hardware. Nexus 7000 platforms are Cisco Systems next generation switching platforms that are designed for the data center to provide virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps density, higher performance and reliability. These platforms also provide capability to integrate 100Gbps interfaces in the data center infrastructure as needed. Cisco Nexus platforms include 7000 and 5000 series that provide a higher bandwidth and reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
Data Center UPS Batteries for HPC and Network infrastructure consist of a (uninterrupted power supply) UPS system rated at 375KWatts. This unit allows for considerable capacity needed for switching between commercial electrical power and dedicated building power generator. The current UPS system utilizes high performance insulated gate bipolar transistors to provide for larger power capabilities, high speed switching and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery.  Course delivery technologies include video streaming and video conferencing.  The Campus Network Virtualization is an initiative that was implemented in the campus environment  to make sure we enable our network infrastructure to provide the following features: (i) Communities of interests (Virtual Networks). This will allow us to create network based user communities that have the same functions and communication/application needs. This is being accomplished by using MPLS technology. (ii) High performance and redundant security infrastructure. Security is an important part of any network infrastructure. We have to ensure that users are able to perform all their needed tasks on the network while at the same time have the best possible security protection in place.  (iii) Flexibility to provision independent network infrastructures. This feature allows us to create smaller independent logical networks on the existing physical infrastructure. This is of great benefit in a research institution of ODU’s stature and will allow us to work with researchers to provide them the needed resources for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5250</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5250"/>
				<updated>2020-03-10T19:01:21Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* HPC Wahab Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners like Harvard Medical School and NASA Langley Research Center in US and Center for Computational Engineering Science (CCES) RWTH Aachen University in Germany and Neurosurgical Department of Huashan Hospital Shanghai Medical College, Fudan University in China. This research is mainly funded from government agencies like ational Science Foundation, National Institute of Health and NASA and philanthropic organizations like John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation, featuring a Dual Six Core Intel Xeon Processor X5690 (total of 12 cores). The processor has a clock speed of 3.46GHz, a cache of 12MB, and QPI speed of 6.4GT/s. The processor also supports up to 96GB of DDR3 ECC SDRAM (6X8GB) at 1333MHz. The system is augmented by the nVIDIA Quadro 6000. With 6 GB of memory, this device provides stunning graphic capabilities. The PIs also have command of an IBM server funded from a NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburg Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access of its equipment to interested high school students, the CRTC provides them with an exceptional introduction to computer science work and research, without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multi-media.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in alternating hot and cold aisle configuration. The data center facility is on raised flooring with minimized obstruction to help facilitate optimized air flow.  Some of the monitoring software’s being utilized in the operations center are Solarwinds ORION network performance software and Nagios Infrastructure monitoring application. The IT Operations center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network infrastructure switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is currently comprised of a meshed Ten Gigabit Ethernet backbone supporting voice, data and video with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key building on campus. Ongoing upgrades to Inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including 1Gbps connection to Cox communication, 2Gbps connection to Cogent. Connections to Internet2 and Cogent are over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project named ELITE (Eastern Lightwave Internetworking Technology Enterprise) provides access not only to the commodity Internet but gateways to other national networks to include the Energy Science Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on OpenStack architecture to support several types of computational research workloads. The Wahab cluster consists of 158 compute nodes and 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node). Each compute node has 384 GB of RAM, and 18 accelerator compute nodes, each of which is equipped with four NVIDIA’s V100 graphical processing units (GPU). A 100Gbps EDR Infiniband high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with a dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The Wahab cluster also contains 45 TB of storage blocks that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Wahab cluster as of March, 2020:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Login&lt;br /&gt;
| 2&lt;br /&gt;
| 20&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 158&lt;br /&gt;
| 40&lt;br /&gt;
| none&lt;br /&gt;
| 384 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 18&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia V100 GPU&lt;br /&gt;
| 128 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
The Turing cluster has been the primary shared high-performance computing (HPC) cluster on campus since 2013. Turing is based on 64-bit Intel Xeon microprocessor architectures, and each node has up to 32 cores and at least 128 GB of memory. As of May 2017, Turing cluster has 6300 cores available to researchers for computational needs. Researchers have access to several high memory nodes (512–768 GB), nodes with NVIDIA graphical processing units (GPUs) of varying generation: K40, K80, P100, as well as the state-of-the-art V100 (Volta). There is a total of 33 GPUs in Turing. FDR-based (56 Gbps) Infiniband fabric provides the high-speed network for the cluster’s inter-communication. Turing cluster has redundant head nodes for increased reliability and a dedicated login node. EMC’s Isilon storage (1.3 PB total capacity) serves as the home and long-term mass research data storage. In addition, a 180 TB Lustre high-speed parallel filesystem is provided for scratch space. The University supports research computing with parallel computing using MPI and OpenMP protocols on compute cluster architectures with shared memory and symmetric multiprocessing compute nodes. Researchers have access to high memory nodes and nodes with Xeon Phi co-processors. FDR based infiniband infrastructure provides the communication path for the cluster inter communication. Mass storage is integrated in this cluster at 20Gbps and scratch space is accessible over FDR based infiniband infrastructure. Turing cluster has redundant head nodes and login nodes for increased reliability. The Turing cluster is primarily used by faculty members who are conducting research using software such as Ansys, Comsol, R, Mathematics, and Matlab among other software’s. Integrated in Turing cluster is a number of GPU nodes with NVidia Tesla M2090 GPU’s, to help facilitate computation that requires graphic processors.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
Below are the specifications of the Turing cluster as of March, 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. The storage environment provides home and mass storage for the HPC environment with a total capacity of over 1 PB. The storage platform provides scale out NAS storage that delivers increased performance for file based data applications and workflows. In addition EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus data enterprise shares. EMC’s VNX platform is a tiered, scalable storage environment for file, block and object storage. This storage solution is deployed in the enterprise data center with the associated controller, disk, network and power redundancy.&lt;br /&gt;
&lt;br /&gt;
Data Center HVAC Solution consist of has three (3) 30 Ton HVAC units deployed in an N+1 redundancy deployment. Racks of server and computational hardware are arranged in alternating hot and cold aisle configuration. The HVAC units are deployed on a raised floor arrangement with perforated tiles in the cold aisles which allows for superior environmental controls and maintaining the data center at the desired and optimal temperature levels. Optimized performance of chillers in data center is critical for environment control and for this reason the main data center has a 45 Ton chiller installed to facilitate ventilation and air conditioning. In addition ITS has an additional fourteen (14) above the rack cooling units complement the main HVAC units. These above the rack cooling units do not take any additional rack space in the data center. These units are designed to draw hot air from the computational equipment racks and hot aisles and then dissipate conditioned cold air down the cold aisle. This solution provides for an energy efficient cooling solution with zero floor space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated for big data analytics. Each of the six data nodes is equipped with 1.3 TB solid-state disk (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark are available for research uses on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University network communication infrastructure is designed using the state of the art networking and switching hardware platforms. The campus infrastructure backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies.  Our 100% wireless coverage allows users to take advantage of A, B, G, and N secure connections from either of our buildings. VPN access is available for remote users to access services on our network.  All departmental telephone communication is provided via VoIP Avaya phone systems.&lt;br /&gt;
&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers. On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) that gives users remote access to our software, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research and faculty projects. For Unix and Linux users we support Solaris, Ubuntu, and Red Hat Enterprise Linux (RHEL) distributions. Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux- and Unix-based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered to users in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed InfiniBand QDR interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512 GB of memory.&lt;br /&gt;
Storage for the majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each data center; this design allows for replication of storage across the network to ensure high availability. These systems provide a combined total of 100 TB of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both Windows and *nix environments, and we also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite on tape for up to two years.&lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure: Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to 100Gbps capability and to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation data center switching platform, providing virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps port density, and improved performance and reliability, with the capability to integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus 7000 and 5000 series platforms provide a high-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
The data center uninterruptible power supply (UPS) system for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed for switching between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors (IGBTs) to provide greater power capability, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure: the virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs, accomplished using MPLS technology; (ii) a high-performance, redundant security infrastructure, ensuring that users can perform all their needed tasks on the network while the best possible security protection is in place; and (iii) the flexibility to provision independent network infrastructures, creating smaller independent logical networks on the existing physical infrastructure, which is of great benefit at a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5229</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5229"/>
				<updated>2020-03-06T18:45:48Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners such as Harvard Medical School and NASA Langley Research Center in the US, the Center for Computational Engineering Science (CCES) at RWTH Aachen University in Germany, and the Neurosurgical Department of Huashan Hospital, Shanghai Medical College, Fudan University in China. This research is mainly funded by government agencies such as the National Science Foundation, the National Institutes of Health, and NASA, and by philanthropic organizations such as the John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation featuring dual six-core Intel Xeon X5690 processors (12 cores total) with a clock speed of 3.46GHz, a 12MB cache, and a QPI speed of 6.4GT/s. The system supports up to 96GB of DDR3 ECC SDRAM (6x8GB) at 1333MHz and is augmented by an NVIDIA Quadro 6000, which provides 6 GB of memory and excellent graphics capabilities. The PIs also have command of an IBM server funded by an NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburgh Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting interested high school students controlled access to its equipment, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration, on raised flooring with minimized obstruction to help facilitate optimized air flow. Monitoring software utilized in the operations center includes the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations Center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is currently composed of a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture, built to support several types of computational research workloads. The Wahab cluster consists of 158 compute nodes totaling 6320 computational cores, using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node), and each compute node has 384 GB of RAM. The cluster also includes 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The Wahab cluster also contains 45 TB of storage blocks that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
The Turing cluster has been the primary shared high-performance computing (HPC) cluster on campus since 2013. Turing is based on 64-bit Intel Xeon microprocessor architectures; each node has up to 32 cores and at least 128 GB of memory. As of May 2017, the Turing cluster has 6300 cores available to researchers for computational needs. Researchers have access to several high-memory nodes (512–768 GB), nodes with Xeon Phi co-processors, and nodes with NVIDIA graphics processing units (GPUs) of varying generations: K40, K80, P100, and the state-of-the-art V100 (Volta), for a total of 33 GPUs; a number of GPU nodes with NVIDIA Tesla M2090 GPUs are also integrated to facilitate computation that requires graphics processors. An FDR-based (56 Gbps) InfiniBand fabric provides the high-speed network for inter-node communication within the cluster, scratch space is accessible over the same fabric, and mass storage is integrated into the cluster at 20Gbps. The Turing cluster has redundant head nodes and a dedicated login node for increased reliability. EMC’s Isilon storage (1.3 PB total capacity) serves as the home and long-term mass research data storage, and a 180 TB Lustre high-speed parallel filesystem is provided for scratch space. The University supports research computing with parallel programming using the MPI and OpenMP models on compute cluster architectures with shared-memory and symmetric multiprocessing compute nodes. The Turing cluster is primarily used by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and MATLAB, among others.&lt;br /&gt;
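&lt;br /&gt;
To make the MPI support above concrete, here is a minimal distributed-memory “hello” sketch in Python; the mpi4py binding and the mpirun launch command are assumptions for illustration, since the text names MPI itself rather than any particular binding.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal mpi4py sketch (assumed binding); launch with e.g. mpirun -n 64 python hello_mpi.py&lt;br /&gt;
from mpi4py import MPI&lt;br /&gt;
&lt;br /&gt;
comm = MPI.COMM_WORLD   # communicator spanning every rank in the job&lt;br /&gt;
rank = comm.Get_rank()  # this process's id within the communicator&lt;br /&gt;
size = comm.Get_size()  # total number of ranks&lt;br /&gt;
&lt;br /&gt;
# Gather each rank's hostname on rank 0 to show the job spanning compute nodes.&lt;br /&gt;
names = comm.gather(MPI.Get_processor_name(), root=0)&lt;br /&gt;
if rank == 0:&lt;br /&gt;
    print('%d ranks across nodes: %s' % (size, sorted(set(names))))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;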
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundancy configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units are deployed on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and maintains the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, fourteen (14) above-the-rack cooling units complement the main HVAC units; these take no additional rack space, drawing hot air from the equipment racks and hot aisles and dissipating conditioned cold air down the cold aisles. This provides an energy-efficient cooling solution with zero floor-space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state disk (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University’s network communication infrastructure is designed using state-of-the-art networking and switching hardware platforms. The campus infrastructure backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies. Our 100% wireless coverage allows users to take advantage of secure 802.11a/b/g/n connections from either of our buildings. VPN access is available for remote users to access services on our network. All departmental telephone communication is provided via Avaya VoIP phone systems.&lt;br /&gt;
&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers. On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) that gives users remote access to our software, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research and faculty projects. For Unix and Linux users we support Solaris, Ubuntu, and Red Hat Enterprise Linux (RHEL) distributions. Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux- and Unix-based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered to users in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed InfiniBand QDR interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512 GB of memory.&lt;br /&gt;
Storage for the majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each data center; this design allows for replication of storage across the network to ensure high availability. These systems provide a combined total of 100 TB of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both Windows and *nix environments, and we also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite on tape for up to two years.&lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure: Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to 100Gbps capability and to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation data center switching platform, providing virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps port density, and improved performance and reliability, with the capability to integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus 7000 and 5000 series platforms provide a high-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
The data center uninterruptible power supply (UPS) system for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed for switching between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors (IGBTs) to provide greater power capability, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure: the virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs, accomplished using MPLS technology; (ii) a high-performance, redundant security infrastructure, ensuring that users can perform all their needed tasks on the network while the best possible security protection is in place; and (iii) the flexibility to provision independent network infrastructures, creating smaller independent logical networks on the existing physical infrastructure, which is of great benefit at a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5226</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5226"/>
				<updated>2020-03-06T16:59:53Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* HPC Turing Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners such as Harvard Medical School and NASA Langley Research Center in the US, the Center for Computational Engineering Science (CCES) at RWTH Aachen University in Germany, and the Neurosurgical Department of Huashan Hospital, Shanghai Medical College, Fudan University in China. This research is mainly funded by government agencies such as the National Science Foundation, the National Institutes of Health, and NASA, and by philanthropic organizations such as the John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation featuring dual six-core Intel Xeon X5690 processors (12 cores total) with a clock speed of 3.46GHz, a 12MB cache, and a QPI speed of 6.4GT/s. The system supports up to 96GB of DDR3 ECC SDRAM (6x8GB) at 1333MHz and is augmented by an NVIDIA Quadro 6000, which provides 6 GB of memory and excellent graphics capabilities. The PIs also have command of an IBM server funded by an NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburgh Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting interested high school students controlled access to its equipment, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration, on raised flooring with minimized obstruction to help facilitate optimized air flow. Monitoring software utilized in the operations center includes the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations Center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is currently composed of a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
The Turing cluster has been the primary shared high-performance computing (HPC) cluster on campus since 2013. Turing is based on 64-bit Intel Xeon microprocessor architectures; each node has up to 32 cores and at least 128 GB of memory. As of May 2017, the Turing cluster has 6300 cores available to researchers for computational needs. Researchers have access to several high-memory nodes (512–768 GB), nodes with Xeon Phi co-processors, and nodes with NVIDIA graphics processing units (GPUs) of varying generations: K40, K80, P100, and the state-of-the-art V100 (Volta), for a total of 33 GPUs; a number of GPU nodes with NVIDIA Tesla M2090 GPUs are also integrated to facilitate computation that requires graphics processors. An FDR-based (56 Gbps) InfiniBand fabric provides the high-speed network for inter-node communication within the cluster, scratch space is accessible over the same fabric, and mass storage is integrated into the cluster at 20Gbps. The Turing cluster has redundant head nodes and a dedicated login node for increased reliability. EMC’s Isilon storage (1.3 PB total capacity) serves as the home and long-term mass research data storage, and a 180 TB Lustre high-speed parallel filesystem is provided for scratch space. The University supports research computing with parallel programming using the MPI and OpenMP models on compute cluster architectures with shared-memory and symmetric multiprocessing compute nodes. The Turing cluster is primarily used by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and MATLAB, among others.&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundancy configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units are deployed on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and maintains the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, fourteen (14) above-the-rack cooling units complement the main HVAC units; these take no additional rack space, drawing hot air from the equipment racks and hot aisles and dissipating conditioned cold air down the cold aisles. This provides an energy-efficient cooling solution with zero floor-space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture, built to support several types of computational research workloads. The Wahab cluster consists of 158 compute nodes totaling 6320 computational cores, using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node), and each compute node has 384 GB of RAM. The cluster also includes 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The Wahab cluster also contains 45 TB of storage blocks that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state disk (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University’s network communication infrastructure is designed using state-of-the-art networking and switching hardware platforms. The campus infrastructure backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies. Our 100% wireless coverage allows users to take advantage of secure 802.11a/b/g/n connections from either of our buildings. VPN access is available for remote users to access services on our network. All departmental telephone communication is provided via Avaya VoIP phone systems.&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers. On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) that gives users remote access to our software, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research and faculty projects. For Unix and Linux users we support Solaris, Ubuntu, and Red Hat Enterprise Linux (RHEL) distributions. Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux- and Unix-based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered to users in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed InfiniBand QDR interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512 GB of memory.&lt;br /&gt;
Storage for the majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each data center; this design allows for replication of storage across the network to ensure high availability. These systems provide a combined total of 100 TB of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both Windows and *nix environments, and we also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite on tape for up to two years.&lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure: Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to 100Gbps capability and to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation data center switching platform, providing virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps port density, and improved performance and reliability, with the capability to integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus 7000 and 5000 series platforms provide a high-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
The data center uninterruptible power supply (UPS) system for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed for switching between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors (IGBTs) to provide greater power capability, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure: the virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs, accomplished using MPLS technology; (ii) a high-performance, redundant security infrastructure, ensuring that users can perform all their needed tasks on the network while the best possible security protection is in place; and (iii) the flexibility to provision independent network infrastructures, creating smaller independent logical networks on the existing physical infrastructure, which is of great benefit at a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5225</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5225"/>
				<updated>2020-03-06T16:56:20Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Network Communication Infrastructure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners such as Harvard Medical School and NASA Langley Research Center in the US, the Center for Computational Engineering Science (CCES) at RWTH Aachen University in Germany, and the Neurosurgical Department of Huashan Hospital, Shanghai Medical College, Fudan University in China. This research is mainly funded by government agencies such as the National Science Foundation, the National Institutes of Health, and NASA, and by philanthropic organizations such as the John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation featuring dual six-core Intel Xeon X5690 processors (12 cores total) with a clock speed of 3.46GHz, a 12MB cache, and a QPI speed of 6.4GT/s. The system supports up to 96GB of DDR3 ECC SDRAM (6x8GB) at 1333MHz and is augmented by an NVIDIA Quadro 6000, which provides 6 GB of memory and excellent graphics capabilities. The PIs also have command of an IBM server funded by an NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburgh Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting interested high school students controlled access to its equipment, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration, on raised flooring with minimized obstruction to help facilitate optimized air flow. Monitoring software utilized in the operations center includes the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations Center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network switching and routing devices, enterprise storage, and high performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network is currently composed of a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
The University supports research computing with parallel programming using the MPI and OpenMP models on compute cluster architectures with shared-memory and symmetric multiprocessing compute nodes. Old Dominion University has a high-performance computing cluster named Turing. Researchers have access to high-memory nodes and nodes with Xeon Phi co-processors. FDR-based InfiniBand infrastructure provides the communication path for inter-node communication within the cluster. Mass storage is integrated into this cluster at 20Gbps, and scratch space is accessible over the FDR-based InfiniBand infrastructure. The Turing cluster has redundant head nodes and login nodes for increased reliability. The Turing cluster is primarily used by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and MATLAB, among others. Integrated into the Turing cluster are a number of GPU nodes with NVIDIA Tesla M2090 GPUs to facilitate computation that requires graphics processors.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundancy configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units are deployed on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and maintains the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, fourteen (14) above-the-rack cooling units complement the main HVAC units; these take no additional rack space, drawing hot air from the equipment racks and hot aisles and dissipating conditioned cold air down the cold aisles. This provides an energy-efficient cooling solution with zero floor-space requirements.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture that supports several types of computational research workloads. The cluster consists of 158 compute nodes with 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node) and 384 GB of RAM per node, plus 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The cluster also contains 45 TB of block storage that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
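As a small, hedged illustration, a job placed on one of the accelerator nodes can enumerate the four V100 devices before launching GPU work; the sketch below assumes PyTorch is installed, which is illustrative rather than a documented Wahab environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hedged sketch: enumerate the GPUs visible on an accelerator node.&lt;br /&gt;
# Assumes PyTorch; any CUDA-aware library would do.&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
n = torch.cuda.device_count()  # expected: 4 on a Wahab accelerator node&lt;br /&gt;
print(n, "GPU(s) visible")&lt;br /&gt;
for i in range(n):&lt;br /&gt;
    print(i, torch.cuda.get_device_name(i))  # device model string&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;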
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state drive (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
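As a brief, hedged illustration of that analytics stack, the sketch below runs a classic word count with PySpark; the HDFS paths are hypothetical placeholders rather than real locations on the cluster.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hedged sketch: word count with PySpark (paths are hypothetical).&lt;br /&gt;
from pyspark.sql import SparkSession&lt;br /&gt;
&lt;br /&gt;
spark = SparkSession.builder.appName("WordCount").getOrCreate()&lt;br /&gt;
lines = spark.sparkContext.textFile("hdfs:///user/example/input.txt")&lt;br /&gt;
counts = (lines.flatMap(lambda line: line.split())   # tokenize&lt;br /&gt;
               .map(lambda word: (word, 1))          # pair each word with 1&lt;br /&gt;
               .reduceByKey(lambda a, b: a + b))     # sum counts per word&lt;br /&gt;
counts.saveAsTextFile("hdfs:///user/example/output")&lt;br /&gt;
spark.stop()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;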
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University’s network communication infrastructure is built on state-of-the-art networking and switching hardware platforms. The campus backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies. Our 100% wireless coverage allows users to take advantage of secure 802.11 a/b/g/n connections in either of our buildings. VPN access is available for remote users to access services on our network. All departmental telephone communication is provided via VoIP Avaya phone systems.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
We offer a heterogeneous computing environment that primarily consists of Windows and *nix based workstations and servers. On the Windows domain, users are offered network logons, Exchange email, terminal services via our Virtual Computing Lab (VCLab) where users can access our software remotely, roaming profiles, MSSQL database access for research, and Hyper-V virtualization for research/faculty projects. For Unix and Linux users we support Solaris, Ubuntu, and Red Hat Enterprise Linux (RHEL) distributions. Our *nix services include DNS, NIS, Unix mail, access to personal MySQL databases, class and research project Oracle databases, and both Linux and Unix based FAST aliases for secure shell sessions. In addition to the standard *nix services, High Performance Computing resources are offered in the form of multiple Intel-based Rocks HPC clusters, which boast high-speed InfiniBand QDR interconnects and top out at a combined 3.5 TFLOPS. A Beowulf cluster is available for use in distributed computing classes. We also offer several GPU servers utilizing the newest CUDA paradigms, and a virtual Symmetric Multi-Processor (SMP) server with 64 physical cores and 512 GB of memory.&lt;br /&gt;
Storage for the majority of these resources is redundantly provided via two EMC Celerra NAS devices, one located in each datacenter. This design allows for replication of storage across the network to ensure high availability. These systems provide a combined total of 100 TB of storage and dozens of file systems. Users are provided with CIFS and NFS mounts for use in both Windows and *nix environments. We also use these devices to provide iSCSI targets for our VM environments. Research users are allocated storage based on project needs and availability. All user data is backed up multiple times per day as snapshots on our EMC devices and maintained onsite for up to two years on tape.&lt;br /&gt;
Additional services provided include, but are not limited to, user web pages, on-demand virtual machines through our Cloud services, copy and print services, audio-visual broadcasting and recording, teleconferencing, and 24/7 end user helpdesk and support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure. Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to be 100Gbps capable, as well as to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation switching platform, designed for the data center to provide virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps density, and higher performance and reliability. These platforms can also integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus platforms include the 7000 and 5000 series, which provide a higher-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
The data center uninterruptible power supply (UPS) for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed to switch between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors to provide larger power capacity, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) Communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs; this is accomplished using MPLS technology. (ii) A high-performance, redundant security infrastructure; security is an important part of any network infrastructure, and we must ensure that users can perform all their tasks on the network while the best possible security protection is in place. (iii) Flexibility to provision independent network infrastructures; this allows us to create smaller, independent logical networks on the existing physical infrastructure, which is of great benefit to a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5224</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5224"/>
				<updated>2020-03-06T16:54:23Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Network Communication Infrastructure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners like Harvard Medical School and NASA Langley Research Center in US and Center for Computational Engineering Science (CCES) RWTH Aachen University in Germany and Neurosurgical Department of Huashan Hospital Shanghai Medical College, Fudan University in China. This research is mainly funded from government agencies like ational Science Foundation, National Institute of Health and NASA and philanthropic organizations like John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation, featuring a Dual Six Core Intel Xeon Processor X5690 (total of 12 cores). The processor has a clock speed of 3.46GHz, a cache of 12MB, and QPI speed of 6.4GT/s. The processor also supports up to 96GB of DDR3 ECC SDRAM (6X8GB) at 1333MHz. The system is augmented by the nVIDIA Quadro 6000. With 6 GB of memory, this device provides stunning graphic capabilities. The PIs also have command of an IBM server funded from a NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburg Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access to its equipment for interested high school students, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration. The facility is on raised flooring with minimized obstruction to help facilitate optimized air flow. Among the monitoring software utilized in the operations center are the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network infrastructure switching and routing devices, enterprise storage, and high-performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network currently comprises a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
The University supports research computing with parallel programming models such as MPI and OpenMP on cluster architectures with shared-memory, symmetric multiprocessing compute nodes. Old Dominion University’s high-performance computing cluster is named Turing. Researchers have access to high-memory nodes and to nodes with Xeon Phi co-processors. An FDR-based InfiniBand fabric provides the communication path for inter-node traffic. Mass storage is integrated into the cluster at 20Gbps, and scratch space is accessible over the same FDR InfiniBand fabric. The Turing cluster has redundant head nodes and login nodes for increased reliability. It is used primarily by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and Matlab, among other packages. The cluster also integrates a number of GPU nodes with NVIDIA Tesla GPUs (K40 through V100; see the table below) to facilitate computation that requires graphics processors.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundant configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units sit on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and keeps the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, ITS has fourteen (14) above-rack cooling units that complement the main HVAC units. These units take no additional rack space; they draw hot air from the equipment racks and hot aisles and dissipate conditioned cold air down the cold aisles, providing energy-efficient cooling with zero floor-space requirements.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture that supports several types of computational research workloads. The cluster consists of 158 compute nodes with 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node) and 384 GB of RAM per node, plus 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The cluster also contains 45 TB of block storage that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state drive (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Odunetwork.png |frameless|right|350px]]&lt;br /&gt;
Old Dominion University’s network communication infrastructure is built on state-of-the-art networking and switching hardware platforms. The campus backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms. Various VLANs are used to segment the network, isolate traffic, and enforce security policies. Our 100% wireless coverage allows users to take advantage of secure 802.11 a/b/g/n connections in either of our buildings. VPN access is available for remote users to access services on our network. All departmental telephone communication is provided via VoIP Avaya phone systems.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: E-LITE.png |thumb|left|350px|'''Diagram of E-LITE regional network serving the Southeastern Virginia universities and research institutions''']]&lt;br /&gt;
DWDM E-LITE Infrastructure. Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to be 100Gbps capable, as well as to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation switching platform, designed for the data center to provide virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps density, and higher performance and reliability. These platforms can also integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus platforms include the 7000 and 5000 series, which provide a higher-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
The data center uninterruptible power supply (UPS) for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed to switch between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors to provide larger power capacity, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) Communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs; this is accomplished using MPLS technology. (ii) A high-performance, redundant security infrastructure; security is an important part of any network infrastructure, and we must ensure that users can perform all their tasks on the network while the best possible security protection is in place. (iii) Flexibility to provision independent network infrastructures; this allows us to create smaller, independent logical networks on the existing physical infrastructure, which is of great benefit to a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5223</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5223"/>
				<updated>2020-03-06T16:48:01Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* HPC Turing Cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners like Harvard Medical School and NASA Langley Research Center in US and Center for Computational Engineering Science (CCES) RWTH Aachen University in Germany and Neurosurgical Department of Huashan Hospital Shanghai Medical College, Fudan University in China. This research is mainly funded from government agencies like ational Science Foundation, National Institute of Health and NASA and philanthropic organizations like John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation, featuring a Dual Six Core Intel Xeon Processor X5690 (total of 12 cores). The processor has a clock speed of 3.46GHz, a cache of 12MB, and QPI speed of 6.4GT/s. The processor also supports up to 96GB of DDR3 ECC SDRAM (6X8GB) at 1333MHz. The system is augmented by the nVIDIA Quadro 6000. With 6 GB of memory, this device provides stunning graphic capabilities. The PIs also have command of an IBM server funded from a NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburg Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access to its equipment for interested high school students, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration. The facility is on raised flooring with minimized obstruction to help facilitate optimized air flow. Among the monitoring software utilized in the operations center are the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network infrastructure switching and routing devices, enterprise storage, and high-performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network currently comprises a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
The University supports research computing with parallel programming models such as MPI and OpenMP on cluster architectures with shared-memory, symmetric multiprocessing compute nodes. Old Dominion University’s high-performance computing cluster is named Turing. Researchers have access to high-memory nodes and to nodes with Xeon Phi co-processors. An FDR-based InfiniBand fabric provides the communication path for inter-node traffic. Mass storage is integrated into the cluster at 20Gbps, and scratch space is accessible over the same FDR InfiniBand fabric. The Turing cluster has redundant head nodes and login nodes for increased reliability. It is used primarily by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and Matlab, among other packages. The cluster also integrates a number of GPU nodes with NVIDIA Tesla GPUs (K40 through V100; see the table below) to facilitate computation that requires graphics processors.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Turing.png|thumb|left|250px| '''Turing Cluster''']]&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundant configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units sit on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and keeps the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, ITS has fourteen (14) above-rack cooling units that complement the main HVAC units. These units take no additional rack space; they draw hot air from the equipment racks and hot aisles and dissipate conditioned cold air down the cold aisles, providing energy-efficient cooling with zero floor-space requirements.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture that supports several types of computational research workloads. The cluster consists of 158 compute nodes with 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node) and 384 GB of RAM per node, plus 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The cluster also contains 45 TB of block storage that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state drive (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
Old Dominion University’s network communication infrastructure is built on state-of-the-art networking and switching hardware platforms. The campus backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms.&lt;br /&gt;
&lt;br /&gt;
DWDM E-LITE Infrastructure. Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to be 100Gbps capable, as well as to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation switching platform, designed for the data center to provide virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps density, and higher performance and reliability. These platforms can also integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus platforms include the 7000 and 5000 series, which provide a higher-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
The data center uninterruptible power supply (UPS) for the HPC and network infrastructure is rated at 375 kW. This unit provides the considerable capacity needed to switch between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors to provide larger power capacity, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery. Course delivery technologies include video streaming and video conferencing. The Campus Network Virtualization initiative was implemented to enable the network infrastructure to provide the following features: (i) Communities of interest (virtual networks), which allow us to create network-based user communities that share the same functions and communication/application needs; this is accomplished using MPLS technology. (ii) A high-performance, redundant security infrastructure; security is an important part of any network infrastructure, and we must ensure that users can perform all their tasks on the network while the best possible security protection is in place. (iii) Flexibility to provision independent network infrastructures; this allows us to create smaller, independent logical networks on the existing physical infrastructure, which is of great benefit to a research institution of ODU’s stature and allows us to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:Turing.png&amp;diff=5222</id>
		<title>File:Turing.png</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:Turing.png&amp;diff=5222"/>
				<updated>2020-03-06T16:42:03Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:Odunetwork.png&amp;diff=5221</id>
		<title>File:Odunetwork.png</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:Odunetwork.png&amp;diff=5221"/>
				<updated>2020-03-06T16:41:48Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:E-LITE.png&amp;diff=5220</id>
		<title>File:E-LITE.png</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:E-LITE.png&amp;diff=5220"/>
				<updated>2020-03-06T16:41:29Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5219</id>
		<title>Facilities</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Facilities&amp;diff=5219"/>
				<updated>2020-03-06T16:39:49Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Network Communication Infrastructure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
One of the most notable research programs associated with Old Dominion University is the Center for Real Time Computing (CRTC). The purpose of the CRTC is to pioneer advancements in real-time and large-scale physics-based modeling and simulation computing utilizing quality mesh generation. Since its inception, the CRTC has explored the use of real-time computational technology in Image Guided Therapy, storm surge and beach erosion modeling, and Computational Fluid Dynamics simulations for complex Aerospace applications. The center and its distinguished personnel accomplish their objectives through rigorous theoretical research (which often involves the use of powerful computers) and dynamic collaboration with partners like Harvard Medical School and NASA Langley Research Center in US and Center for Computational Engineering Science (CCES) RWTH Aachen University in Germany and Neurosurgical Department of Huashan Hospital Shanghai Medical College, Fudan University in China. This research is mainly funded from government agencies like ational Science Foundation, National Institute of Health and NASA and philanthropic organizations like John Simon Guggenheim Foundation.&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Nikos_Office.png|frameless|left]]&lt;br /&gt;
The CRTC is currently under the direction of Professor Nikos Chrisochoides, who has been the Richard T. Cheng Chair Professor at Old Dominion University since 2010. Dr. Chrisochoides’ work in parallel mesh generation and deformable registration for image guided neurosurgery has received international recognition. The algorithms and software tools that he and his colleagues developed are used in clinical studies around the world with more than 40,000 downloads. He has also received significant funding through the National Science Foundation for his innovative research in parallel mesh generation.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== CRTC Lab &amp;amp; Resources ==&lt;br /&gt;
To further its mission of fostering research, Old Dominion University has provided the Center for Real Time Computing with lab space in its Engineering and Computational Sciences Building. The CRTC utilizes the lab space and the Department of Computer Science’s other resources to conduct its studies. The principal investigators (PIs) who lead research projects at the CRTC Lab have access to a Dell Precision T7500 workstation, featuring a Dual Six Core Intel Xeon Processor X5690 (total of 12 cores). The processor has a clock speed of 3.46GHz, a cache of 12MB, and QPI speed of 6.4GT/s. The processor also supports up to 96GB of DDR3 ECC SDRAM (6X8GB) at 1333MHz. The system is augmented by the nVIDIA Quadro 6000. With 6 GB of memory, this device provides stunning graphic capabilities. The PIs also have command of an IBM server funded from a NSF MRI award (CNS-0521381), as well as access to the Blacklight system at the Pittsburg Supercomputing Center.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Community Outreach ==&lt;br /&gt;
[[File:Lab_Space_Outreach.png|frameless|left]]&lt;br /&gt;
In addition to research, the lab space and resources of the CRTC may be used for outreach and education activities. Students from the local high school community have visited the lab to view its state-of-the-art equipment and discuss computer science topics with distinguished experts. To continue its outreach to the community, the CRTC will soon make its IBM server available to high school students wishing to gain experience in high performance computing. By granting controlled access to its equipment for interested high school students, the CRTC provides them with an exceptional introduction to computer science work and research without jeopardizing other research projects. The CRTC also possesses a 3D visualization system, which it uses in its outreach/education programs. This high-quality, interactive system is especially motivating and exciting to high school students stimulated by multimedia.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information Technology Services (ITS) ==&lt;br /&gt;
Old Dominion University maintains a robust, broadband, high-speed communications network and High Performance Computing (HPC) infrastructure. The facility utilizes 3200 square feet of conditioned space to accommodate server, core networking, storage, and computational resources. The data center has 100+ racks deployed in an alternating hot- and cold-aisle configuration. The facility is on raised flooring with minimized obstruction to help facilitate optimized air flow. Among the monitoring software utilized in the operations center are the SolarWinds Orion network performance software and the Nagios infrastructure monitoring application. The IT Operations center monitors the stability and availability of about 400 production servers (physical and virtual), close to 400 network infrastructure switching and routing devices, enterprise storage, and high-performance computing resources.&lt;br /&gt;
&lt;br /&gt;
The network currently comprises a meshed Ten Gigabit Ethernet backbone supporting voice, data, and video, with switched 10Gbps connections to the servers and 1Gbps connections to the desktops. Inter-building network connectivity consists of redundant fiber optic data channels yielding high-speed Gigabit connectivity, with Ten-Gigabit connectivity for key buildings on campus. Ongoing upgrades to inter-building networks will result in data speeds of 10Gbps for the entire campus. ITS currently provides a variety of Internet services, including a 1Gbps connection to Cox Communications and a 2Gbps connection to Cogent. Connections to Internet2 and Cogent are carried over a private DWDM regional optical network infrastructure, with redundant 10Gbps links to MARIA aggregation nodes in Ashburn, Virginia and Atlanta, Georgia. The DWDM infrastructure project, named ELITE (Eastern Lightwave Internetworking Technology Enterprise), provides access not only to the commodity Internet but also gateways to other national networks, including the Energy Sciences Network and Internet2.&lt;br /&gt;
&lt;br /&gt;
== HPC Turing Cluster ==&lt;br /&gt;
The University supports research computing with parallel programming models such as MPI and OpenMP on cluster architectures with shared-memory, symmetric multiprocessing compute nodes. Old Dominion University’s high-performance computing cluster is named Turing. Researchers have access to high-memory nodes and to nodes with Xeon Phi co-processors. An FDR-based InfiniBand fabric provides the communication path for inter-node traffic. Mass storage is integrated into the cluster at 20Gbps, and scratch space is accessible over the same FDR InfiniBand fabric. The Turing cluster has redundant head nodes and login nodes for increased reliability. It is used primarily by faculty members conducting research with software such as Ansys, Comsol, R, Mathematica, and Matlab, among other packages. The cluster also integrates a number of GPU nodes with NVIDIA Tesla GPUs (K40 through V100; see the table below) to facilitate computation that requires graphics processors.&lt;br /&gt;
&lt;br /&gt;
Below are the specifications of the Turing cluster as of March 2019:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| &amp;lt;b&amp;gt;Node Type&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Total Available Nodes&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Maximum Slots (Cores) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Additional Resource&amp;lt;/b&amp;gt;&lt;br /&gt;
| &amp;lt;b&amp;gt;Memory (RAM) per node&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Standard Compute&lt;br /&gt;
| 220&lt;br /&gt;
| 16 - 32&lt;br /&gt;
| none&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| GPU&lt;br /&gt;
| 21&lt;br /&gt;
| 28 - 32&lt;br /&gt;
| Nvidia K40, K80, P100, V100 GPU(s)&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| Xeon Phi&lt;br /&gt;
| 10&lt;br /&gt;
| 20&lt;br /&gt;
| Intel 2250 Phi MICs&lt;br /&gt;
| 128 GB&lt;br /&gt;
|-&lt;br /&gt;
| High Memory&lt;br /&gt;
| 7&lt;br /&gt;
| 32&lt;br /&gt;
| none&lt;br /&gt;
| 512 GB - 768 GB&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
More details can be found [https://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing here]&lt;br /&gt;
&lt;br /&gt;
EMC’s Isilon storage is the primary storage platform for the high-performance computing environment. It provides home and mass storage for the HPC environment with a total capacity of over 1 PB, delivering scale-out NAS storage with increased performance for file-based data applications and workflows. In addition, EMC’s VNX storage platform is the primary storage environment on campus for virtualized server environments as well as campus enterprise data shares. The VNX platform is a tiered, scalable storage environment for file, block, and object storage, deployed in the enterprise data center with the associated controller, disk, network, and power redundancy.&lt;br /&gt;
&lt;br /&gt;
The data center HVAC solution consists of three (3) 30-ton HVAC units deployed in an N+1 redundant configuration. Racks of server and computational hardware are arranged in an alternating hot- and cold-aisle configuration. The HVAC units sit on a raised floor with perforated tiles in the cold aisles, which allows for superior environmental control and keeps the data center at the desired temperature. Optimized chiller performance is critical for environmental control, so the main data center has a 45-ton chiller installed to support ventilation and air conditioning. In addition, ITS has fourteen (14) above-rack cooling units that complement the main HVAC units. These units take no additional rack space; they draw hot air from the equipment racks and hot aisles and dissipate conditioned cold air down the cold aisles, providing energy-efficient cooling with zero floor-space requirements.&lt;br /&gt;
&lt;br /&gt;
== HPC Wahab Cluster ==&lt;br /&gt;
Wahab is a reconfigurable HPC cluster based on an OpenStack architecture that supports several types of computational research workloads. The cluster consists of 158 compute nodes with 6320 computational cores using Intel’s “Skylake” Xeon Gold 6148 processors (20 CPU cores per chip; 40 cores per node) and 384 GB of RAM per node, plus 18 accelerator nodes, each equipped with four NVIDIA V100 graphics processing units (GPUs). A 100Gbps EDR InfiniBand high-speed interconnect provides low-latency, high-bandwidth communication between nodes to support massively parallel computing as well as data-intensive workloads. Wahab is equipped with dedicated high-performance Lustre scratch storage (350 TB usable capacity) and is connected to the 1.2 PB university-wide home/long-term research data networked filesystem. The cluster also contains 45 TB of block storage that can be provisioned for user data in the virtual environment. The relative proportion of these resources can be adjusted depending on the needs of the research community.&lt;br /&gt;
&lt;br /&gt;
== HPC Hadoop Cluster ==&lt;br /&gt;
The six-node Hadoop cluster is dedicated to big data analytics. Each of the six data nodes is equipped with a 1.3 TB solid-state drive (SSD) and 128 GB of RAM for maximum processing performance. Software such as Hadoop MapReduce and Spark is available for research use on this cluster.&lt;br /&gt;
&lt;br /&gt;
== Network Communication Infrastructure ==&lt;br /&gt;
Old Dominion University’s network communication infrastructure is built on state-of-the-art networking and switching hardware platforms. The campus backbone is fully redundant and capable of 10Gbps data rates between all distribution modules. The data center infrastructure is designed to operate at 40Gbps data rates between the server and storage platforms.&lt;br /&gt;
&lt;br /&gt;
DWDM E-LITE Infrastructure. Old Dominion University manages the Eastern Lightwave Integrated Technology Enterprise (E-LITE) infrastructure, which provides 10Gbps connectivity to a number of regional institutions, including the College of William &amp;amp; Mary, Jefferson Lab, Old Dominion University, and the Virginia Modeling, Analysis, and Simulation Center (VMASC). The E-LITE infrastructure is designed as a physical ring around the Hampton Roads area, providing protected 10Gbps connectivity between the member sites and other national networks such as MARIA, the Energy Sciences Network, and Internet2. The E-LITE network and its connectivity to MARIA are being redesigned to upgrade the local DWDM ring to be 100Gbps capable, as well as to establish a 100Gbps connection to Internet2. Old Dominion University recently completed a major upgrade of the core server distribution to integrate Nexus 7000 hardware. The Nexus 7000 is Cisco Systems’ next-generation switching platform, designed for the data center to provide virtualized hardware, in-service upgrades, higher 10Gbps and 40Gbps density, and higher performance and reliability. These platforms can also integrate 100Gbps interfaces into the data center infrastructure as needed. The Cisco Nexus platforms include the 7000 and 5000 series, which provide a higher-bandwidth, reliable backbone infrastructure for critical services using technologies such as virtual port channels.&lt;br /&gt;
&lt;br /&gt;
Data Center UPS. Power for the HPC and network infrastructure is backed by an uninterruptible power supply (UPS) system rated at 375 kW. This unit provides the considerable capacity needed for switching between commercial electrical power and the dedicated building power generator. The current UPS system utilizes high-performance insulated-gate bipolar transistors to provide larger power capability, high-speed switching, and lower control power consumption.&lt;br /&gt;
&lt;br /&gt;
Campus Virtualized Network Infrastructure. The virtualized network infrastructure supports the unique requirements of University business operations, research, scholarly activities, and online course delivery; course delivery technologies include video streaming and video conferencing. Campus Network Virtualization is an initiative implemented in the campus environment to ensure the network infrastructure provides the following features: (i) Communities of interest (virtual networks). This allows the creation of network-based user communities that share the same functions and communication/application needs, and is accomplished using MPLS technology. (ii) A high-performance and redundant security infrastructure. Security is an important part of any network infrastructure; users must be able to perform all their needed tasks on the network while having the best possible security protection in place. (iii) Flexibility to provision independent network infrastructures. This feature allows smaller independent logical networks to be created on the existing physical infrastructure, which is of great benefit at a research institution of ODU’s stature and allows ITS to work with researchers to provide the resources needed for their success.&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=4930</id>
		<title>Tasks</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=4930"/>
				<updated>2020-02-24T19:27:13Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Adaptivity and Smoothness to CBC3D [In Progress] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Paraview plugin [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;] ==&lt;br /&gt;
Paraview Python [https://www.paraview.org/Wiki/ParaView/Plugin_HowTo Plugin]&lt;br /&gt;
# CNF project: visualize the meshes in 3D&lt;br /&gt;
# Take the weight of each tetrahedron and plot it&lt;br /&gt;
# The user should have the ability to choose the axis along which weights are collected (see the sketch below)&lt;br /&gt;
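The following plain-Python sketch illustrates items 2 and 3; the centroid/weight arrays and the function name are hypothetical, and a real version would live inside a ParaView Python plugin.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Sketch: accumulate per-tetrahedron weights into bins along a chosen axis&lt;br /&gt;
# and plot the profile. centroids (N x 3) and weights (N) are assumed to&lt;br /&gt;
# be extracted from the mesh; the axis argument mimics the user choice.&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
&lt;br /&gt;
def plot_weight_profile(centroids, weights, axis=0, nbins=50):&lt;br /&gt;
    coords = centroids[:, axis]                  # coordinate along the axis&lt;br /&gt;
    edges = np.linspace(coords.min(), coords.max(), nbins + 1)&lt;br /&gt;
    idx = np.clip(np.digitize(coords, edges) - 1, 0, nbins - 1)&lt;br /&gt;
    profile = np.bincount(idx, weights=weights, minlength=nbins)&lt;br /&gt;
    centers = 0.5 * (edges[:-1] + edges[1:])     # bin midpoints for plotting&lt;br /&gt;
    plt.bar(centers, profile, width=edges[1] - edges[0])&lt;br /&gt;
    plt.xlabel('position along axis %d' % axis)&lt;br /&gt;
    plt.ylabel('accumulated tetrahedron weight')&lt;br /&gt;
    plt.show()&lt;br /&gt;
&lt;br /&gt;
# Example with random stand-in data in place of a real mesh:&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
plot_weight_profile(rng.random((1000, 3)), rng.random(1000), axis=2)&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;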
&lt;br /&gt;
== Adaptivity and Smoothness to CBC3D [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;] ==&lt;br /&gt;
* CGAL&lt;br /&gt;
* NYU method&lt;br /&gt;
&lt;br /&gt;
=== CGAL ===&lt;br /&gt;
# For smoothing&lt;br /&gt;
# Researching CGAL's approach for [https://doc.cgal.org/latest/Mesh_3/index.html#fig__mesh3protectionimage3D smoothing]  &lt;br /&gt;
&lt;br /&gt;
=== NYU Method ===&lt;br /&gt;
# Read Fotis' thesis&lt;br /&gt;
# Read NYU papers: https://arxiv.org/pdf/1908.03581.pdf&lt;br /&gt;
# Review the NYU code on github&lt;br /&gt;
# Study the NYU papers and code to understand how to augment Fotis' code&lt;br /&gt;
&lt;br /&gt;
=== Gradation ===&lt;br /&gt;
'''Method 1'''&lt;br /&gt;
# First, I need to modify CBC3D.cxx/main.cxx to be able to read an additional segmented image or a Euclidean Distance Transform (EDT); the ReadImage function can be used for this (see Utilities_CBC3D.h). A sketch of computing such an EDT follows this list.&lt;br /&gt;
# Then I need to add the input EDT to the list of EDTs used for mesh refinement.&lt;br /&gt;
# The list of EDTs from the standard input segmented image is computed in the function ComputeMaurerDistanceImagesAndInterpolators (itkBCCMeshFilter.cxx).&lt;br /&gt;
# The method should then refine those additional artificial boundaries to achieve element gradation in regions other than the standard boundaries/interfaces (e.g., regions of high and low concentration).&lt;br /&gt;
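The sketch below shows how the extra EDT from step 1 could be produced. CBC3D itself is C++/ITK code (see ComputeMaurerDistanceImagesAndInterpolators above), but for brevity this sketch uses SimpleITK’s Python binding of the same Maurer distance filter; the file names are hypothetical.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Sketch: build the extra EDT input for Method 1 from a segmented image.&lt;br /&gt;
import SimpleITK as sitk&lt;br /&gt;
&lt;br /&gt;
seg = sitk.ReadImage('extra_region.nii.gz')         # hypothetical labels&lt;br /&gt;
mask = sitk.BinaryThreshold(seg, lowerThreshold=1)  # nonzero labels become 1&lt;br /&gt;
edt = sitk.SignedMaurerDistanceMap(mask,&lt;br /&gt;
                                   insideIsPositive=False,&lt;br /&gt;
                                   squaredDistance=False,&lt;br /&gt;
                                   useImageSpacing=True)&lt;br /&gt;
sitk.WriteImage(edt, 'extra_region_edt.nrrd')       # feed this to CBC3D&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;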
&lt;br /&gt;
'''Method 2'''&lt;br /&gt;
# Adapt the sizing function to work with CBC3D&lt;br /&gt;
&lt;br /&gt;
== CBC3D Docker [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;] ==&lt;br /&gt;
# Exploring the option of creating a Docker image for CBC3D&lt;br /&gt;
# Comparing with PODM which has a different set of parameters&lt;br /&gt;
# Using PODM as a template - making necessary changes to CBC3D&lt;br /&gt;
# Utilizing ParaView to visualize the meshes&lt;br /&gt;
&lt;br /&gt;
== Slicer Extension -- [&amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;DONE&amp;lt;/span&amp;gt;] == &lt;br /&gt;
# Get the standalone Slicer code from GitHub&lt;br /&gt;
# Test the CBC3D Slicer extension with old code&lt;br /&gt;
# Test the CBC3D Slicer extension with new code&lt;br /&gt;
# Place the new code on Box&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4034</id>
		<title>Events</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4034"/>
				<updated>2019-10-08T22:59:02Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Professor Dimitrios S. Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= CS Seminars =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
'''Date:''' October 10, 2019&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries &lt;br /&gt;
&lt;br /&gt;
'''Abstract:'''&lt;br /&gt;
To address scaling limitations of future hardware, computing systems have turned to parallelism and distribution. Most software and applications in science and engineering, as well as the applications we use in our daily lives, are actually distributed programs, with some components running on edge or IoT devices to serve clients, data collectors, or actuators, and other components running in data centers to provide data analytics, simulation, or visualization. The disaggregation of computing services raises new challenges for computing systems. This talk explores two of these challenges and discusses some solutions. The first challenge is that many applications require low latency and more analytical power at or near the data sources. We demonstrate a system called TAPAS, a neural network architecture search engine. TAPAS uses aggressive compression, approximation, and learning techniques to entirely avoid the simulation process when exploring neural network architectures. It further uses learning methods to adapt immediately to unseen data sets. TAPAS runs on a single low-power GPU and can train over 1,000 networks per second, which makes it suitable for training machine learning models on edge devices with limited resources. The second challenge is scaling the performance and energy efficiency of the hardware used in the Cloud and at the Edge beyond current boundaries. We explore a co-designed compiler/OS/firmware system for characterizing hardware operating boundaries and safely operating hardware outside those boundaries to gain performance, at the expense of additional yet infrequent errors and mitigating actions. We demonstrate that many applications are inherently resilient to extended hardware boundaries and indeed benefit substantially from boundary relaxation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|frameless|left|200px]]&lt;br /&gt;
'''Bio''': Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors. He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prof. Anastasia Angelopoulou ==&lt;br /&gt;
'''Date:''' TBD, 2020&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Serious Games and Simulations: applications, challenges and future directions &lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' Serious games and simulations have been steadily increasing in use across many sectors of society, particularly education, defense, science, and health. Their main purpose is usually to educate or train the users. In this talk, I will present my work in the area of serious games and simulations for training. I will also discuss challenges in serious games development and future directions to overcome them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Anastasia.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Anastasia Angelopoulou is an Assistant Professor in Simulation and Gaming at the TSYS School of Computer Science at Columbus State University (CSU). Prior to joining CSU, she was a postdoctoral associate at the Institute for Simulation and Training at the University of Central Florida (2016-2018), where she had obtained her MSc and PhD in Modeling and Simulation (2015). Her research interests lie in the areas of modeling and simulation and serious games, and their applications in domains such as healthcare, military, energy, and education, among others. Her research work has been partially supported by the Office of Naval Research and the National Science Foundation (NSF). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dr. Daniele Panozzo ==&lt;br /&gt;
'''Date:''' TBD, 2020 &lt;br /&gt;
&lt;br /&gt;
'''Title:''' Black-Box Analysis&lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' The numerical solution of partial differential equations (PDE) is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering. Ideally, a PDE solver should be a “black box”: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. &lt;br /&gt;
&lt;br /&gt;
I will present an integrated pipeline, considering meshing and element design as a single challenge, that makes the tradeoff between mesh quality and element complexity/cost local, instead of making an a priori decision for the whole pipeline. I will demonstrate that tackling the two problems jointly offers many advantages, and that a fully black-box meshing and analysis solution is already possible for heat transfer and elasticity problems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Daniele.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Dr. Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). Daniele earned his PhD in Computer Science from the University of Genova (2012), and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). He received the EUROGRAPHICS Young Researcher Award in 2015 and the NSF CAREER Award in 2017. Daniele is leading the development of libigl (https://github.com/libigl/libigl), an award-winning (EUROGRAPHICS Symposium of Geometry Processing Software Award, 2015) open-source geometry processing library that supports academic and industrial research and practice. Daniele is chairing the Graphics Replicability Stamp (http://www.replicabilitystamp.org), an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results. His research interests are in digital fabrication, geometry processing, architectural geometry, and discrete differential geometry.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Visitors =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
School of Electronics, Electrical Engineering and Computer Science  &lt;br /&gt;
&lt;br /&gt;
Queen's University Belfast, UK&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov 12, 2015, 10:30 AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': New Approaches to Energy-Efficient and Resilient HPC  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:d.nikolopoulos@qub.ac.uk d.nikolopoulos@qub.ac.uk]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cs.qub.ac.uk/~D.Nikolopoulos/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
This talk explores new and unconventional directions for improving the energy efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators, non-volatile main memory, workload auto-scaling, and structured approximate computing. Our research in these areas has achieved significant gains in energy efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Nikolopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios S. Nikolopoulos is Professor in the School of EEECS, at Queen's University of Belfast and a Royal Society Wolfson Research Fellow. He holds the Chair in High Performance and Distributed Computing and directs the HPDC Research Cluster, a team of 20 academic and research staff. His research explores scalable computing systems for data-driven applications and new computing paradigms at the limits of performance, power and reliability. Dimitrios received the NSF CAREER Award, the DOE CAREER Award, and the IBM Faculty Award during an eight-year tenure in the United States. He has also been awarded the SFI-DEL Investigator Award, a Marie Curie Fellowship, a HiPEAC Fellowship, and seven Best Paper Awards including some from the leading IEEE and ACM conferences in HPC, such as SC, PPoPP, and IPDPS. His research has produced over 150 top-tier outputs and has received extensive (£10.6m as PI/£39.5m as CoI) and highly competitive research funding from the NSF, DOE, EPSRC, SFI, DEL, Royal Academy of Engineering, Royal Society, European Commission and private sector. Dimitrios is a Fellow of the British Computer Society, Senior Member of the IEEE and Senior Member of the ACM. He earned a PhD (2000) in Computer Engineering and Informatics from the University of Patras. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Lieber, Baruch Barry ==&lt;br /&gt;
Department of Neurosurgery  &lt;br /&gt;
&lt;br /&gt;
Stony Brook University&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov. 6, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Flow Diverters to Cure Cerebral Aneurysms: A Case Study - From Concept to Clinical Use &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:Baruch.Lieber@stonybrookmedicine.edu Baruch.Lieber@stonybrookmedicine.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://neuro.stonybrookmedicine.edu/about/faculty/lieber &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery), navigated through a catheter into the cerebral arteries, and delivered into the artery carrying the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar-response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices and prediction of their long-term curative effect, which usually occurs over a period of months, can be significantly helped by computer modeling and simulations of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: LieberB.jpg|frameless|left|125px |'''Professor Baruch (Barry) Lieber''' ]]&lt;br /&gt;
'''Bio:''' Barry Lieber attended Tel-Aviv University and received a B.Sc. in Mechanical Engineering in 1979. He then attended Georgia Tech and received an M.Sc. in 1982 and a Ph.D. in 1985, both in Aerospace Engineering, working with Dr. Don P. Giddens. Barry Lieber was a Postdoctoral Fellow from 1985-1987 in the Department of Mechanical Engineering at Georgia Tech and also completed a summer fellowship at Imperial College London in 1986. In 1987 he joined the faculty of the Department of Mechanical and Aerospace Engineering at the State University of New York at Buffalo as Assistant Professor. In 1993 he was promoted to the rank of Associate Professor with tenure, and in 1998 he was promoted to full Professor. In 1994 he became Research Professor of Neurosurgery, and in 1997 he became the Director of the Center for Bioengineering at the State University of New York at Buffalo, both positions he held until his departure from the university in 2001 to join the University of Miami as Professor in the Department of Biomedical Engineering with a joint appointment in the Department of Radiology. In 2010 he joined the State University of New York at Stony Brook at the rank of Professor in the Department of Neurosurgery, where he also serves as program faculty in the Department of Biomedical Engineering. Barry Lieber was elected a Fellow of the American Institute for Medical and Biological Engineering in 1999. He was elected a Fellow of the American Society of Mechanical Engineers in 2005 and served as the Chairman of its Division of Bioengineering in 2009. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Marek Behr ==&lt;br /&gt;
Chair for Computational Analysis of Technical Systems&lt;br /&gt;
&lt;br /&gt;
RWTH Aachen University&lt;br /&gt;
&lt;br /&gt;
Schinkelstr. 2, 52062 Aachen, Germany&lt;br /&gt;
&lt;br /&gt;
'''When''': July 31, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Enhanced Surface Definition in Moving-Boundary Flow Simulation&lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:behr@cats.rwth-aachen.de behr@cats.rwth-aachen.de]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cats.rwth-aachen.de&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.&lt;br /&gt;
&lt;br /&gt;
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.&lt;br /&gt;
&lt;br /&gt;
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free surface have already been demonstrated [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.&lt;br /&gt;
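&lt;br /&gt;
''As a hedged aside (not material from the talk): the core of a NURBS boundary representation is that a curve point is a rational, weighted combination of control points, which is what allows conic sections such as circular arcs to be represented exactly. The minimal Python sketch below evaluates one NURBS curve point; the quarter-circle example data are a standard textbook choice, not from [2].''&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree p.
    if p == 0:
        return 1.0 if knots[i] &lt;= u &lt; knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] &gt; knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] &gt; knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, degree, knots, ctrl, weights):
    # Rational combination: weighted basis functions divided by the weight sum.
    N = np.array([bspline_basis(i, degree, u, knots) for i in range(len(ctrl))])
    wN = N * weights
    return (wN @ ctrl) / wN.sum()

# A quarter circle represented exactly as a quadratic NURBS arc.
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
point = nurbs_point(0.5, 2, knots, ctrl, weights)
print(point, np.linalg.norm(point))  # norm is 1.0: the point lies on the unit circle
&lt;/pre&gt;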
&lt;br /&gt;
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either the method of Rothe or the method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of the advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].&lt;br /&gt;
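&lt;br /&gt;
''A minimal sketch of one ingredient of such meshes, under simplifying assumptions of our own (one flat space-time slab, no local temporal refinement): each triangle of a 2D mesh is extruded into a prism, and each prism is split into three tetrahedra; sorting the vertex indices makes the diagonal on every quadrilateral face agree between neighboring prisms. This conveys the flavor of simplex space-time meshes, not the actual algorithm of [4].''&lt;br /&gt;
&lt;pre&gt;
def prism_to_tets(tri, offset):
    # Split the space-time prism above one triangle into 3 tetrahedra.
    # tri:    triangle vertex indices at the bottom time level
    # offset: index shift to each vertex's copy at the top time level
    # Sorting fixes the diagonal on every quad face consistently, so the
    # tetrahedra of neighboring prisms match without hanging faces.
    b0, b1, b2 = sorted(tri)
    t0, t1, t2 = b0 + offset, b1 + offset, b2 + offset
    return [(b0, b1, b2, t2), (b0, b1, t2, t1), (b0, t1, t2, t0)]

# Two triangles sharing edge (1, 2), extruded over one slab of a 4-vertex mesh.
triangles = [(0, 1, 2), (2, 1, 3)]
tets = [t for tri in triangles for t in prism_to_tets(tri, offset=4)]
print(len(tets), "space-time tetrahedra:", tets)
&lt;/pre&gt;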
&lt;br /&gt;
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, &amp;quot;Numerical shape optimization as an approach to extrusion die design&amp;quot;, Finite Elements in Analysis and Design, 61, 35–43 (2012).&lt;br /&gt;
&lt;br /&gt;
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, &amp;quot;NURBS-Enhanced Finite Element Method (NEFEM)&amp;quot;, International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).&lt;br /&gt;
&lt;br /&gt;
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, &amp;quot;On the Usage of NURBS as Interface Representation in Free-Surface Flows&amp;quot;, International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).&lt;br /&gt;
&lt;br /&gt;
[4] M. Behr, &amp;quot;Simplex Space-Time Meshes in Finite Element Simulations&amp;quot;, International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_Marek_Behr1.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio:''' Prof. Marek Behr obtained his Bachelor's and Ph.D. degrees in Aerospace Engineering and Mechanics from the University of Minnesota in Minneapolis. After faculty appointments at the University of Minnesota and at Rice University in Houston, he was appointed in 2004 as a Professor of Mechanical Engineering and holder of the Chair for Computational Analysis of Technical Systems at RWTH Aachen University. Since 2006, he has been the Scientific Director of the Aachen Institute for Advanced Study in Computational Engineering Science, which focuses on inverse problems in engineering and is funded in the framework of the Excellence Initiative in Germany. Behr advises or has advised over 40 doctoral students, and has published over 65 refereed journal articles and a similar number of conference publications and book chapters. Behr is one of the main developers of the stabilized space-time finite element formulation for deforming-domain flow problems, which has recently been extended to unstructured space-time meshes. He is a long-time expert on parallel computation, large-scale flow simulations, and numerical methods for non-Newtonian fluids. He is a member of several advisory and editorial boards of international journals, a member of the executive council of the German Association for Computational Mechanics, and a member of the general council of the International Association for Computational Mechanics. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Christos Antonopoulos ==&lt;br /&gt;
Department of Electrical and Computer Engineering, &lt;br /&gt;
&lt;br /&gt;
University of Thessaly, Greece&lt;br /&gt;
&lt;br /&gt;
'''When''': June 25, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:cda@inf.uth.gr cda@inf.uth.gr]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.inf.uth.gr/~cda&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
A major obstacle on the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns. Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations, or, in a more aggressive setting, even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner; otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning and re-engineering all layers of the system stack, from programming models down to hardware. We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by Greece and the EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.&lt;br /&gt;
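&lt;br /&gt;
''A toy, hypothetical illustration of the disciplined-approximation idea (not code from Centaurus or SCoRPiO): skipping a controlled fraction of a computation trades a small, measurable loss in result quality for a roughly proportional reduction in work.''&lt;br /&gt;
&lt;pre&gt;
import random

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_approx(xs, fraction):
    # Loop-perforation flavor: visit only a random fraction of the data.
    # Work drops roughly in proportion to fraction; the quality loss is
    # statistical and can be monitored against an application QoS target.
    k = max(1, int(len(xs) * fraction))
    sample = random.sample(xs, k)
    return sum(sample) / k

random.seed(0)
data = [random.gauss(100.0, 15.0) for _ in range(100_000)]
exact = mean_exact(data)
for frac in (1.0, 0.1, 0.01):
    approx = mean_approx(data, frac)
    print(f"fraction={frac:5.2f}  relative error={abs(approx - exact) / exact:.2e}")
&lt;/pre&gt;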
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Antonopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Christos D. Antonopoulos is Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001) and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers and has been awarded two best-paper awards. He has been actively involved in several research projects in both the EU and the USA. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Yongjie Jessica Zhang ==&lt;br /&gt;
Associate Professor in Mechanical Engineering with a Courtesy Appointment in Biomedical Engineering&lt;br /&gt;
&lt;br /&gt;
Carnegie Mellon University&lt;br /&gt;
&lt;br /&gt;
'''When''': April 24, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Image-Based Mesh Generation and Volumetric Spline Modeling for Isogeometric Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:jessicaz@andrew.cmu.edu jessicaz@andrew.cmu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.andrew.cmu.edu/~jessicaz&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
With finite element methods and scanning technologies seeing increased use in many research areas, there is an emerging need for high-fidelity geometric modeling and mesh generation of spatially realistic domains. In this talk, I will highlight our research in three areas: image-based mesh generation for complicated domains; trivariate spline modeling for isogeometric analysis; and biomedical, materials science and engineering applications. I will first present advances and challenges in image-based geometric modeling and meshing along with a comprehensive computational framework, which integrates image processing, geometric modeling, mesh generation and quality improvement with multi-scale analysis at molecular, cellular, tissue and organ scales. Different from other existing methods, the presented framework supports five unique features: high-fidelity meshing for heterogeneous domains with topology ambiguity resolved; multiscale geometric modeling for biomolecular complexes; automatic all-hexahedral mesh generation with sharp feature preservation; robust quality improvement for non-manifold meshes; and guaranteed-quality meshing. These unique capabilities enable accurate, stable, and efficient mechanics calculations for many biomedical, materials science and engineering applications.&lt;br /&gt;
&lt;br /&gt;
In the second part of this talk, I will show our latest research on volumetric spline parameterization, which contributes directly to the integration of design and analysis, the root idea of isogeometric analysis. For objects of arbitrary topology, we first build a polycube whose topology is equivalent to the input geometry, which then serves as the parametric domain for the subsequent trivariate T-spline construction. Boolean operations and geometry skeletons can also be used to preserve surface features. A parametric mapping is then used to build a one-to-one correspondence between the input geometry and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement, and applying templates or a truncation mechanism coupled with subdivision to handle extraordinary nodes. The parametric mapping method has been further extended to conformal solid T-spline construction with the input surface parameterization preserved and trimming curves handled.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Jessica.jpg|frameless|left|120px]]&lt;br /&gt;
'''Bio''': Yongjie Jessica Zhang is an Associate Professor in Mechanical Engineering at Carnegie Mellon University with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics, both from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, the finite element method, isogeometric analysis and their application in computational biomedicine, materials science and engineering. She has co-authored over 100 publications in peer-reviewed journals and conference proceedings. She is the recipient of the Presidential Early Career Award for Scientists and Engineers, the NSF CAREER Award, the Office of Naval Research Young Investigator Award, the USACM Gallagher Young Investigator Award, the Clarence H. Adamson Career Faculty Fellowship in Mechanical Engineering, the George Tallman Ladd Research Award, and the Donald L. &amp;amp; Rhonda Struminger Faculty Fellowship. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor David Marcum ==&lt;br /&gt;
Billie J. Ball Professor and Chief Scientist&lt;br /&gt;
&lt;br /&gt;
Center for Advanced Vehicular Systems, Mechanical Engineering Department, Mississippi State University&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': AFLR Unstructured Meshing and CFD Modeling and Simulation Research Activities at the Center for Advanced Vehicular Systems &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:marcum@cavs.msstate.edu marcum@cavs.msstate.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.me.msstate.edu/faculty/marcum/marcum.html &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Mesh generation and associated geometry preparation are critical aspects of any computational field simulation (CFS) process. In particular, the mesh used can have a significant impact on the accuracy, effectiveness, and efficiency of the CFS solver. Further, typical users spend a considerable portion of their overall effort on mesh and geometry issues. All of this is particularly critical for CFD applications. AFLR is an unstructured mesh generator designed with a focus on addressing these issues for complex geometries. It is widely used, readily available to government and academic users, and has been very successful on relevant problems. AFLR volume and surface meshing is also directly incorporated in several systems, including: DoD CREATE-MG Capstone, Lockheed Martin/DoD ACAD, Boeing MADCAP, MSU SolidMesh, and Altair HyperMesh. In this talk we will provide an overview of this technology, future directions, and plans for multi-tasking/parallel operation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Marcum David.jpg|frameless|left|125px]]&lt;br /&gt;
'''Bio''': Dr. Marcum is Professor of Mechanical Engineering at Mississippi State University (MSU) and Chief Scientist for CFD within the Center for Advanced Vehicular Systems (CAVS). He has 30 years of experience in development of CFD and unstructured grid technology. Before joining MSU in 1991, Dr. Marcum was a Scientist and Senior Engineer at McDonnell Douglas Research Laboratories and Boeing Commercial Airplane Company. He received his Ph.D. from Purdue University in 1985. Prior to that he was a Senior Engineer from 1978 through 1983 at TRW Ross Gear Division. At MSU, Dr. Marcum served as Thrust Leader and Director of the NSF ERC for Computational Field Simulation. As Director, he led the transition from graduated NSF ERC to its current form as the High Performance Computing Collaboratory (HPC²). Dr. Marcum also served as Deputy Director and Director of the SimCenter (an HPC² member center and currently merged within CAVS). He is currently Chief Scientist for CFD within CAVS (also an HPC² member center). As Chief Scientist for CFD, he is directly involved in the research activities of a team of multi-disciplinary researchers working on CFD related projects for DoD, DoE, NASA, NSF, and industry. Computational tools produced by these projects at MSU within the ERC, SimCenter and CAVS, and in particular Dr. Marcum’s AFLR unstructured mesh generator, are in use throughout aerospace, automotive and DoD organizations. Dr. Marcum is widely recognized for his contributions to unstructured grid technology and is currently Honorary Professor at University of Wales, Swansea, UK and a previous Invited Professor at INRIA, Paris-Rocquencourt, France. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Kyle Gallivan ==&lt;br /&gt;
Professor, Department of Mathematics&lt;br /&gt;
&lt;br /&gt;
Florida State University&lt;br /&gt;
&lt;br /&gt;
'''When''': January 23, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Riemannian Optimization for Elastic Shape Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:kgallivan@fsu.edu kgallivan@fsu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.math.fsu.edu/~gallivan/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
In elastic shape analysis, the representation of a shape is invariant to translation, scaling, rotation and reparameterization, and important problems (such as computing the distance and geodesic between two curves, the mean of a set of curves, and other statistical analyses) require finding a best rotation and re-parameterization between two curves. In this talk, I focus on this key subproblem and study different tools for optimization on the joint group of rotations and re-parameterizations. I will give a brief account of a novel Riemannian optimization approach and evaluate its use in computing the distance between two curves and in classification using two public data sets. Experiments show significant advantages in computational time and reliability in performance compared to the current state-of-the-art method.&lt;br /&gt;
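&lt;br /&gt;
''For intuition about the subproblem, the rotation factor alone has a classical closed-form solution via the singular value decomposition (orthogonal Procrustes/Kabsch); the generic sketch below shows only that piece, under the assumption of known point correspondences, while the talk's contribution concerns the harder joint optimization over rotations and re-parameterizations.''&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def best_rotation(src, dst):
    # Rotation R minimizing sum ||R s_i - d_i||^2 (orthogonal Procrustes / Kabsch).
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])  # keep det(R) = +1, i.e. no reflection
    return Vt.T @ D @ U.T

# Recover a known 2D rotation from noisy corresponding curve samples.
rng = np.random.default_rng(0)
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = rng.standard_normal((200, 2))
dst = src @ R_true.T + 0.01 * rng.standard_normal((200, 2))
print(np.allclose(best_rotation(src, dst), R_true, atol=1e-2))  # True
&lt;/pre&gt;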
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_marcum.jpg|frameless|left|250px]]&lt;br /&gt;
'''Bio''': Kyle A. Gallivan is a Professor of Mathematics at Florida State University. Gallivan received the Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1983 under the direction of C. W. Gear. He worked on special-purpose signal processors in the Government Aerospace Systems Division of Harris Corporation. He was a research computer scientist at the Center for Supercomputing Research and Development at the University of Illinois from 1985 until 1993, when he moved to the Department of Electrical and Computer Engineering. From 1997 to 2008 he was a member of the Department of Computer Science at Florida State University (FSU) and a member of the Computational Science and Engineering group, becoming a full Professor in 1999. He became a Professor of Mathematics at FSU in 2008 and was selected the 2012 Pascal Professor for the Faculty of Sciences of the University of Leiden in the Netherlands. He has been a Visiting Professor at the Catholic University of Louvain in Belgium multiple times through a long-standing research collaboration with colleagues there.&lt;br /&gt;
&lt;br /&gt;
Over the years Gallivan's research has included: design and analysis of high-performance numerical algorithms, pioneering work on block algorithms for numerical linear algebra, performance analysis of the experimental Cedar system, restructuring compilers, model reduction of large-scale differential equations, and high-performance codes for applications such as ocean circulation, circuit simulation and the codes in the Perfect Benchmark Suite. Gallivan's current main research concerns optimization algorithms on Riemannian manifolds and their use in applications such as shape analysis, statistics, and signal/image processing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Suzanne M. Shontz ==&lt;br /&gt;
Department of Electrical Engineering and Computer Science&lt;br /&gt;
&lt;br /&gt;
University of Kansas&lt;br /&gt;
&lt;br /&gt;
'''When''': November 7, 2014, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': A parallel log barrier for mesh quality improvement and updating &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:shontz@ku.edu shontz@ku.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://people.eecs.ku.edu/~shontz/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
There are numerous applications in science, engineering, and medicine which require high-quality meshes, i.e., discretizations of the geometry, for use in computational simulations.  For example, meshes have been used to enable accurate prediction of the performance, reliability, and safety of solid propellant rockets.  The movie industry in Hollywood typically employs dynamic meshes in order to animate characters in films.  Large-scale applications often require meshes with millions to billions of elements that are generated and manipulated in parallel.  The advent of supercomputers with hundreds to thousands of cores has made this possible.&lt;br /&gt;
&lt;br /&gt;
The focus of my talk will be on parallel algorithms for mesh quality improvement and mesh untangling. Such algorithms are needed, for example, when a large-scale mesh deformation is applied and tangled and/or low-quality meshes result. Prior efforts in these areas have focused on the development of parallel algorithms for mesh generation and local mesh quality improvement, in which only one vertex is moved at a time. In contrast, we are concerned with the development of parallel global algorithms for mesh quality improvement and untangling, in which all vertices are moved simultaneously. I will present our parallel log-barrier mesh quality improvement and untangling algorithms for distributed-memory machines. Our algorithms simultaneously move the mesh vertices in order to optimize a log-barrier objective function that was designed to improve the quality of the worst-quality mesh elements. We employ an edge-coloring-based algorithm for synchronizing unstructured communication among the processes executing the log-barrier mesh optimization algorithm. The main contribution of this work is a generic scheme for global mesh optimization. The algorithm shows greater strong-scaling efficiency compared to an existing parallel mesh quality improvement technique. Portions of this talk represent joint work with Shankar Prasad Sastry, University of Utah, and Stephen Vavasis, University of Waterloo.&lt;br /&gt;
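&lt;br /&gt;
''A minimal single-vertex sketch of the log-barrier idea, with a toy 2D patch and a crude grid search of our own standing in for the parallel gradient-based solver: a slack variable t is pushed up while logarithmic barrier terms keep it below every element quality, so maximizing t + mu*sum(log(q_e - t)) improves the worst element.''&lt;br /&gt;
&lt;pre&gt;
import itertools
import math

def tri_quality(a, b, c):
    # Shape quality in (0, 1]: 4*sqrt(3)*area / (sum of squared edge lengths).
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    s = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for p, q in ((a, b), (b, c), (c, a)))
    return 4.0 * math.sqrt(3.0) * area / s

def barrier_objective(qualities, t, mu=1e-3):
    # Log-barrier surrogate for: maximize t subject to q_e &gt;= t for every element e.
    if min(qualities) &lt;= t:
        return -math.inf
    return t + mu * sum(math.log(q - t) for q in qualities)

# Two triangles along the x-axis share one free vertex v.
fixed = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
def patch_qualities(v):
    return [tri_quality(fixed[0], fixed[1], v), tri_quality(fixed[1], fixed[2], v)]

# Crude grid search over the vertex position and the slack t.
candidates = itertools.product(range(0, 41), range(1, 41), range(1, 100))
best = max(((x / 20.0, y / 20.0, t / 100.0) for x, y, t in candidates),
           key=lambda p: barrier_objective(patch_qualities((p[0], p[1])), p[2]))
print("vertex:", best[:2], " worst quality:", min(patch_qualities(best[:2])))
&lt;/pre&gt;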
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_shontz.jpg|frameless|left|150px]]&lt;br /&gt;
'''Bio''': Suzanne M. Shontz is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Kansas. She is also affiliated with the Graduate Program in Bioengineering and the Information and Telecommunication Technology Center. Prior to joining the University of Kansas in 2014, Suzanne was on the faculty at Mississippi State and Pennsylvania State Universities. She was also a postdoc at the University of Minnesota and earned her Ph.D. in Applied Mathematics from Cornell University.&lt;br /&gt;
&lt;br /&gt;
Suzanne's research efforts focus centrally on parallel scientific computing, more specifically, the design and analysis of unstructured meshing, numerical optimization, model order reduction, and numerical linear algebra algorithms and their applications to medicine, imaging, electronic circuits, materials, and other areas. In 2012, she was awarded an NSF Presidential Early Career Award for Scientists and Engineers (PECASE) by President Obama for her research in computational- and data-enabled science and engineering. Suzanne also received an NSF CAREER Award in 2011 for her research on parallel dynamic meshing algorithms, theory, and software for simulation-assisted medical interventions, and a Summer Faculty Fellowship from the Office of Naval Research in 2009. She has chaired or co-chaired several top conferences in computational- and data-enabled science and engineering, including the International Meshing Roundtable in 2010 and the NSF CyberBridges Workshop in 2012-2014, and has served on numerous program committees in the field. Suzanne is also an Associate Editor for the Book Series in Medicine by De Gruyter Open. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workshops =&lt;br /&gt;
&lt;br /&gt;
== Parallel Software Runtime System Workshop ==&lt;br /&gt;
&lt;br /&gt;
''' When ''' : 24-25 May 2017&lt;br /&gt;
&lt;br /&gt;
''' Place ''' : NASA/LaRC &amp;amp; NIA&lt;br /&gt;
&lt;br /&gt;
''' Participants ''' : Pete Beckman (ANL), Halim Amer (ANL), Dana P. Hammond (NASA LaRC), Nikos Chrisochoides (ODU), Andriy Kot (NCSA,UIUC), Fotis Drakopoulos (ODU), Thomas Kennedy (ODU), Christos Tsolakis (ODU), Kevin Garner (ODU), Polykarpos Thomadakis (ODU)&lt;br /&gt;
&lt;br /&gt;
== Isotropic Advancing Front Local Reconnection Hands-On Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; ODU: Nikos Chrisochoides, Fotis Drakopoulos, Thomas Kennedy, Christos Tsolakis, Kevin Garner, Polykarpos Thomadakis &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20-21, 2015&lt;br /&gt;
&lt;br /&gt;
== HPC Middleware for Mesh Generation and High Order Geometry Approximation Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; NIA: Boris Diskin; ODU: Nikos Chrisochoides&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; ''' Dr. Navamita Ray ''' &amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Los Alamos National Laboratory, Mathematics and Computer Science Division &lt;br /&gt;
&lt;br /&gt;
:Los Alamos, New Mexico&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 25, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Towards a Scalable Framework for Geometry and Meshing in Scientific Computing &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:navamitaray@gmail.com navamitaray@gmail.com]&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
:High-fidelity computational modeling of complex, coupled physical phenomena occurring in several scientific fields requires accurate resolution of intricate geometry features, generation of good-quality unstructured meshes that minimize modeling errors, scalable interfaces to load/manipulate/traverse these meshes in memory, and support for I/O for check-pointing and in-situ visualization. While several applications tend to create custom HPC solutions to tackle the heterogeneous descriptions of physical models, such approaches lack generality, interoperability and extensibility, making it difficult to maintain scalability of the individual representations. In this talk, we introduce the component-based open-source '''SIGMA''' (Scalable Interfaces for Geometry and Mesh based Applications) toolkit, an effort to address these issues. We focus particularly on its array-based unstructured mesh representation component, the Mesh Oriented datABase ('''MOAB'''), which provides scalable interfaces to geometry, mesh and solvers to allow seamless integration into computational workflows. &lt;br /&gt;
:[[File: Navamita.jpg|frameless|left|120px]]Based on three fundamental units consisting of 1) compact array-based memory management for mesh and field data, 2) efficient mesh data structures for traversals and querying, and 3) scalable parallel communication algorithms for distributed meshes, MOAB supports various advanced algorithms such as I/O, in-memory mesh modification and refinement, multi-mesh projections, high-order boundary reconstruction, etc. We discuss some of these advanced algorithms and their applications.&lt;br /&gt;
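&lt;br /&gt;
:''A schematic sketch of the array-based storage idea in plain Python/NumPy (not MOAB's actual API): coordinates and connectivity live in contiguous arrays, and adjacency queries are answered by index lists built over them.''&lt;br /&gt;
&lt;pre&gt;
import numpy as np

# Contiguous arrays instead of per-entity objects: traversal stays
# cache-friendly and parallel exchange reduces to shipping index ranges.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # vertex x, y
conn = np.array([[0, 1, 2], [0, 2, 3]])  # triangle connectivity

def vertex_to_elements(conn, n_vertices):
    # Build the inverse adjacency (vertex to incident elements) in one pass.
    adj = [[] for _ in range(n_vertices)]
    for e, element in enumerate(conn):
        for v in element:
            adj[v].append(e)
    return adj

adj = vertex_to_elements(conn, len(coords))
print("elements incident to vertex 2:", adj[2])  # [0, 1]
print("centroid of element 1:", coords[conn[1]].mean(axis=0))
&lt;/pre&gt;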
&lt;br /&gt;
:'''Bio''': Dr. Navamita Ray is a postdoctoral appointee and part of the SIGMA team in the Mathematics and Computer Science Division at Argonne National Laboratory, Argonne, IL. She has been involved in research on flexible mesh data structures for mesh adaptivity as well as high-fidelity discrete boundary representation. Dr. Ray holds a Ph.D. in Applied Mathematics from Stony Brook University, where she did graduate work on high-order surface reconstruction and its applications to surface integrals and remeshing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; '''Dr. Xiangmin (Jim) Jiao '''&amp;lt;/u&amp;gt;&lt;br /&gt;
:Associate Professor and AMS Ph.D. Program Director, Department of Applied Mathematics and Statistics and Institute for Advanced Computational Science&lt;br /&gt;
&lt;br /&gt;
:Stony Brook University&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 3,2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Robust Adaptive High-Order Geometric and Numerical Methods Based on Weighted Least Squares &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:xiangmin.jiao@stonybrook.edu xiangmin.jiao@stonybrook.edu]&lt;br /&gt;
&lt;br /&gt;
:'''Homepage''': http://www.ams.sunysb.edu/~jiao&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
:Numerical solutions of partial differential equations (PDEs) are important for modeling and simulations in many scientific and engineering applications. Their solutions over complex geometries pose significant challenges in efficient surface and volume mesh generation and robust numerical discretizations. In this talk, we present our recent work in tackling these challenges from two aspects. First, we will present accurate and robust high-order geometric algorithms on discrete surfaces, to support high-order surface reconstruction, surface mesh generation and adaptation, and computation of differential geometric operators, without the need to access the CAD models. Second, we present some new numerical discretization techniques, including a generalized finite element method based on adaptive extended stencils, and a novel essentially nonoscillatory scheme for hyperbolic conservation laws on unstructured meshes. These new discretizations are more tolerant of mesh quality and allow accurate, stable and efficient computations even on meshes with poorly shaped elements. Based on a unified theoretical framework of weighted least squares, these techniques can significantly simplify the mesh generation process, especially on supercomputers, and also enable more efficient and robust numerical computations. We will present the theoretical foundation of our methods and demonstrate their applications for mesh generation and numerical solutions of PDEs.&lt;br /&gt;
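&lt;br /&gt;
:''A minimal sketch of the weighted-least-squares building block such methods share, with illustrative choices of our own (1D data, Gaussian weights, quadratic basis): fit a local polynomial to scattered samples and read function and derivative values off the coefficients.''&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def wls_fit_quadratic(x, y, x0, h):
    # Weighted least squares: minimize sum_i w_i (c0 + c1*d_i + c2*d_i^2 - y_i)^2,
    # with d_i = x_i - x0 and Gaussian weights of width h.
    # c[0] approximates f(x0) and c[1] approximates f'(x0).
    d = x - x0
    w = np.sqrt(np.exp(-(d / h) ** 2))
    V = np.column_stack([np.ones_like(d), d, d ** 2])  # local Vandermonde matrix
    c, *_ = np.linalg.lstsq(w[:, None] * V, w * y, rcond=None)
    return c

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2.0, 40))  # scattered, irregularly spaced samples
y = np.sin(x) + 0.01 * rng.standard_normal(40)
c = wls_fit_quadratic(x, y, x0=1.0, h=0.3)
print(c[0], c[1])  # close to sin(1) = 0.841 and cos(1) = 0.540
&lt;/pre&gt;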
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
:[[File: Collaborator Jiao.jpg|frameless|left|100px]]&lt;br /&gt;
:'''Bio''': Dr. Xiangmin (Jim) Jiao is an Associate Professor in Applied Mathematics and Computer Science, and also a core faculty member of the Institute for Advanced Computational Science at Stony Brook University. He received his Ph.D. in Computer Science in 2001 from the University of Illinois at Urbana-Champaign (UIUC). He was a Research Scientist at the Center for Simulation of Advanced Rockets (CSAR) at UIUC between 2001 and 2005, and then an Assistant Professor in the College of Computing at the Georgia Institute of Technology between 2005 and 2007. His research interests focus on high-performance geometric and numerical computing, including applied computational and differential geometry, generalized finite difference and finite element methods, multigrid and iterative methods for sparse linear systems, multiphysics coupling, and problem-solving environments, with applications in computational fluid dynamics, structural mechanics, biomedical engineering, climate modeling, etc. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CNF Imaging Workshop ==&lt;br /&gt;
&lt;br /&gt;
:'''When''': August 2019&lt;br /&gt;
&lt;br /&gt;
:'''Where''': TBD&lt;br /&gt;
&lt;br /&gt;
:'''More Information''': [[CNF_Imaging_Workshop | CNF Imaging Workshop ]]&lt;br /&gt;
&lt;br /&gt;
= Outreach =&lt;br /&gt;
&lt;br /&gt;
== Surgical Planning Lab ==&lt;br /&gt;
''' When ''' : April 8 &amp;amp; 9, 2016&lt;br /&gt;
''' Where ''' : Brigham and Women's Hospital &amp;amp; Harvard Medical School, Boston&lt;br /&gt;
&lt;br /&gt;
Posters presented at the 25th anniversary of the SPL: &lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_CBC3D.pdf Lattice-Based Multi-Tissue Mesh Generation for Biomedical Applications]&lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_NRR.pdf Deformable Registration of Pre-Op MRI with iMRI for Brain Tumor Resection: Progress Report]&lt;br /&gt;
&lt;br /&gt;
Nikos Chrisochoides, Andrey Chernikov and Christos Tsolakis : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_Telescopic.pdf Extreme Scale Mesh Generation for Big-Data Medical Images]&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4033</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4033"/>
				<updated>2019-10-08T22:58:15Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Dimitrios Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF)  High Performance Computing  (HPC)  Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]): '''100 Exploration Way Hampton, VA 23666''' on '''Thursday, October 10th, 2019''' from 9am-2:15pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next-generation HPC for processing sensor data for imaging in nuclear femtography is in its very early stages. The complexity arising from seven-dimensional data and the many scales and levels of interaction between the colliding particles and what is observed creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to put together an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]''' ) and&lt;br /&gt;
* leverage advanced software systems from Cloud-,  Edge- and Exascale-computing, with the long term aim to enable next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops brings together HPC leaders on software systems from ANL and VATech with Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials experts from NASA/LaRC to build state- and nation-wide bridges for leveraging Exascale-, Cloud- and Edge-computing for CNF activities. &lt;br /&gt;
&lt;br /&gt;
The CRTC group in Computer Science at ODU is collaborating with some of the most advanced groups worldwide in high-performance computing: (i) Argonne National Laboratory, namely its Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high performance computing with its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science Departments across Virginia’s Commonwealth, such as VATech, W&amp;amp;M and VCU. &lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top) using HPC tessellation technologies developed at CRTC for Medical Image Computing applications, and the CFD 2030 Vision (bottom). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using Paraview.]]&lt;br /&gt;
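&lt;br /&gt;
''As a simplified stand-in for such a pipeline (assuming SciPy; the CNF_I2M tool itself is not shown here), scattered phase-space samples can be tessellated with a Delaunay triangulation and the density interpolated piecewise-linearly over the resulting mesh.''&lt;br /&gt;
&lt;pre&gt;
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Stand-ins for scattered phase-space samples: points (bX, bY) with a density value.
pts = rng.uniform(-1.0, 1.0, size=(500, 2))
density = np.exp(-8.0 * (pts ** 2).sum(axis=1))  # toy probability density

tess = Delaunay(pts)                         # simplicial tessellation of the samples
field = LinearNDInterpolator(tess, density)  # piecewise-linear field on that mesh

print("triangles in the tessellation:", len(tess.simplices))
print("interpolated density near the origin:", float(field(0.05, 0.0)))
&lt;/pre&gt;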
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to Center for Nuclear Femtography  (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLab (Amber) &lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM:  Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL  (Pete)&lt;br /&gt;
* '''12:00PM: break, 15 min. (prep for lunch: $15 lunch available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be delivered to the workshop location and will be ordered from Jason’s Deli'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030  and RTS by leveraging DoE's ANL Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here : https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty in the Electrical Engineering and Computer Science Department at Northwestern University, where she was a member of the faculty for 11 years. In 2003, Valerie Taylor joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and senior associate dean of academic affairs in the College of Engineering, and was a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis. Currently, she is focused on the areas of performance analysis, power analysis and resiliency. Valerie Taylor is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. See '''[[Events#Professor_Dimitrios_S._Nikolopoulos | Abstract]]''' for more information about his talk at the workshop.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, where she gained expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards:  Theoretical and Computational Physics at DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards is deputy director of the Theory Center, working in Theoretical and Computational Physics at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research is aimed at garnering a better understanding of so-called &amp;quot;excited states.&amp;quot; These are subatomic particles that were once the familiar protons and neutrons, but now have additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of so-called hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass that will be accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, was the co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and was a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian is a Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''' He attended Yerevan State University and graduated in 1996 with a major in Physics. He obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Post-Doctoral Research Associate at Old Dominion University until 2008. He then assumed the role of Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. Gagik joined Jefferson Lab as a staff scientist in 2014 and has been working on preparing the CLAS12 data analysis packages for expedient analysis. He also mentors doctoral candidates and college students. For the past four years Gagik has worked on implementing the CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework. The CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software successfully tested for full data production. For the past year (2017-2018), Gagik has led the effort to develop physics analysis software for CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4032</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4032"/>
				<updated>2019-10-08T22:57:38Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Dimitrios Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF) High Performance Computing (HPC) Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]), '''100 Exploration Way, Hampton, VA 23666''', on '''Thursday, October 10th, 2019''', from 9:00am to 2:15pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next-generation HPC for processing sensor data for imaging in nuclear femtography is in its very early stages. The complexity of seven-dimensional data, together with the many scales and levels of interaction between the colliding particles and what is observed, creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to put together an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from the medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]'''), and&lt;br /&gt;
* leverage advanced software systems from Cloud-, Edge-, and Exascale-computing, with the long-term aim of enabling next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops brings together HPC leaders in software systems from ANL and VATech with experts in Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials from NASA/LaRC, to build state- and nation-wide bridges for leveraging Exascale, Cloud, and Edge computing for CNF activities. &lt;br /&gt;
&lt;br /&gt;
The CRTC group in Computer Science at ODU is collaborating with some of the most advanced groups worldwide in high-performance computing: (i) Argonne National Laboratory's Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high-performance computing with its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science departments across Virginia’s Commonwealth, such as VATech, W&amp;amp;M, and VCU. &lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top), using HPC tessellation technologies developed at CRTC for Medical Image Computing applications, and for the CFD 2030 Vision (bottom). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data, with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict the spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density of finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using Paraview.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to Center for Nuclear Femtography  (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLAB (Amber) &lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM:  Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL  (Pete)&lt;br /&gt;
* '''12:00PM: Break, 15 min. (prep for lunch: a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be delivered to the workshop location and will be ordered from Jason’s Deli'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030  and RTS by leveraging DoE's ANL Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty in the Electrical Engineering and Computer Science Department at Northwestern University, where she was a member of the faculty for 11 years. In 2003, Valerie Taylor joined Texas A&amp;amp;M, where she served as head of the computer science and engineering department and senior associate dean of academic affairs in the College of Engineering, and was a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis. Currently, she is focused on the areas of performance analysis, power analysis, and resiliency. Valerie Taylor is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. See '''[[Events#Professor_Dimitrios_S._Nikolopoulos | Abstract]]''' for more information on his talk at the workshop.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division, and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, where she gained expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards: Theoretical and Computational Physics, DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards is Deputy Director of the Theory Center (Theoretical and Computational Physics) at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research aims at a better understanding of so-called &amp;quot;excited states&amp;quot;: subatomic particles that were once the familiar protons and neutrons but now carry additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, was co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and was a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian is a Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''' He attended Yerevan State University and graduated in 1996 with a major in Physics. He obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Postdoctoral Research Associate at Old Dominion University until 2008 and as an Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. Gagik joined Jefferson Lab as a staff scientist in 2014 and has been working on preparing the CLAS12 data analysis packages for expedient analysis. He also mentors doctoral candidates and college students. For the past four years, Gagik has worked on implementing the CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework. The CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software successfully tested for full data production. During the past year (2017-2018), Gagik led the effort to develop physics analysis software for CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4031</id>
		<title>Events</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4031"/>
				<updated>2019-10-08T22:56:48Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Professor Dimitrios S. Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= CS Seminars =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
'''Date:''' October 10, 2019&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries &lt;br /&gt;
&lt;br /&gt;
'''Abstract:'''&lt;br /&gt;
To address the scaling limitations of future hardware, computing systems have turned to parallelism and distribution. Most software and applications in science and engineering, as well as the applications we use in our daily lives, are actually distributed programs, with some components running on edge or IoT devices to serve clients, data collectors, or actuators, and other components running in data centers to provide data analytics, simulation, or visualization. The disaggregation of computing services raises new challenges for systems software. We explore two of these challenges in this talk and discuss some solutions. The first challenge is that many applications necessitate low latency and more analytical power at or near the data sources. We demonstrate a system called TAPAS, a neural network architecture search exploration engine. TAPAS uses aggressive compression, approximation, and learning techniques to avoid the simulation process entirely when exploring neural network architectures. It further uses learning methods to adapt immediately to unseen data sets. TAPAS runs on a single low-power GPU and can train over 1,000 networks per second. This makes TAPAS suitable for training machine learning models on edge devices with limited resources. The second challenge is that of scaling the performance and energy efficiency of the hardware used in the Cloud and the Edge beyond current boundaries. We explore a co-designed compiler/OS/firmware system for characterizing hardware operating boundaries and safely operating hardware outside those boundaries to gain performance at the expense of additional, yet infrequent, errors and mitigating actions. We demonstrate that many applications are inherently resilient to extended hardware boundaries and indeed benefit substantially from boundary relaxation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|frameless|left|200px]]&lt;br /&gt;
'''Bio''': Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors. He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. See '''[[ CNF_HPC_Workshop | CNF HPC Workshop]]''' for more information about the workshop.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prof. Anastasia Angelopoulou ==&lt;br /&gt;
'''Date:''' TBD, 2020&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Serious Games and Simulations: applications, challenges and future directions &lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' Serious games and simulations have seen steadily increasing use in many sectors of society, particularly in education, defense, science, and health. Their main purpose is usually to educate or train the users. In this talk, I will present my work in the area of serious games and simulations for training. I will also discuss challenges in serious games development and future directions for overcoming them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Anastasia.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Anastasia Angelopoulou is an Assistant Professor in Simulation and Gaming at the TSYS School of Computer Science at Columbus State University (CSU). Prior to joining CSU, she was a postdoctoral associate at the Institute for Simulation and Training at University of Central Florida (2016-2018), where she obtained her MSc and PhD in Modeling and Simulation (2015). Her research interests lie in the areas of modeling and simulation and serious games and their applications in domains such as healthcare, military, energy, and education, among others. Her research work has been partially supported by the Office of Naval Research and the National Science Foundation (NSF). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dr. Daniele Panozzo ==&lt;br /&gt;
'''Date:''' TBD, 2020 &lt;br /&gt;
&lt;br /&gt;
'''Title:''' Black-Box Analysis&lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' The numerical solution of partial differential equations (PDE) is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering. Ideally, a PDE solver should be a “black box”: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. &lt;br /&gt;
&lt;br /&gt;
I will present an integrated pipeline, considering meshing and element design as a single challenge, that makes the tradeoff between mesh quality and element complexity/cost local, instead of making an a priori decision for the whole pipeline. I will demonstrate that tackling the two problems jointly offers many advantages, and that a fully black-box meshing and analysis solution is already possible for heat transfer and elasticity problems.&lt;br /&gt;
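&lt;br /&gt;
To make the &amp;quot;black box&amp;quot; idea above concrete, here is a deliberately tiny sketch in Python: the user supplies only a source term and boundary values, and the hypothetical helper solve_poisson_1d returns an evaluator that can be queried at any point of the domain. This is a toy one-dimensional illustration of the interface concept only, not the pipeline or software described in the talk.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def solve_poisson_1d(f, u_left=0.0, u_right=0.0, n=64):&lt;br /&gt;
    # Toy black-box solver for the 1D Poisson problem on [0, 1]:&lt;br /&gt;
    # mesh the domain, assemble linear finite elements, solve, and&lt;br /&gt;
    # return an evaluator usable at any point of the input domain.&lt;br /&gt;
    x = np.linspace(0.0, 1.0, n + 1)&lt;br /&gt;
    h = x[1] - x[0]&lt;br /&gt;
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h&lt;br /&gt;
    b = h * f(x[1:-1])&lt;br /&gt;
    b[0] += u_left / h&lt;br /&gt;
    b[-1] += u_right / h&lt;br /&gt;
    u = np.concatenate([[u_left], np.linalg.solve(A, b), [u_right]])&lt;br /&gt;
    return lambda xq: np.interp(xq, x, u)&lt;br /&gt;
&lt;br /&gt;
u = solve_poisson_1d(lambda x: np.ones_like(x))  # unit source, zero boundaries&lt;br /&gt;
print(u(0.5))  # exact solution is x*(1-x)/2, so about 0.125&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;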
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Daniele.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Dr. Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). Daniele earned his PhD in Computer Science from the University of Genova (2012) and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). He received the EUROGRAPHICS Young Researcher Award in 2015 and the NSF CAREER Award in 2017. Daniele is leading the development of libigl (https://github.com/libigl/libigl), an award-winning (EUROGRAPHICS Symposium of Geometry Processing Software Award, 2015) open-source geometry processing library that supports academic and industrial research and practice. Daniele is chairing the Graphics Replicability Stamp (http://www.replicabilitystamp.org), an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results. His research interests are in digital fabrication, geometry processing, architectural geometry, and discrete differential geometry.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Visitors =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
School of Electronics, Electrical Engineering and Computer Science  &lt;br /&gt;
&lt;br /&gt;
Queen's University of Belfast, UK&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov 12, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': New Approaches to Energy-Efficient and Resilient HPC  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:d.nikolopoulos@qub.ac.uk d.nikolopoulos@qub.ac.uk]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cs.qub.ac.uk/~D.Nikolopoulos/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
This talk explores new and unconventional directions towards improving the energy efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators; non-volatile main memory; workload auto-scaling; and structured approximate computing. Our research in these areas has achieved significant gains in energy efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Nikolopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios S. Nikolopoulos is Professor in the School of EEECS, at Queen's University of Belfast and a Royal Society Wolfson Research Fellow. He holds the Chair in High Performance and Distributed Computing and directs the HPDC Research Cluster, a team of 20 academic and research staff. His research explores scalable computing systems for data-driven applications and new computing paradigms at the limits of performance, power and reliability. Dimitrios received the NSF CAREER Award, the DOE CAREER Award, and the IBM Faculty Award during an eight-year tenure in the United States. He has also been awarded the SFI-DEL Investigator Award, a Marie Curie Fellowship, a HiPEAC Fellowship, and seven Best Paper Awards including some from the leading IEEE and ACM conferences in HPC, such as SC, PPoPP, and IPDPS. His research has produced over 150 top-tier outputs and has received extensive (£10.6m as PI/£39.5m as CoI) and highly competitive research funding from the NSF, DOE, EPSRC, SFI, DEL, Royal Academy of Engineering, Royal Society, European Commission and private sector. Dimitrios is a Fellow of the British Computer Society, Senior Member of the IEEE and Senior Member of the ACM. He earned a PhD (2000) in Computer Engineering and Informatics from the University of Patras. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Baruch Barry Lieber ==&lt;br /&gt;
Department of Neurosurgery  &lt;br /&gt;
&lt;br /&gt;
Stony Brook University&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov. 6, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Flow Diverters to Cure Cerebral Aneurysms, a Case Study: From Concept to Clinical Use &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:Baruch.Lieber@stonybrookmedicine.edu Baruch.Lieber@stonybrookmedicine.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://neuro.stonybrookmedicine.edu/about/faculty/lieber &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery), navigated through a catheter into the cerebral arteries, and delivered into the artery carrying the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar-response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices and prediction of their long-term curative effect, which usually occurs over a period of months, can be significantly helped by computer modeling and simulations of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: LieberB.jpg|frameless|left|125px |'''Professor Baruch Barry Lieber''' ]]&lt;br /&gt;
'''Bio:''' Barry Lieber attended Tel-Aviv University and received a B.Sc. in Mechanical Engineering in 1979. He then attended Georgia Tech and received an M.Sc. in 1982 and a Ph.D. in 1985, both in Aerospace Engineering, working with Dr. Don P. Giddens. Barry Lieber was a Postdoctoral Fellow from 1985-1987 at the Department of Mechanical Engineering at Georgia Tech and also completed a summer fellowship at Imperial College London in 1986. In 1987 Barry Lieber joined the faculty of the Department of Mechanical and Aerospace Engineering at the State University of New York at Buffalo as Assistant Professor. In 1993 he was promoted to the rank of Associate Professor with tenure, and in 1998 he was promoted to full Professor. In 1994 he became Research Professor of Neurosurgery, and in 1997 he became the Director of the Center for Bioengineering at the State University of New York at Buffalo, both positions he held until his departure from the university in 2001 to join the University of Miami as Professor in the Department of Biomedical Engineering with a joint appointment in the Department of Radiology. In 2010 he joined the State University of New York at Stony Brook at the rank of Professor in the Department of Neurosurgery, and he also serves as program faculty in the Department of Biomedical Engineering. Barry Lieber was elected as a fellow of the American Institute for Medical and Biomedical Engineering in 1999. He was elected as a fellow of the American Society of Mechanical Engineers in 2005 and served as the Chairman of the Division of Bioengineering of the American Society of Mechanical Engineers in 2009. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Marek Behr ==&lt;br /&gt;
Chair for Computational Analysis of Technical Systems&lt;br /&gt;
&lt;br /&gt;
RWTH Aachen University, Schinkelstr. 2, 52062 Aachen, Germany&lt;br /&gt;
&lt;br /&gt;
'''When''': July 31, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Enhanced Surface Definition in Moving-Boundary Flow Simulation&lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:behr@cats.rwth-aachen.de behr@cats.rwth-aachen.de]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cats.rwth-aachen.de&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.&lt;br /&gt;
&lt;br /&gt;
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.&lt;br /&gt;
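&lt;br /&gt;
For reference, a NURBS curve of degree p built from B-spline basis functions N_{i,p}, control points P_i, and weights w_i takes the standard rational form (the general textbook definition, independent of the NEFEM-specific construction):&lt;br /&gt;
&amp;lt;math&amp;gt;C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}&amp;lt;/math&amp;gt;&lt;br /&gt;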
&lt;br /&gt;
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free-surface could already be shown [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.&lt;br /&gt;
&lt;br /&gt;
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either the method of Rothe or the method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of the advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].&lt;br /&gt;
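&lt;br /&gt;
For reference, the advection-diffusion equation on which these simplex space-time meshes are tested has the standard form (with velocity a, diffusivity κ, and source f):&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\partial u}{\partial t} + \mathbf{a} \cdot \nabla u - \nabla \cdot ( \kappa \nabla u ) = f&amp;lt;/math&amp;gt;&lt;br /&gt;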
&lt;br /&gt;
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, &amp;quot;Numerical shape optimization as an approach to extrusion die design&amp;quot;, Finite Elements in Analysis and Design, 61, 35–43 (2012).&lt;br /&gt;
&lt;br /&gt;
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, &amp;quot;NURBS-Enhanced Finite Element Method (NEFEM)&amp;quot;, International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).&lt;br /&gt;
&lt;br /&gt;
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, &amp;quot;On the Usage of NURBS as Interface Representation in Free-Surface Flows&amp;quot;, International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).&lt;br /&gt;
&lt;br /&gt;
[4] M. Behr, &amp;quot;Simplex Space-Time Meshes in Finite Element Simulations&amp;quot;, International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_Marek_Behr1.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio:''' Prof. Marek Behr obtained his Bachelor's and Ph.D. degrees in Aerospace Engineering and Mechanics from the University of Minnesota in Minneapolis. After faculty appointments at the University of Minnesota and at Rice University in Houston, he was appointed in 2004 as a Professor of Mechanical Engineering and holder of the Chair for Computational Analysis of Technical Systems at RWTH Aachen University. Since 2006, he has been the Scientific Director of the Aachen Institute for Advanced Study in Computational Engineering Science, focusing on inverse problems in engineering and funded in the framework of the Excellence Initiative in Germany. Behr advises or has advised over 40 doctoral students, and has published over 65 refereed journal articles and a similar number of conference publications and book chapters. Behr is one of the main developers of the stabilized space-time finite element formulation for deforming-domain flow problems, which has recently been extended to unstructured space-time meshes. He is a long-time expert on parallel computation and large-scale flow simulations and on numerical methods for non-Newtonian fluids. He is a member of several advisory and editorial boards of international journals, and a member of the executive council of the German Association for Computational Mechanics and of the general council of the International Association for Computational Mechanics. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Christos Antonopoulos ==&lt;br /&gt;
Department of Electrical and Computer Engineering, &lt;br /&gt;
&lt;br /&gt;
University of Thessaly, Greece&lt;br /&gt;
&lt;br /&gt;
'''When''': June 25, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:cda@inf.uth.gr cda@inf.uth.gr]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.inf.uth.gr/~cda&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
A major obstacle in the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing also faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns. Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even ones just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations, or, in a more aggressive setting, even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner; otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning and re-engineering all layers of the system stack, from programming models down to hardware. We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by GR and the EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.&lt;br /&gt;
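&lt;br /&gt;
As a generic illustration of the kind of approximation discussed above, the sketch below shows loop perforation, one well-known approximate-computing technique: skipping a fraction of loop iterations trades a small, bounded loss in result quality for less work and energy. This is a minimal Python sketch of the general idea only, not the programming model of Centaurus or SCoRPiO; the function names are illustrative.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def exact_mean(xs):&lt;br /&gt;
    # Baseline: touches every element (full work, full energy cost).&lt;br /&gt;
    return sum(xs) / len(xs)&lt;br /&gt;
&lt;br /&gt;
def perforated_mean(xs, skip=4):&lt;br /&gt;
    # Loop perforation: visit only every skip-th element, trading a&lt;br /&gt;
    # bounded loss in result quality for proportionally less computation.&lt;br /&gt;
    sampled = xs[::skip]&lt;br /&gt;
    return sum(sampled) / len(sampled)&lt;br /&gt;
&lt;br /&gt;
data = [random.gauss(100.0, 15.0) for _ in range(1000000)]&lt;br /&gt;
print(exact_mean(data), perforated_mean(data))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;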
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Antonopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Christos D. Antonopoulos is an Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001) and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers, and has been awarded two best-paper awards. He has been actively involved in several research projects both in the EU and in the USA. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Yongjie Jessica Zhang ==&lt;br /&gt;
Associate Professor in Mechanical Engineering &amp;amp; Courtesy Appointment in Biomedical Engineering&lt;br /&gt;
&lt;br /&gt;
Carnegie Mellon University&lt;br /&gt;
&lt;br /&gt;
'''When''': April 24, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Image-Based Mesh Generation and Volumetric Spline Modeling for Isogeometric Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:jessicaz@andrew.cmu.edu jessicaz@andrew.cmu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.andrew.cmu.edu/~jessicaz&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
With finite element methods and scanning technologies seeing increased use in many research areas, there is an emerging need for high-fidelity geometric modeling and mesh generation of spatially realistic domains. In this talk, I will highlight our research in three areas: image-based mesh generation for complicated domains, trivariate spline modeling for isogeometric analysis, as well as biomedical, materials science, and engineering applications. I will first present advances and challenges in image-based geometric modeling and meshing, along with a comprehensive computational framework which integrates image processing, geometric modeling, mesh generation, and quality improvement with multi-scale analysis at molecular, cellular, tissue, and organ scales. Different from other existing methods, the presented framework supports five unique features: high-fidelity meshing for heterogeneous domains with topology ambiguity resolved; multiscale geometric modeling for biomolecular complexes; automatic all-hexahedral mesh generation with sharp feature preservation; robust quality improvement for non-manifold meshes; and guaranteed-quality meshing. These unique capabilities enable accurate, stable, and efficient mechanics calculations for many biomedical, materials science, and engineering applications.&lt;br /&gt;
&lt;br /&gt;
In the second part of this talk, I will show our latest research on volumetric spline parameterization, which contributes directly to the integration of design and analysis, the root idea of isogeometric analysis. For arbitrary-topology objects, we first build a polycube whose topology is equivalent to the input geometry; it serves as the parametric domain for the subsequent trivariate T-spline construction. Boolean operations and the geometry skeleton can also be used to preserve surface features. A parametric mapping is then used to build a one-to-one correspondence between the input geometry and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement, and applying templates or a truncation mechanism coupled with subdivision to handle extraordinary nodes. The parametric mapping method has been further extended to conformal solid T-spline construction, with the input surface parameterization preserved and trimming curves handled.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Jessica.jpg|frameless|left|120px]]&lt;br /&gt;
'''Bio''': Yongjie Jessica Zhang is an Associate Professor in Mechanical Engineering at Carnegie Mellon University with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, finite element methods, isogeometric analysis, and their application in computational biomedicine, materials science, and engineering. She has co-authored over 100 publications in peer-reviewed journals and conference proceedings. She is the recipient of the Presidential Early Career Award for Scientists and Engineers, the NSF CAREER Award, the Office of Naval Research Young Investigator Award, the USACM Gallagher Young Investigator Award, the Clarence H. Adamson Career Faculty Fellowship in Mechanical Engineering, the George Tallman Ladd Research Award, and the Donald L. &amp;amp; Rhonda Struminger Faculty Fellowship. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor David Marcum ==&lt;br /&gt;
Billie J. Ball Professor and Chief Scientist&lt;br /&gt;
&lt;br /&gt;
Center for Advanced Vehicular Systems, Mechanical Engineering Department, Mississippi State University&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': AFLR Unstructured Meshing Research Activities and CFD Modeling and Simulation Research at the Center for Advanced Vehicular Systems &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:marcum@cavs.msstate.edu marcum@cavs.msstate.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.me.msstate.edu/faculty/marcum/marcum.html &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Mesh generation and associated geometry preparation are critical aspects of any computational field simulation (CFS) process. In particular, the mesh used can have a significant impact on the accuracy, effectiveness, and efficiency of the CFS solver. Further, typical users spend a considerable portion of the overall effort on mesh and geometry issues. All of this is particularly critical for CFD applications. AFLR is an unstructured mesh generator designed with a focus on addressing these issues for complex geometries. It is widely used, readily available to government and academic users, and has been very successful on relevant problems. AFLR volume and surface meshing is also directly incorporated in several systems, including DoD CREATE-MG Capstone, Lockheed Martin/DoD ACAD, Boeing MADCAP, MSU SolidMesh, and Altair HyperMesh. In this talk we will provide an overview of this technology, future directions, and plans for multi-tasking/parallel operation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Marcum David.jpg|frameless|left|125px]]&lt;br /&gt;
'''Bio''': Dr. Marcum is Professor of Mechanical Engineering at Mississippi State University (MSU) and Chief Scientist for CFD within the Center for Advanced Vehicular Systems (CAVS). He has 30 years of experience in the development of CFD and unstructured grid technology. Before joining MSU in 1991, Dr. Marcum was a Scientist and Senior Engineer at McDonnell Douglas Research Laboratories and Boeing Commercial Airplane Company. He received his Ph.D. from Purdue University in 1985. Prior to that he was a Senior Engineer from 1978 through 1983 at the TRW Ross Gear Division. At MSU, Dr. Marcum served as Thrust Leader and Director of the NSF ERC for Computational Field Simulation. As Director, he led the transition of the graduated NSF ERC to its current form as the High Performance Computing Collaboratory (HPC²). Dr. Marcum also served as Deputy Director and Director of the SimCenter (an HPC² member center, currently merged within CAVS). He is currently Chief Scientist for CFD within CAVS (also an HPC² member center). As Chief Scientist for CFD, he is directly involved in the research activities of a team of multi-disciplinary researchers working on CFD-related projects for DoD, DoE, NASA, NSF, and industry. Computational tools produced by these projects at MSU within the ERC, SimCenter, and CAVS, and in particular Dr. Marcum’s AFLR unstructured mesh generator, are in use throughout aerospace, automotive, and DoD organizations. Dr. Marcum is widely recognized for his contributions to unstructured grid technology and is currently Honorary Professor at the University of Wales, Swansea, UK, and a previous Invited Professor at INRIA, Paris-Rocquencourt, France. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Kyle Gallivan ==&lt;br /&gt;
Professor Mathematics Department&lt;br /&gt;
&lt;br /&gt;
Florida State University&lt;br /&gt;
&lt;br /&gt;
'''When''': January 23, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Riemannian Optimization for Elastic Shape Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:kgallivan@fsu.edu kgallivan@fsu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.math.fsu.edu/~gallivan/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
In elastic shape analysis, a representation of a shape is invariant to translation, scaling, rotation, and reparameterization, and important problems (such as computing the distance and geodesic between two curves, the mean of a set of curves, and other statistical analyses) require finding a best rotation and reparameterization between two curves. In this talk, I focus on this key subproblem and study different tools for optimization on the joint group of rotations and reparameterizations. I will give a brief account of a novel Riemannian optimization approach and evaluate its use in computing the distance between two curves and in classification using two public data sets. Experiments show significant advantages in computational time and reliability of performance compared to the current state-of-the-art method.&lt;br /&gt;
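&lt;br /&gt;
One common way to state this subproblem (given here as general background, under the square-root velocity representation of curves from the elastic shape analysis literature, and not necessarily the exact objective of the talk): for representations q_1 and q_2 of two curves, the elastic distance is obtained by minimizing jointly over rotations O in SO(n) and reparameterizations γ:&lt;br /&gt;
&amp;lt;math&amp;gt;d(q_1, q_2) = \min_{O \in SO(n),\, \gamma \in \Gamma} \left\| q_1 - O \, (q_2 \circ \gamma) \sqrt{\dot{\gamma}} \right\|_{L^2}&amp;lt;/math&amp;gt;&lt;br /&gt;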
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_marcum.jpg|frameless|left|250px]]&lt;br /&gt;
'''Bio''': Kyle A. Gallivan is a Professor of Mathematics at Florida State University. Gallivan received the Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1983 under the direction of C. W. Gear. He worked on special-purpose signal processors in the Government Aerospace Systems Division of Harris Corporation. He was a research computer scientist at the Center for Supercomputing Research and Development at the University of Illinois from 1985 until 1993, when he moved to the Department of Electrical and Computer Engineering. From 1997 to 2008 he was a member of the Department of Computer Science at Florida State University (FSU) and a member of the Computational Science and Engineering group, becoming a full Professor in 1999. He became a Professor of Mathematics at FSU in 2008 and was selected as the 2012 Pascal Professor for the Faculty of Sciences of the University of Leiden in the Netherlands. He has been a Visiting Professor at the Catholic University of Louvain in Belgium multiple times through a long-standing research collaboration with colleagues there.&lt;br /&gt;
&lt;br /&gt;
Over the years Gallivan's research has included: design and analysis of high-performance numerical algorithms, pioneering work on block algorithms for numerical linear algebra, performance analysis of the experimental Cedar system, restructuring compilers, model reduction of large-scale differential equations, and high-performance codes for applications such as ocean circulation and circuit simulation, including the codes in the Perfect Benchmark Suite. Gallivan's current main research concerns optimization algorithms on Riemannian manifolds and their use in applications such as shape analysis, statistics, and signal/image processing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Suzanne M. Shontz ==&lt;br /&gt;
Department of Electrical Engineering and Computer Science&lt;br /&gt;
&lt;br /&gt;
University of Kansas&lt;br /&gt;
&lt;br /&gt;
'''When''': November 7, 2014, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': A parallel log barrier for mesh quality improvement and updating &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:shontz@ku.edu shontz@ku.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://people.eecs.ku.edu/~shontz/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
There are numerous applications in science, engineering, and medicine which require high-quality meshes, i.e., discretizations of the geometry, for use in computational simulations.  For example, meshes have been used to enable accurate prediction of the performance, reliability, and safety of solid propellant rockets.  The movie industry in Hollywood typically employs dynamic meshes in order to animate characters in films.  Large-scale applications often require meshes with millions to billions of elements that are generated and manipulated in parallel.  The advent of supercomputers with hundreds to thousands of cores has made this possible.&lt;br /&gt;
&lt;br /&gt;
The focus of my talk will be on parallel algorithms for mesh quality improvement and mesh untangling.  Such algorithms are needed, for example, when a large-scale mesh deformation is applied and tangled and/or low quality meshes are the result.  Prior efforts in these areas have focused on the development of parallel algorithms for mesh generation and local mesh quality improvement in which only one vertex is moved at a time.  In contrast, we are concerned with the development of parallel global algorithms for mesh quality improvement and untangling in which all vertices are moved simultaneously. I will present our parallel log-barrier mesh quality improvement and untangling algorithms for distributed-memory machines.  Our algorithms simultaneously move the mesh vertices in order to optimize a log-barrier objective function that was designed to improve the quality of the worst quality mesh elements. We employ an edge coloring-based algorithm for synchronizing unstructured communication among the processes executing the log-barrier mesh optimization algorithm.  The main contribution of this work is a generic scheme for global mesh optimization.  The algorithm shows greater strong scaling efficiency compared to an existing parallel mesh quality improvement technique. Portions of this talk represent joint work with Shankar Prasad Sastry, University of Utah, and Stephen Vavasis, University of Waterloo.&lt;br /&gt;
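As a toy illustration of the log-barrier idea, the sketch below optimizes a single free vertex serially (the talk's algorithm moves all vertices simultaneously on distributed-memory machines); the quality measure, barrier parameter, and step size are assumptions for illustration only.&lt;br /&gt;
&lt;pre&gt;
# Toy serial sketch of the log-barrier idea (one free vertex): raise a
# lower bound t on element quality while logarithmic barrier terms keep
# every element quality strictly above t.
import numpy as np

def tri_quality(a, b, c):
    # Scaled quality in (0, 1]; equals 1 exactly for an equilateral triangle.
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    lsq = np.sum((b-a)**2) + np.sum((c-b)**2) + np.sum((a-c)**2)
    return 4.0 * np.sqrt(3.0) * area / lsq

boundary = [np.array(p, float) for p in [(0, 0), (1, 0), (1, 1), (0, 1)]]
tris = [(0, 1), (1, 2), (2, 3), (3, 0)]  # triangle fan around the free vertex
v = np.array([0.85, 0.15])               # deliberately poor starting position

def qualities(v):
    return np.array([tri_quality(boundary[i], boundary[j], v) for i, j in tris])

print('worst quality before:', qualities(v).min())
mu, step, h = 1e-2, 2e-2, 1e-6
for _ in range(300):
    t = qualities(v).min() - 0.05        # keep every barrier term finite
    F = lambda u: t + mu * np.sum(np.log(qualities(u) - t))
    grad = np.array([(F(v + h*e) - F(v - h*e)) / (2*h)
                     for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))])
    v = v + step * grad                  # move the free coordinates together
print('worst quality after: ', qualities(v).min())
&lt;/pre&gt;&lt;br /&gt;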
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_shontz.jpg|frameless|left|150px]]&lt;br /&gt;
'''Bio''': Suzanne M. Shontz is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Kansas. She is also affiliated with the Graduate Program in Bioengineering and the Information and Telecommunication Technology Center.  Prior to joining the University of Kansas in 2014, Suzanne was on the faculty at Mississippi State and Pennsylvania State Universities.  She was also a postdoc at the University of Minnesota and earned her Ph.D. in Applied Mathematics from Cornell University.&lt;br /&gt;
&lt;br /&gt;
Suzanne's research centers on parallel scientific computing: more specifically, the design and analysis of unstructured mesh, numerical optimization, model order reduction, and numerical linear algebra algorithms, and their applications to medicine, imaging, electronic circuits, materials, and other areas. In 2012, she was awarded the NSF Presidential Early Career Award for Scientists and Engineers (PECASE) by President Obama for her research in computational- and data-enabled science and engineering. Suzanne also received an NSF CAREER Award in 2011 for her research on parallel dynamic meshing algorithms, theory, and software for simulation-assisted medical interventions, and a Summer Faculty Fellowship from the Office of Naval Research in 2009. She has chaired or co-chaired several top conferences in computational- and data-enabled science and engineering, including the International Meshing Roundtable in 2010 and the NSF CyberBridges Workshop in 2012-2014, and has served on numerous program committees in the field.  Suzanne is also an Associate Editor for the Book Series in Medicine by De Gruyter Open. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workshops =&lt;br /&gt;
&lt;br /&gt;
== Parallel Software Runtime System Workshop ==&lt;br /&gt;
&lt;br /&gt;
''' When ''' : 24-25 May 2017&lt;br /&gt;
&lt;br /&gt;
''' Place ''' : NASA/LaRC &amp;amp; NIA&lt;br /&gt;
&lt;br /&gt;
''' Participants ''' : Pete Beckman (ANL), Halim Amer (ANL), Dana P. Hammond (NASA LaRC), Nikos Chrisochoides (ODU), Andriy Kot (NCSA,UIUC), Fotis Drakopoulos (ODU), Thomas Kennedy (ODU), Christos Tsolakis (ODU), Kevin Garner (ODU), Polykarpos Thomadakis (ODU)&lt;br /&gt;
&lt;br /&gt;
== Isotropic Advancing Front Local Reconnection Hands-On Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; ODU: Nikos Chrisochoides, Fotis Drakopoulos, Thomas Kennedy, Christos Tsolakis, Kevin Garner, Polykarpos Thomadakis&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''' When ''': March 20-21, 2015&lt;br /&gt;
&lt;br /&gt;
== HPC Middleware for Mesh Generation and High Order Geometry Approximation Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; NIA: Boris Diskin; ODU: Nikos Chrisochoides&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; ''' Dr. Navamita Ray ''' &amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Los Alamos National Laboratory, Mathematics and Computer Science Division &lt;br /&gt;
&lt;br /&gt;
:Los Alamos, New Mexico&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 25, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Towards Scalable Framework for Geometry and Meshing in Scientific Computing &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:navamitaray@gmail.com navamitaray@gmail.com]&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
:High-fidelity computational modeling of complex, coupled physical phenomena occurring in several scientific fields requires accurate resolution of intricate geometry features, generation of good-quality unstructured meshes that minimize modeling errors, scalable interfaces to load/manipulate/traverse these meshes in memory, and support for I/O for check-pointing and in-situ visualization. While several applications tend to create custom HPC solutions to tackle the heterogeneous descriptions of physical models, such approaches lack generality, interoperability, and extensibility, making it difficult to maintain scalability of the individual representations. In this talk, we introduce the component-based open-source '''SIGMA''' (Scalable Interfaces for Geometry and Mesh based Applications) toolkit, an effort to address these issues. We focus particularly on its array-based unstructured mesh representation component, the Mesh-Oriented datABase ('''MOAB'''), which provides scalable interfaces to geometry, mesh, and solvers to allow seamless integration into computational workflows. &lt;br /&gt;
:[[File: Navamita.jpg|frameless|left|120px]]Based on three fundamental units consisting of 1) compact array-based memory management for mesh and field data, 2) efficient mesh data structures for traversal and querying, and 3) scalable parallel communication algorithms for distributed meshes, MOAB supports various advanced algorithms such as I/O, in-memory mesh modification and refinement, multi-mesh projections, high-order boundary reconstruction, etc. We discuss some of these advanced algorithms and their applications.&lt;br /&gt;
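:The sketch below is a hypothetical NumPy illustration of this array-based idea, not MOAB's actual C++/PyMOAB interface: contiguous coordinate and connectivity arrays let traversal and queries run as vectorized scans rather than pointer chasing.&lt;br /&gt;
&lt;pre&gt;
# Hypothetical illustration of array-based mesh storage (not MOAB's real
# API): coordinates and connectivity live in contiguous arrays, so
# traversal and queries become cache-friendly vectorized operations.
import numpy as np

coords = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # n_verts x 2
conn = np.array([[0, 1, 2], [0, 2, 3]])                      # n_tris  x 3

tri_xy = coords[conn]                 # gather all element vertices at once
e1 = tri_xy[:, 1] - tri_xy[:, 0]
e2 = tri_xy[:, 2] - tri_xy[:, 0]
areas = 0.5 * (e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
print(areas)                          # [0.5 0.5], no per-element loop

# Field data ('tags') attach as parallel arrays, one entry per entity.
density = np.zeros(len(conn))         # one scalar per triangle
&lt;/pre&gt;&lt;br /&gt;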
&lt;br /&gt;
:'''Bio''': Dr. Navamita Ray is a postdoctoral appointee and a member of the SIGMA team in the Mathematics and Computer Science Division at Argonne National Laboratory, Argonne, IL. She has been involved in research on flexible mesh data structures for mesh adaptivity as well as high-fidelity discrete boundary representation. Dr. Ray holds a Ph.D. in Applied Mathematics from Stony Brook University, where she did graduate work on high-order surface reconstruction and its applications to surface integrals and remeshing.&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; '''Dr. Xiangmin (Jim) Jiao '''&amp;lt;/u&amp;gt;&lt;br /&gt;
:Associate Professor and AMS Ph.D. Program Director, Department of Applied Mathematics and Statistics and Institute for Advanced Computational Science&lt;br /&gt;
&lt;br /&gt;
:Stony Brook University&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 3, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Robust Adaptive High-Order Geometric and Numerical Methods Based on Weighted Least Squares &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:xiangmin.jiao@stonybrook.edu xiangmin.jiao@stonybrook.edu]&lt;br /&gt;
&lt;br /&gt;
:'''Homepage''': http://www.ams.sunysb.edu/~jiao&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
:Numerical solutions of partial differential equations (PDEs) are important for modeling and simulation in many scientific and engineering applications. Their solution over complex geometries poses significant challenges in efficient surface and volume mesh generation and robust numerical discretizations. In this talk, we present our recent work in tackling these challenges from two aspects. First, we present accurate and robust high-order geometric algorithms on discrete surfaces, to support high-order surface reconstruction, surface mesh generation and adaptation, and computation of differential geometric operators, without the need to access the CAD models. Second, we present some new numerical discretization techniques, including a generalized finite element method based on adaptive extended stencils, and a novel essentially non-oscillatory scheme for hyperbolic conservation laws on unstructured meshes. These new discretizations are more tolerant of mesh quality and allow accurate, stable, and efficient computations even on meshes with poorly shaped elements. Based on a unified theoretical framework of weighted least squares, these techniques can significantly simplify the mesh generation process, especially on supercomputers, and also enable more efficient and robust numerical computations. We will present the theoretical foundation of our methods and demonstrate their applications to mesh generation and numerical solutions of PDEs.&lt;br /&gt;
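:A minimal sketch of the weighted-least-squares building block such methods share (the stencil, basis degree, and weights below are illustrative assumptions, not the speaker's exact formulation): fit a low-degree polynomial to scattered samples with distance-decaying weights and read derivatives off its coefficients.&lt;br /&gt;
&lt;pre&gt;
# Minimal weighted-least-squares stencil fit (quadratic basis and
# inverse-distance weights are illustrative assumptions): derivatives at
# a point are read off the coefficients of a locally fitted polynomial.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(12, 2))   # scattered stencil near origin
x, y = pts[:, 0], pts[:, 1]
vals = np.sin(x) * np.cos(y)                 # sampled function

V = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])  # quadratic basis
w = 1.0 / (1e-3 + np.hypot(x, y))            # weights favor nearby samples
c, *_ = np.linalg.lstsq(V * w[:, None], vals * w, rcond=None)

# Taylor reading of the fit at the origin (exact values: 1.0, 0.0, 0.0):
print('f_x  ~', c[1])
print('f_y  ~', c[2])
print('f_xx ~', 2.0 * c[3])
&lt;/pre&gt;&lt;br /&gt;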
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
:[[File: Collaborator Jiao.jpg|frameless|left|100px]]&lt;br /&gt;
:'''Bio''': Dr. Xiangmin (Jim) Jiao is an Associate Professor in Applied Mathematics and Computer Science, and also a core faculty member of the Institute for Advanced Computational Science at Stony Brook University. He received his Ph.D. in Computer Science in 2001 from the University of Illinois at Urbana-Champaign (UIUC). He was a Research Scientist at the Center for Simulation of Advanced Rockets (CSAR) at UIUC between 2001 and 2005, and then an Assistant Professor in the College of Computing at Georgia Institute of Technology between 2005 and 2007. His research interests focus on high-performance geometric and numerical computing, including applied computational and differential geometry, generalized finite difference and finite element methods, multigrid and iterative methods for sparse linear systems, multiphysics coupling, and problem-solving environments, with applications in computational fluid dynamics, structural mechanics, biomedical engineering, climate modeling, etc. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CNF Imaging Workshop ==&lt;br /&gt;
&lt;br /&gt;
:'''When''': August 2019&lt;br /&gt;
&lt;br /&gt;
:'''Where''': TBD&lt;br /&gt;
&lt;br /&gt;
:'''More Information''': [[CNF_Imaging_Workshop | CNF Imaging Workshop ]]&lt;br /&gt;
&lt;br /&gt;
= Outreach =&lt;br /&gt;
&lt;br /&gt;
== Surgical Planning Lab ==&lt;br /&gt;
''' When ''' : April 8 &amp;amp; 9, 2016&lt;br /&gt;
''' Where ''' : Brigham and Women's Hospital &amp;amp; Harvard Medical School, Boston&lt;br /&gt;
&lt;br /&gt;
Posters presented at the 25th anniversary of the SPL: &lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_CBC3D.pdf Lattice-Based Multi-Tissue Mesh Generation for Biomedical Applications]&lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_NRR.pdf Deformable Registration of Pre-Op MRI with iMRI for Brain Tumor Resection: Progress Report]&lt;br /&gt;
&lt;br /&gt;
Nikos Chrisochoides, Andrey Chernikov and Christos Tsolakis : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_Telescopic.pdf Extreme Scale Mesh Generation for Big-Data Medical Images]&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4030</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4030"/>
				<updated>2019-10-08T22:52:18Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF)  High Performance Computing  (HPC)  Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]): '''100 Exploration Way Hampton, VA 23666''' on '''Thursday, October 10th, 2019''' from 9am-2:15pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next-generation HPC for processing sensor data for imaging in nuclear femtography is in its very early stages. The complexity of seven-dimensional data, and of the many scales and levels of interaction between the colliding particles and what is observed, creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to assemble an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from the medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]''') and&lt;br /&gt;
* leverage advanced software systems from Cloud, Edge, and Exascale computing, with the long-term aim of enabling next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops brings together HPC leaders on software systems from ANL and VATech, and experts in Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials from NASA/LaRC, to build state- and nation-wide bridges for leveraging Exascale, Cloud, and Edge computing for CNF activities. &lt;br /&gt;
&lt;br /&gt;
The CRTC group in the Department of Computer Science at ODU is collaborating with some of the most advanced groups worldwide in high-performance computing: (i) Argonne National Laboratory, namely its Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high-performance computing through its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science departments across Virginia’s Commonwealth, such as VATech, W&amp;amp;M, and VCU. &lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top) using HPC tessellation technologies developed at CRTC for Medical Image Computing applications, and the CFD 2030 Vision (bottom). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks: bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool; their visualization was accomplished by Dr. Gavalian using Paraview.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to Center for Nuclear Femtography  (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLab (Amber)&lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM:  Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL  (Pete)&lt;br /&gt;
* '''12:00PM: Break, 15 min. (prep for lunch: a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030  and RTS by leveraging DoE's ANL Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, Valerie Taylor joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis. Currently, she is focused on performance analysis, power analysis, and resiliency. Valerie Taylor is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory at Los Alamos National Laboratory, and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Pete Beckman later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high-performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. For more information on his talk: '''[[Events#Professor_Dimitrios_S._Nikolopoulos | Abstract]]'''&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, where she gained expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards:  Theoretical and Computational Physics at DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards is Deputy Director of the Theory Center (Theoretical and Computational Physics) at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research aims at garnering a better understanding of so-called &amp;quot;excited states.&amp;quot; These are subatomic particles that were once the familiar protons and neutrons, but now have additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of so-called hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, was co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and was a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian is a Staff Scientist at Jefferson Lab and an Assistant Professor at Old Dominion University.''' He attended Yerevan State University, graduating in 1996 with a major in Physics, and obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Postdoctoral Research Associate at Old Dominion University until 2008, and as an Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. He joined Jefferson Lab as a staff scientist in 2014 and has been working on preparing the CLAS12 data analysis packages for expedient analysis; he also mentors doctoral candidates and college students. For the past four years, Gagik has worked on implementing the CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework; the CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software successfully tested for full data production. For the past year (2017-2018), Gagik has led the effort to develop physics analysis software for the CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4028</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4028"/>
				<updated>2019-10-08T22:40:40Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Dimitrios Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF)  High Performance Computing  (HPC)  Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]): '''100 Exploration Way Hampton, VA 23666''' on '''Thursday, October 10th, 2019''' from 9am-2:15pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next-generation HPC for processing sensor data for imaging in nuclear femtography is in its very early stages. The complexity of seven-dimensional data, and of the many scales and levels of interaction between the colliding particles and what is observed, creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to assemble an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from the medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]''') and&lt;br /&gt;
* leverage advanced software systems from Cloud, Edge, and Exascale computing, with the long-term aim of enabling next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops brings together HPC leaders on software systems from ANL and VATech, and experts in Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials from NASA/LaRC, to build state- and nation-wide bridges for leveraging Exascale, Cloud, and Edge computing for CNF activities. &lt;br /&gt;
&lt;br /&gt;
The CRTC group in the Department of Computer Science at ODU is collaborating with some of the most advanced groups worldwide in high-performance computing: (i) Argonne National Laboratory, namely its Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high-performance computing through its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science departments across Virginia’s Commonwealth, such as VATech, W&amp;amp;M, and VCU. &lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top) using HPC tessellation technologies developed at CRTC for Medical Image Computing applications, and the CFD 2030 Vision (bottom). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks: bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool; their visualization was accomplished by Dr. Gavalian using Paraview.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to Center for Nuclear Femtography  (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLab (Amber)&lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM:  Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL  (Pete)&lt;br /&gt;
* '''12:00PM: 15-min. break (prep for lunch: a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be delivered to the workshop location and will be ordered from Jason’s Deli'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030, and RTS by leveraging DoE's ANL-led Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, Valerie Taylor joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, and was a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis. Currently, she is focused on the areas of performance analysis, power analysis, and resiliency. Valerie Taylor is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute of Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory, and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLab/NASA/LaRC/VATech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. See more information on his talk: '''[[Events#Professor_Dimitrios_S._Nikolopoulos | Abstract]]'''&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and its IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, where she gained expertise in computational physics relevant to light sources and in large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards: Theoretical and Computational Physics, DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards works in Theoretical and Computational Physics at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research aims at a better understanding of so-called &amp;quot;excited states&amp;quot;: subatomic particles that were once the familiar protons and neutrons but now have additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of so-called hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, was co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and was a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian is a Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''' He attended Yerevan State University and graduated in 1996 with a major in Physics. He obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Post-Doctoral Research Associate at Old Dominion University until 2008. He then assumed the role of Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. Gagik joined Jefferson Lab as a staff scientist in 2014 and has been working on preparing the CLAS12 data analysis packages for expedient analysis. He also mentors doctoral candidates and college students. For the past four years Gagik has worked on implementing the CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework. The CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software tested for full data production. For the past year (2017-2018) Gagik has led the effort to develop physics analysis software for CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4027</id>
		<title>Events</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4027"/>
				<updated>2019-10-08T22:32:16Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Professor Dimitrios S. Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= CS Seminars =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
'''Date:''' October 10, 2019&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries &lt;br /&gt;
&lt;br /&gt;
'''Abstract:'''&lt;br /&gt;
To address the scaling limitations of future hardware, computing systems have turned to parallelism and distribution. Most of the software and applications in science and engineering, as well as the applications that we use in our daily lives, are actually distributed programs, with some components running on edge or IoT devices to serve clients, data collectors, or actuators, and other components running in data centers to provide data analytics, simulation, or visualization. This disaggregation of computing services raises new challenges for system software. We explore two of these challenges in this talk and discuss some solutions. The first challenge is that many applications necessitate low latency and more analytical power at or near the data sources. We demonstrate a system called TAPAS, a neural network architecture search exploration engine. TAPAS uses aggressive compression, approximation, and learning techniques to entirely avoid the simulation process when exploring neural network architectures. It further uses learning methods to adapt immediately to unseen data sets. TAPAS runs on a single low-power GPU and can train over 1,000 networks per second. This makes TAPAS suitable for training machine learning models on edge devices with limited resources. The second challenge is that of scaling the performance and energy-efficiency of the hardware used in the Cloud and the Edge beyond current boundaries. We explore a co-designed compiler/OS/firmware system for characterizing hardware operating boundaries and safely operating hardware outside those boundaries to gain performance at the expense of additional, yet infrequent, errors and mitigating actions. We demonstrate that many applications are inherently resilient to extended hardware boundaries and indeed benefit substantially from boundary relaxation.&lt;br /&gt;
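&lt;br /&gt;
The predictor-based search pattern behind such engines can be sketched in a few lines of Python. This is a toy illustration of the general idea only, not the actual TAPAS system; every name in it (sample_architecture, predict_accuracy) is a hypothetical stand-in.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def sample_architecture(rng):&lt;br /&gt;
    # Hypothetical architecture encoding: just a (depth, width) pair.&lt;br /&gt;
    return (rng.randint(2, 12), rng.choice([32, 64, 128, 256]))&lt;br /&gt;
&lt;br /&gt;
def predict_accuracy(arch, difficulty):&lt;br /&gt;
    # Stand-in for a learned performance predictor: cheap to evaluate,&lt;br /&gt;
    # so thousands of candidates can be scored without any training.&lt;br /&gt;
    depth, width = arch&lt;br /&gt;
    return 1.0 / (1.0 + difficulty / (depth * width))&lt;br /&gt;
&lt;br /&gt;
def search(num_candidates=1000, difficulty=50.0, seed=0):&lt;br /&gt;
    rng = random.Random(seed)&lt;br /&gt;
    candidates = [sample_architecture(rng) for _ in range(num_candidates)]&lt;br /&gt;
    scored = [(predict_accuracy(a, difficulty), a) for a in candidates]&lt;br /&gt;
    return max(scored)  # best predicted accuracy and its architecture&lt;br /&gt;
&lt;br /&gt;
print(search())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because each candidate costs only a predictor call rather than a training run, sweeping a thousand candidates is cheap, which is the property that makes such engines viable on a single low-power device.&lt;br /&gt;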
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|frameless|left|200px]]&lt;br /&gt;
'''Bio''': Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors. He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prof. Anastasia Angelopoulou ==&lt;br /&gt;
'''Date:''' TBD, 2020&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Serious Games and Simulations: applications, challenges and future directions &lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' Serious games and simulations have been steadily increasing their use in many sectors of society, particularly in education, defense, science, and health. Their main purpose is usually to educate or train the users. In this talk, I will present my work in the area of serious games and simulations for training. I will also discuss challenges in serious games development and future directions to overcome them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Anastasia.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Anastasia Angelopoulou is an Assistant Professor in Simulation and Gaming at the TSYS School of Computer Science at Columbus State University (CSU). Prior to joining CSU, she was a postdoctoral associate at the Institute for Simulation and Training at University of Central Florida (2016-2018), where she obtained her MSc and PhD in Modeling and Simulation (2015). Her research interests lie in the areas of modeling and simulation and serious games and their applications in domains such as healthcare, military, energy, and education, among others. Her research work has been partially supported by the Office of Naval Research and the National Science Foundation (NSF). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dr. Daniele Panozzo ==&lt;br /&gt;
'''Date:''' TBD, 2020 &lt;br /&gt;
&lt;br /&gt;
'''Title:''' Black-Box Analysis&lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' The numerical solution of partial differential equations (PDE) is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering. Ideally, a PDE solver should be a “black box”: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. &lt;br /&gt;
&lt;br /&gt;
I will present an integrated pipeline, considering meshing and element design as a single challenge, that makes the tradeoff between mesh quality and element complexity/cost local, instead of making an a priori decision for the whole pipeline. I will demonstrate that tackling the two problems jointly offers many advantages, and that a fully black-box meshing and analysis solution is already possible for heat transfer and elasticity problems.&lt;br /&gt;
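&lt;br /&gt;
A hypothetical interface of the kind the abstract calls a &amp;quot;black box&amp;quot; might look like the sketch below; the function name, signature, and file name are illustrative assumptions, not an actual library API.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical black-box PDE solving interface (illustrative only).&lt;br /&gt;
def solve(boundary, boundary_conditions, equations):&lt;br /&gt;
    # Internally meshes the domain, builds the FEM basis, and solves;&lt;br /&gt;
    # returns an evaluator u(x, y, z). Stubbed here with a constant field.&lt;br /&gt;
    def evaluator(x, y, z):&lt;br /&gt;
        return 0.0  # placeholder for the interpolated FEM solution&lt;br /&gt;
    return evaluator&lt;br /&gt;
&lt;br /&gt;
# The caller never sees the mesh or the basis construction:&lt;br /&gt;
u = solve(boundary='wing.stl',&lt;br /&gt;
          boundary_conditions={'outer': ('dirichlet', 0.0)},&lt;br /&gt;
          equations='heat')&lt;br /&gt;
print(u(0.5, 0.5, 0.5))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;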
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Daniele.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Dr. Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences in New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). Daniele earned his PhD in Computer Science from the University of Genova (2012) and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). He received the EUROGRAPHICS Young Researcher Award in 2015 and the NSF CAREER Award in 2017. Daniele is leading the development of libigl (https://github.com/libigl/libigl), an award-winning (EUROGRAPHICS Symposium of Geometry Processing Software Award, 2015) open-source geometry processing library that supports academic and industrial research and practice. Daniele is chairing the Graphics Replicability Stamp (http://www.replicabilitystamp.org), which is an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results. His research interests are in digital fabrication, geometry processing, architectural geometry, and discrete differential geometry.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Visitors =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
School of Electronics, Electrical Engineering and Computer Science  &lt;br /&gt;
&lt;br /&gt;
Queen's University of Belfast, UK&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov 12, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': New Approaches to Energy-Efficient and Resilient HPC  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:d.nikolopoulos@qub.ac.uk d.nikolopoulos@qub.ac.uk]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cs.qub.ac.uk/~D.Nikolopoulos/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
This talk explores new and unconventional directions towards improving the energy-efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators, non-volatile main memory, workload auto-scaling, and structured approximate computing. Our research in these areas has achieved significant gains in energy-efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Nikolopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios S. Nikolopoulos is Professor in the School of EEECS, at Queen's University of Belfast and a Royal Society Wolfson Research Fellow. He holds the Chair in High Performance and Distributed Computing and directs the HPDC Research Cluster, a team of 20 academic and research staff. His research explores scalable computing systems for data-driven applications and new computing paradigms at the limits of performance, power and reliability. Dimitrios received the NSF CAREER Award, the DOE CAREER Award, and the IBM Faculty Award during an eight-year tenure in the United States. He has also been awarded the SFI-DEL Investigator Award, a Marie Curie Fellowship, a HiPEAC Fellowship, and seven Best Paper Awards including some from the leading IEEE and ACM conferences in HPC, such as SC, PPoPP, and IPDPS. His research has produced over 150 top-tier outputs and has received extensive (£10.6m as PI/£39.5m as CoI) and highly competitive research funding from the NSF, DOE, EPSRC, SFI, DEL, Royal Academy of Engineering, Royal Society, European Commission and private sector. Dimitrios is a Fellow of the British Computer Society, Senior Member of the IEEE and Senior Member of the ACM. He earned a PhD (2000) in Computer Engineering and Informatics from the University of Patras. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Lieber, Baruch Barry ==&lt;br /&gt;
Department of Neurosurgery  &lt;br /&gt;
&lt;br /&gt;
Stony Brook University&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov. 6, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Flow Diverters to Cure Cerebral Aneurysms a Case Study - From Concept to Clinical  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:Baruch.Lieber@stonybrookmedicine.edu Baruch.Lieber@stonybrookmedicine.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://neuro.stonybrookmedicine.edu/about/faculty/lieber &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery), navigated through a catheter into the cerebral arteries, and delivered into the artery harboring the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar-response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices, and prediction of their long-term curative effect, which usually develops over a period of months, can be significantly helped by computer modeling and simulation of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: LieberB.jpg|frameless|left|125px |'''Professor  Lieber, Baruch Barry''' ]]&lt;br /&gt;
'''Bio:''' Barry Lieber attended Tel-Aviv University and received a B.Sc. in Mechanical Engineering in 1979. He then attended Georgia Tech and received an M.Sc. in 1982 and a Ph.D. in 1985, both in Aerospace Engineering, working with Dr. Don P. Giddens. Barry Lieber was a Postdoctoral Fellow from 1985-1987 in the Department of Mechanical Engineering at Georgia Tech and also completed a summer fellowship at Imperial College London in 1986. In 1987 Barry Lieber joined the faculty of the Department of Mechanical and Aerospace Engineering at the State University of New York at Buffalo as an Assistant Professor. In 1993 he was promoted to the rank of Associate Professor with tenure, and in 1998 he was promoted to full Professor. In 1994 he became Research Professor of Neurosurgery, and in 1997 he became the Director of the Center for Bioengineering at the State University of New York at Buffalo, both positions he held until his departure from the university in 2001 to join the University of Miami as a professor in the Department of Biomedical Engineering with a joint appointment in the Department of Radiology. In 2010 he joined the State University of New York at Stony Brook at the rank of professor in the Department of Neurosurgery, and he also serves as program faculty in the Department of Biomedical Engineering. Barry Lieber was elected a fellow of the American Institute for Medical and Biological Engineering in 1999. He was elected a fellow of the American Society of Mechanical Engineers in 2005 and served as the Chairman of the Division of Bioengineering of the American Society of Mechanical Engineers in 2009. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Marek Behr ==&lt;br /&gt;
Chair for Computational Analysis of Technical Systems&lt;br /&gt;
&lt;br /&gt;
RWTH Aachen University, Schinkelstr. 2, 52062 Aachen, Germany&lt;br /&gt;
&lt;br /&gt;
'''When''': July 31, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Enhanced Surface Definition in Moving-Boundary Flow Simulation&lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:behr@cats.rwth-aachen.de behr@cats.rwth-aachen.de]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cats.rwth-aachen.de&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.&lt;br /&gt;
&lt;br /&gt;
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.&lt;br /&gt;
&lt;br /&gt;
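For reference, the boundary curves NEFEM consumes are standard rational B-splines; a NURBS curve of degree &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; with control points &amp;lt;math&amp;gt;\mathbf{P}_i&amp;lt;/math&amp;gt; and weights &amp;lt;math&amp;gt;w_i&amp;lt;/math&amp;gt; is (a textbook definition, included for readers unfamiliar with the acronym)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, \mathbf{P}_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;N_{i,p}&amp;lt;/math&amp;gt; are the B-spline basis functions.&lt;br /&gt;
&lt;br /&gt;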
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free-surface could already be shown [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.&lt;br /&gt;
&lt;br /&gt;
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either method of Rothe or method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].&lt;br /&gt;
&lt;br /&gt;
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, &amp;quot;Numerical shape optimization as an approach to extrusion die design&amp;quot;, Finite Elements in Analysis and Design, 61, 35–43 (2012).&lt;br /&gt;
&lt;br /&gt;
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, &amp;quot;NURBS-Enhanced Finite Element Method (NEFEM)&amp;quot;, International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).&lt;br /&gt;
&lt;br /&gt;
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, &amp;quot;On the Usage of NURBS as Interface Representation in Free-Surface Flows&amp;quot;, International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).&lt;br /&gt;
&lt;br /&gt;
[4] M. Behr, &amp;quot;Simplex Space-Time Meshes in Finite Element Simulations&amp;quot;, International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_Marek_Behr1.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio:''' Prof. Marek Behr obtained his Bachelor's and Ph.D. degrees in Aerospace Engineering and Mechanics from the University of Minnesota in Minneapolis. After faculty appointments at the University of Minnesota and at Rice University in Houston, he was appointed in 2004 as a Professor of Mechanical Engineering and holder of the Chair for Computational Analysis of Technical Systems at RWTH Aachen University. Since 2006, he has been the Scientific Director of the Aachen Institute for Advanced Study in Computational Engineering Science, focusing on inverse problems in engineering and funded in the framework of the Excellence Initiative in Germany. Behr advises or has advised over 40 doctoral students, and has published over 65 refereed journal articles and a similar number of conference publications and book chapters. Behr is one of the main developers of the stabilized space-time finite element formulation for deforming-domain flow problems, which has recently been extended to unstructured space-time meshes. He is a long-time expert on parallel computation, large-scale flow simulations, and numerical methods for non-Newtonian fluids. He is a member of several advisory and editorial boards of international journals, a member of the executive council of the German Association for Computational Mechanics, and a member of the general council of the International Association for Computational Mechanics. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Christos Antonopoulos ==&lt;br /&gt;
Department of Electrical and Computer Engineering, &lt;br /&gt;
&lt;br /&gt;
University of Thessaly, Greece&lt;br /&gt;
&lt;br /&gt;
'''When''': June 25, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:cda@inf.uth.gr cda@inf.uth.gr]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.inf.uth.gr/~cda&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
A major obstacle on the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns. Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations, or, in a more aggressive setting, even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner, otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning, and re-engineering all layers of the system stack, from programming models down to hardware. We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by Greece and the EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.&lt;br /&gt;
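&lt;br /&gt;
As a toy illustration of disciplined approximation (a generic example, not drawn from the Centaurus or SCoRPiO projects), loop perforation trades result quality for work by visiting only a fraction of the iterations:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Toy loop perforation: approximate an expensive reduction by visiting&lt;br /&gt;
# only every k-th element, for roughly a k-fold reduction in work.&lt;br /&gt;
def exact_mean(xs):&lt;br /&gt;
    return sum(xs) / len(xs)&lt;br /&gt;
&lt;br /&gt;
def perforated_mean(xs, k=8):&lt;br /&gt;
    subset = xs[::k]  # skip most iterations; error stays small for&lt;br /&gt;
    return sum(subset) / len(subset)  # smooth or statistical workloads&lt;br /&gt;
&lt;br /&gt;
data = [float(i % 100) for i in range(1000000)]&lt;br /&gt;
print(exact_mean(data), perforated_mean(data))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;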
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Antonopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Christos D. Antonopoulos is Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001), and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers and has been awarded two best-paper awards. He has been actively involved in several research projects in both the EU and the USA. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Yongjie Jessica Zhang ==&lt;br /&gt;
Associate Professor in Mechanical Engineering &amp;amp; Courtesy Appointment in Biomedical Engineering&lt;br /&gt;
&lt;br /&gt;
Carnegie Mellon University&lt;br /&gt;
&lt;br /&gt;
'''When''': April 24, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Image-Based Mesh Generation and Volumetric Spline Modeling for Isogeometric Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:jessicaz@andrew.cmu.edu jessicaz@andrew.cmu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.andrew.cmu.edu/~jessicaz&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
With finite element methods and scanning technologies seeing increased use in many research areas, there is an emerging need for high-fidelity geometric modeling and mesh generation of spatially realistic domains.  In this talk, I will highlight our research in three areas: image-based mesh generation for complicated domains, trivariate spline modeling for isogeometric analysis, as well as biomedical, material sciences and engineering applications. I will first present advances and challenges in image-based geometric modeling and meshing along with a comprehensive computational framework, which integrates image processing, geometric modeling, mesh generation and quality improvement with multi-scale analysis at molecular, cellular, tissue and organ scales. Different from other existing methods, the presented framework supports five unique features: high-fidelity meshing for heterogeneous domains with topology ambiguity resolved; multiscale geometric modeling for biomolecular complexes; automatic all-hexahedral mesh generation with sharp feature preservation; robust quality improvement for non-manifold meshes; and guaranteed-quality meshing. These unique capabilities enable accurate, stable, and efficient mechanics calculation for many biomedicine, materials science and engineering applications.&lt;br /&gt;
&lt;br /&gt;
In the second part of this talk, I will show our latest research on volumetric spline parameterization, which contributes directly to the integration of design and analysis, the root idea of isogeometric analysis. For arbitrary-topology objects, we first build a polycube whose topology is equivalent to the input geometry and which serves as the parametric domain for the subsequent trivariate T-spline construction. Boolean operations and geometry skeletons can also be used to preserve surface features. A parametric mapping is then used to build a one-to-one correspondence between the input geometry and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement, and applying templates or a truncation mechanism coupled with subdivision to handle extraordinary nodes. The parametric mapping method has been further extended to conformal solid T-spline construction with the input surface parameterization preserved and trimming curves handled.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Jessica.jpg|frameless|left|120px]]&lt;br /&gt;
'''Bio''': Yongjie Jessica Zhang is an Associate Professor in Mechanical Engineering at Carnegie Mellon University with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, the finite element method, isogeometric analysis, and their application in computational biomedicine, material sciences, and engineering. She has co-authored over 100 publications in peer-reviewed journals and conference proceedings. She is the recipient of the Presidential Early Career Award for Scientists and Engineers, an NSF CAREER Award, an Office of Naval Research Young Investigator Award, the USACM Gallagher Young Investigator Award, the Clarence H. Adamson Career Faculty Fellowship in Mechanical Engineering, the George Tallman Ladd Research Award, and the Donald L. &amp;amp; Rhonda Struminger Faculty Fellowship. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor David Marcum ==&lt;br /&gt;
Billie J. Ball Professor and  Chief Scientist&lt;br /&gt;
&lt;br /&gt;
Center for Advanced Vehicular Systems, Mechanical Engineering Department, Mississippi State University&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': AFLR Unstructured Meshing Research Activities and CFD Modeling and Simulation Research at the Center for Advanced Vehicular Systems &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:marcum@cavs.msstate.edu marcum@cavs.msstate.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.me.msstate.edu/faculty/marcum/marcum.html &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Mesh generation and associated geometry preparation are critical aspects of any computational field simulation (CFS) process. In particular, the mesh used can have a significant impact on the accuracy, effectiveness, and efficiency of the CFS solver. Further, typical users spend a considerable portion of the overall effort on mesh and geometry issues. All of this is particularly critical for CFD applications. AFLR is an unstructured mesh generator designed with a focus on addressing these issues for complex geometries. It is widely used, readily available to government and academic users, and has been very successful on relevant problems. AFLR volume and surface meshing is also directly incorporated in several systems, including DoD CREATE-MG Capstone, Lockheed Martin/DoD ACAD, Boeing MADCAP, MSU SolidMesh, and Altair HyperMesh. In this talk we will provide an overview of this technology, future directions, and plans for multi-tasking/parallel operation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Marcum David.jpg|frameless|left|125px]]&lt;br /&gt;
'''Bio''': Dr. Marcum is Professor of Mechanical Engineering at Mississippi State University (MSU) and Chief Scientist for CFD within the Center for Advanced Vehicular Systems (CAVS). He has 30 years of experience in development of CFD and unstructured grid technology. Before joining MSU in 1991, Dr. Marcum was a Scientist and Senior Engineer at McDonnell Douglas Research Laboratories and Boeing Commercial Airplane Company. He received his Ph.D. from Purdue University in 1985. Prior to that he was a Senior Engineer from 1978 through 1983 at TRW Ross Gear Division. At MSU, Dr. Marcum served as Thrust Leader and Director of the NSF ERC for Computational Field Simulation. As Director, he led the transition from graduated NSF ERC to its current form as the High Performance Computing Collaboratory (HPC²). Dr. Marcum also served as Deputy Director and Director of the SimCenter (an HPC² member center and currently merged within CAVS). He is currently Chief Scientist for CFD within CAVS (also an HPC² member center). As Chief Scientist for CFD, he is directly involved in the research activities of a team of multi-disciplinary researchers working on CFD related projects for DoD, DoE, NASA, NSF, and industry. Computational tools produced by these projects at MSU within the ERC, SimCenter and CAVS, and in particular Dr. Marcum’s AFLR unstructured mesh generator, are in use throughout aerospace, automotive and DoD organizations. Dr. Marcum is widely recognized for his contributions to unstructured grid technology and is currently Honorary Professor at University of Wales, Swansea, UK and a previous Invited Professor at INRIA, Paris-Rocquencourt, France. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Kyle Gallivan ==&lt;br /&gt;
Professor, Mathematics Department&lt;br /&gt;
&lt;br /&gt;
Florida State University&lt;br /&gt;
&lt;br /&gt;
'''When''': January 23, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Riemannian Optimization for Elastic Shape Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:kgallivan@fsu.edu kgallivan@fsu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.math.fsu.edu/~gallivan/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
In elastic shape analysis, the representation of a shape is invariant to translation, scaling, rotation, and reparameterization, and important problems (such as computing the distance and geodesic between two curves, the mean of a set of curves, and other statistical analyses) require finding a best rotation and re-parameterization between two curves. In this talk, I focus on this key subproblem and study different tools for optimization on the joint group of rotations and re-parameterizations. I will give a brief account of a novel Riemannian optimization approach and evaluate its use in computing the distance between two curves and in classification using two public data sets. Experiments show significant advantages in computational time and reliability in performance compared to the current state-of-the-art method.&lt;br /&gt;
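&lt;br /&gt;
For background (standard elastic shape analysis machinery, not necessarily the exact representation used in the talk), a curve &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt; is often encoded by its square-root velocity function, and the rotation/reparameterization subproblem is the inner minimization in the elastic distance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;q(t) = \frac{\dot{\beta}(t)}{\sqrt{\|\dot{\beta}(t)\|}}, \qquad d(\beta_1, \beta_2) = \min_{O \in SO(n),\ \gamma \in \Gamma} \left\| q_1 - O\,(q_2 \circ \gamma)\sqrt{\dot{\gamma}} \right\|_{L^2},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;O&amp;lt;/math&amp;gt; ranges over rotations and &amp;lt;math&amp;gt;\gamma&amp;lt;/math&amp;gt; over reparameterizations.&lt;br /&gt;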
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_marcum.jpg|frameless|left|250px]]&lt;br /&gt;
'''Bio''': Kyle A. Gallivan is a Professor of Mathematics at Florida State University. Gallivan received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1983 under the direction of C. W. Gear. He worked on special-purpose signal processors in the Government Aerospace Systems Division of Harris Corporation. He was a research computer scientist at the Center for Supercomputing Research and Development at the University of Illinois from 1985 until 1993, when he moved to the Department of Electrical and Computer Engineering. From 1997 to 2008 he was a member of the Department of Computer Science at Florida State University (FSU) and a member of the Computational Science and Engineering group, becoming a full Professor in 1999. He became a Professor of Mathematics at FSU in 2008 and was selected as the 2012 Pascal Professor for the Faculty of Sciences of the University of Leiden in the Netherlands. He has been a Visiting Professor at the Catholic University of Louvain in Belgium multiple times through a long-standing research collaboration with colleagues there.&lt;br /&gt;
&lt;br /&gt;
Over the years, Gallivan's research has included the design and analysis of high-performance numerical algorithms, pioneering work on block algorithms for numerical linear algebra, performance analysis of the experimental Cedar system, restructuring compilers, model reduction of large-scale differential equations, and high-performance codes for applications such as ocean circulation, circuit simulation, and the codes in the Perfect Benchmark Suite. Gallivan's current main research concerns optimization algorithms on Riemannian manifolds and their use in applications such as shape analysis, statistics, and signal/image processing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Suzanne M. Shontz ==&lt;br /&gt;
Department of Electrical Engineering and Computer Science&lt;br /&gt;
&lt;br /&gt;
University of Kansas&lt;br /&gt;
&lt;br /&gt;
'''When''': November 7, 2014, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': A parallel log barrier for mesh quality improvement and updating &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:shontz@ku.edu shontz@ku.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://people.eecs.ku.edu/~shontz/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
There are numerous applications in science, engineering, and medicine which require high-quality meshes, i.e., discretizations of the geometry, for use in computational simulations.  For example, meshes have been used to enable accurate prediction of the performance, reliability, and safety of solid propellant rockets.  The movie industry in Hollywood typically employs dynamic meshes in order to animate characters in films.  Large-scale applications often require meshes with millions to billions of elements that are generated and manipulated in parallel.  The advent of supercomputers with hundreds to thousands of cores has made this possible.&lt;br /&gt;
&lt;br /&gt;
The focus of my talk will be on parallel algorithms for mesh quality improvement and mesh untangling. Such algorithms are needed, for example, when a large-scale mesh deformation is applied and tangled and/or low-quality meshes are the result. Prior efforts in these areas have focused on the development of parallel algorithms for mesh generation and local mesh quality improvement in which only one vertex is moved at a time. In contrast, we are concerned with the development of parallel global algorithms for mesh quality improvement and untangling in which all vertices are moved simultaneously. I will present our parallel log-barrier mesh quality improvement and untangling algorithms for distributed-memory machines. Our algorithms simultaneously move the mesh vertices in order to optimize a log-barrier objective function designed to improve the quality of the worst-quality mesh elements. We employ an edge-coloring-based algorithm for synchronizing unstructured communication among the processes executing the log-barrier mesh optimization algorithm. The main contribution of this work is a generic scheme for global mesh optimization. The algorithm shows greater strong-scaling efficiency compared to an existing parallel mesh quality improvement technique. Portions of this talk represent joint work with Shankar Prasad Sastry, University of Utah, and Stephen Vavasis, University of Waterloo.&lt;br /&gt;
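&lt;br /&gt;
For context, a common textbook-style way to pose worst-element quality improvement (a generic formulation; the talk's precise objective may differ) is to maximize the lowest element quality &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; over vertex positions &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\max_{\mathbf{x},\,t}\ t \quad \mathrm{s.t.}\quad q_i(\mathbf{x}) \geq t \ \ \forall i,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
handled in the interior of the feasible region with the log-barrier objective&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;f_\mu(\mathbf{x}, t) = t + \mu \sum_i \log\left(q_i(\mathbf{x}) - t\right),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
maximized for a decreasing sequence of barrier weights &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt;; the barrier term penalizes any element quality &amp;lt;math&amp;gt;q_i&amp;lt;/math&amp;gt; approaching the current worst value.&lt;br /&gt;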
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_shontz.jpg|frameless|left|150px]]&lt;br /&gt;
'''Bio''': Suzanne M. Shontz is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Kansas. She is also affiliated with the Graduate Program in Bioengineering and the Information and Telecommunication Technology Center. Prior to joining the University of Kansas in 2014, Suzanne was on the faculty at Mississippi State and Pennsylvania State Universities. She was also a postdoc at the University of Minnesota and earned her Ph.D. in Applied Mathematics from Cornell University.&lt;br /&gt;
&lt;br /&gt;
Suzanne's research efforts focus centrally on parallel scientific computing, more specifically the design and analysis of unstructured mesh, numerical optimization, model order reduction, and numerical linear algebra algorithms, and their applications to medicine, imaging, electronic circuits, materials, and other areas. In 2012, she was awarded an NSF Presidential Early Career Award for Scientists and Engineers (NSF PECASE Award) by President Obama for her research in computational- and data-enabled science and engineering. Suzanne also received an NSF CAREER Award for her research on parallel dynamic meshing algorithms, theory, and software for simulation-assisted medical interventions in 2011 and a Summer Faculty Fellowship from the Office of Naval Research in 2009. She has chaired or co-chaired several top conferences in computational- and data-enabled science and engineering, including the International Meshing Roundtable in 2010 and the NSF CyberBridges Workshop in 2012-2014, and has served on numerous program committees in the field. Suzanne is also an Associate Editor for the Book Series in Medicine by De Gruyter Open. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workshops =&lt;br /&gt;
&lt;br /&gt;
== Parallel Software Runtime System Workshop ==&lt;br /&gt;
&lt;br /&gt;
''' When ''' : 24-25 May 2017&lt;br /&gt;
&lt;br /&gt;
''' Place ''' : NASA/LaRC &amp;amp; NIA&lt;br /&gt;
&lt;br /&gt;
''' Participants ''' : Pete Beckman (ANL), Halim Amer (ANL), Dana P. Hammond (NASA LaRC), Nikos Chrisochoides (ODU), Andriy Kot (NCSA, UIUC), Fotis Drakopoulos (ODU), Thomas Kennedy (ODU), Christos Tsolakis (ODU), Kevin Garner (ODU), Polykarpos Thomadakis (ODU)&lt;br /&gt;
&lt;br /&gt;
== Isotropic Advancing Front Local Reconnection Hands-On Workshop ==&lt;br /&gt;
Attendants: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; ODU: Nikos Chrisochoides, Fotis Drakopoulos, Thomas Kennedy, Christos Tsolakis, Kevin Garner, Polykarpos Thomadakis&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''' When ''' : March 20-21, 2015&lt;br /&gt;
&lt;br /&gt;
== HPC Middleware for Mesh Generation and High Order Geometry Approximation Workshop ==&lt;br /&gt;
Attendants: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; NIA: Boris Diskin; ODU: Nikos Chrisochoides&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; ''' Dr. Navamita Ray ''' &amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Los Alamos National Laboratory, Mathematics and Computer Science Division &lt;br /&gt;
&lt;br /&gt;
:Los Alamos, New Mexico&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 25, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Towards Scalable Framework for Geometry and Meshing in Scientific Computing &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:navamitaray@gmail.com navamitaray@gmail.com]&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
:High-fidelity computational modeling of complex, coupled physical phenomena occurring in several scientific fields requires accurate resolution of intricate geometry features, generation of good-quality unstructured meshes that minimize modeling errors, scalable interfaces to load/manipulate/traverse these meshes in memory, and support for I/O for check-pointing and in-situ visualization. While several applications tend to create custom HPC solutions to tackle the heterogeneous descriptions of physical models, such approaches lack generality, interoperability, and extensibility, making it difficult to maintain scalability of the individual representations. In this talk, we introduce the component-based open-source '''SIGMA''' (Scalable Interfaces for Geometry and Mesh based Applications) toolkit, an effort to address these issues. We focus particularly on its array-based unstructured mesh representation component, the Mesh Oriented datABase ('''MOAB'''), which provides scalable interfaces to geometry, mesh, and solvers to allow seamless integration into computational workflows. &lt;br /&gt;
:[[File: Navamita.jpg|frameless|left|120px]]Built on three fundamental units, 1) compact array-based memory management for mesh and field data, 2) efficient mesh data structures for traversals and querying, and 3) scalable parallel communication algorithms for distributed meshes, MOAB supports various advanced algorithms such as I/O, in-memory mesh modification and refinement, multi-mesh projections, and high-order boundary reconstruction. We discuss some of these advanced algorithms and their applications.&lt;br /&gt;
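:To make the array-based storage idea above concrete, the following is a minimal sketch of an array-based unstructured tetrahedral mesh with a vertex-tagged field. It illustrates the general technique only; the names and layout are ours, not the actual MOAB API.&lt;br /&gt;
&lt;pre&gt;
# Minimal array-based unstructured mesh store (illustrative sketch only).
import numpy as np

# Vertex coordinates: one row per vertex (compact, cache-friendly storage).
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0]])

# Tetrahedron connectivity: one row of four vertex indices per element.
tets = np.array([[0, 1, 2, 3],
                 [1, 2, 3, 4]])

# A field (e.g., temperature) stored as a flat array "tagged" on vertices.
temperature = np.zeros(len(coords))

# Traversal/query: gather each element's vertex coordinates with one
# fancy-indexing operation instead of pointer chasing.
elem_coords = coords[tets]            # shape: (n_elems, 4, 3)
centroids = elem_coords.mean(axis=1)  # per-element centroids
print(centroids)
&lt;/pre&gt;&lt;br /&gt;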
&lt;br /&gt;
:'''Bio''': Dr. Navamita Ray is a postdoctoral appointee on the SIGMA team in the Mathematics and Computer Science Division at Argonne National Laboratory, Argonne, IL. She has been involved in research on flexible mesh data structures for mesh adaptivity as well as high-fidelity discrete boundary representation. Dr. Ray holds a Ph.D. in Applied Mathematics from Stony Brook University, where she did graduate work on high-order surface reconstruction and its applications to surface integrals and remeshing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; '''Dr. Xiangmin (Jim) Jiao '''&amp;lt;/u&amp;gt;&lt;br /&gt;
:Associate Professor and AMS Ph.D. Program Director, Department of Applied Mathematics and Statistics and Institute for Advanced Computational Science&lt;br /&gt;
&lt;br /&gt;
:Stony Brook University&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 3, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Robust Adaptive High-Order Geometric and Numerical Methods Based on Weighted Least Squares &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:xiangmin.jiao@stonybrook.edu xiangmin.jiao@stonybrook.edu]&lt;br /&gt;
&lt;br /&gt;
:'''Homepage''': http://www.ams.sunysb.edu/~jiao&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
:Numerical solutions of partial differential equations (PDEs) are important for modeling and simulation in many scientific and engineering applications. Their solution over complex geometries poses significant challenges in efficient surface and volume mesh generation and robust numerical discretizations. In this talk, we present our recent work in tackling these challenges from two aspects. First, we will present accurate and robust high-order geometric algorithms on discrete surfaces, to support high-order surface reconstruction, surface mesh generation and adaptation, and computation of differential geometric operators, without the need to access the CAD models. Second, we present some new numerical discretization techniques, including a generalized finite element method based on adaptive extended stencils, and a novel essentially nonoscillatory scheme for hyperbolic conservation laws on unstructured meshes. These new discretizations are more tolerant of mesh quality and allow accurate, stable, and efficient computations even on meshes with poorly shaped elements. Based on a unified theoretical framework of weighted least squares, these techniques can significantly simplify the mesh generation process, especially on supercomputers, and also enable more efficient and robust numerical computations. We will present the theoretical foundation of our methods and demonstrate their applications for mesh generation and numerical solutions of PDEs.&lt;br /&gt;
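:As a toy illustration of the weighted-least-squares framework mentioned above (our example, not the speaker's code): fit a local quadratic through noisy samples, down-weighting points far from the stencil center.&lt;br /&gt;
&lt;pre&gt;
# Weighted least squares: solve min_c sum_i w_i * (V_i c - y_i)^2 by scaling
# rows with sqrt(w_i) and then solving an ordinary least-squares problem.
import numpy as np

x = np.linspace(-1.0, 1.0, 9)
rng = np.random.default_rng(0)
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.01 * rng.standard_normal(x.size)

V = np.vander(x, 3, increasing=True)   # stencil Vandermonde matrix
w = np.exp(-(x / 0.5) ** 2)            # Gaussian weights centered at x = 0

sw = np.sqrt(w)
c, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
print(c)  # approximately [1, 2, 3]
&lt;/pre&gt;&lt;br /&gt;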
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
:[[File: Collaborator Jiao.jpg|frameless|left|100px]]&lt;br /&gt;
:'''Bio''': Dr. Xiangmin (Jim) Jiao is an Associate Professor in Applied Mathematics and Computer Science, and also a core faculty member of the Institute for Advanced Computational Science at Stony Brook University. He received his Ph.D. in Computer Science in 2001 from the University of Illinois at Urbana-Champaign (UIUC). He was a Research Scientist at the Center for Simulation of Advanced Rockets (CSAR) at UIUC between 2001 and 2005, and then an Assistant Professor in the College of Computing at the Georgia Institute of Technology between 2005 and 2007. His research interests focus on high-performance geometric and numerical computing, including applied computational and differential geometry, generalized finite difference and finite element methods, multigrid and iterative methods for sparse linear systems, multiphysics coupling, and problem-solving environments, with applications in computational fluid dynamics, structural mechanics, biomedical engineering, climate modeling, and beyond. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CNF Imaging Workshop ==&lt;br /&gt;
&lt;br /&gt;
:'''When''': August 2019&lt;br /&gt;
&lt;br /&gt;
:'''Where''': TBD&lt;br /&gt;
&lt;br /&gt;
:'''More Information''': [[CNF_Imaging_Workshop | CNF Imaging Workshop ]]&lt;br /&gt;
&lt;br /&gt;
= Outreach =&lt;br /&gt;
&lt;br /&gt;
== Surgical Planning Lab ==&lt;br /&gt;
''' When ''' : April 8 &amp;amp; 9, 2016&lt;br /&gt;
''' Where ''' : Brigham and Women's Hospital &amp;amp; Harvard Medical School, Boston&lt;br /&gt;
&lt;br /&gt;
Posters presented at the 25th anniversary of the SPL: &lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_CBC3D.pdf Lattice-Based Multi-Tissue Mesh Generation for Biomedical Applications]&lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_NRR.pdf Deformable Registration of Pre-Op MRI with iMRI for Brain Tumor Resection: Progress Report]&lt;br /&gt;
&lt;br /&gt;
Nikos Chrisochoides, Andrey Chernikov and Christos Tsolakis : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_Telescopic.pdf Extreme Scale Mesh Generation for Big-Data Medical Images]&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4026</id>
		<title>Events</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4026"/>
				<updated>2019-10-08T22:31:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Professor Dimitrios S. Nikolopoulos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= CS Seminars =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
'''Date:''' October 10, 2019&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries &lt;br /&gt;
&lt;br /&gt;
'''Abstract:'''&lt;br /&gt;
To address scaling limitations of future hardware, computing systems have turned to parallelism and distribution. Most of the software and applications in science and engineering, but also applications that we use in our daily lives, are actually distributed programs, with some components running on edge or IoT devices to serve clients, data collectors, or actuators, and other components running in data centers to provide data analytics, simulation, or visualization. The disaggregation of computing services raises new challenges for system software. We explore two of these challenges in this talk and discuss some solutions. The first challenge is that many applications necessitate low latency and more analytical power at or near the data sources. We demonstrate a system called TAPAS, a neural network architecture search engine. TAPAS uses aggressive compression, approximation, and learning techniques to avoid entirely the simulation process in exploring neural network architectures. It further uses learning methods to adapt immediately to unseen data sets. TAPAS runs on a single low-power GPU and can train over 1,000 networks per second. This makes TAPAS suitable for training machine learning models on edge devices with limited resources. The second challenge is that of scaling the performance and energy-efficiency of the hardware used in the Cloud and the Edge beyond current boundaries. We explore a co-designed compiler/OS/firmware system for characterizing hardware operating boundaries and safely operating hardware outside those boundaries to gain performance at the expense of additional, yet infrequent, errors and mitigating actions. We demonstrate that many applications are inherently resilient to extended hardware boundaries and indeed benefit substantially from boundary relaxation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors. He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prof. Anastasia Angelopoulou ==&lt;br /&gt;
'''Date:''' TBD, 2020&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Serious Games and Simulations: applications, challenges and future directions &lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' Serious games and simulations have been steadily increasing their&lt;br /&gt;
use in many sectors of society, particularly in education, defense, science, and health. Their main purpose is usually to educate or train the users. In this talk, I will present my work in the area of serious games and simulations for training. I will also discuss challenges in serious games development and future directions to overcome them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Anastasia.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Anastasia Angelopoulou is an Assistant Professor in Simulation and Gaming at the TSYS School of Computer Science at Columbus State University (CSU). Prior to joining CSU, she was a postdoctoral associate at the Institute for Simulation and Training at University of Central Florida (2016-2018), where she obtained her MSc and PhD in Modeling and Simulation (2015). Her research interests lie in the areas of modeling and simulation and serious games and their applications in domains such as healthcare, military, energy, and education, among others. Her research work has been partially supported by the Office of Naval Research and the National Science Foundation (NSF). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dr. Daniele Panozzo ==&lt;br /&gt;
'''Date:''' TBD, 2020 &lt;br /&gt;
&lt;br /&gt;
'''Title:''' Black-Box Analysis&lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' The numerical solution of partial differential equations (PDE) is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering. Ideally, a PDE solver should be a “black box”: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. &lt;br /&gt;
&lt;br /&gt;
I will present an integrated pipeline, considering meshing and element design as a single challenge, that makes the tradeoff between mesh quality and element complexity/cost local, instead of making an a priori decision for the whole pipeline. I will demonstrate that tackling the two problems jointly offers many advantages, and that a fully black-box meshing and analysis solution is already possible for heat transfer and elasticity problems.&lt;br /&gt;
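One way to picture the "black box" contract described above is as a single entry point that takes the domain, the boundary conditions, and the governing equation, and hands back a point-evaluable solution. The sketch below is a hypothetical interface of our own devising (not Panozzo's software), made runnable with a 1D finite-difference Poisson solve.&lt;br /&gt;
&lt;pre&gt;
# Hypothetical black-box contract: equation + boundary data in, evaluator out.
# Toy instance: -u'' = f on [0, 1] with u(0) = u(1) = 0, finite differences.
import numpy as np

def black_box_solve(f, a=0.0, b=1.0, n=200):
    """Return an evaluator u(x); meshing and basis choice stay hidden."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    # Tridiagonal finite-difference matrix for -u'' on the interior nodes.
    A = (np.diag(2.0 * np.ones(n - 2)) +
         np.diag(-np.ones(n - 3), 1) +
         np.diag(-np.ones(n - 3), -1)) / h**2
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    # The returned closure is the "evaluator" the caller interacts with.
    return lambda xq: np.interp(xq, x, u)

u = black_box_solve(lambda x: np.pi**2 * np.sin(np.pi * x))
print(u(0.5))  # close to the exact value sin(pi/2) = 1
&lt;/pre&gt;&lt;br /&gt;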
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Daniele.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Dr. Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). Daniele earned his PhD in Computer Science from the University of Genova (2012) and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). He received the EUROGRAPHICS Young Researcher Award in 2015 and the NSF CAREER Award in 2017. Daniele is leading the development of libigl (https://github.com/libigl/libigl), an award-winning (EUROGRAPHICS Symposium on Geometry Processing Software Award, 2015) open-source geometry processing library that supports academic and industrial research and practice. Daniele is chairing the Graphics Replicability Stamp (http://www.replicabilitystamp.org), which is an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results. His research interests are in digital fabrication, geometry processing, architectural geometry, and discrete differential geometry.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Visitors =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
School of Electronics, Electrical Engineering and Computer Science  &lt;br /&gt;
&lt;br /&gt;
Queen's University of Belfast, UK&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov 12, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': New Approaches to Energy-Efficient and Resilient HPC  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:d.nikolopoulos@qub.ac.uk d.nikolopoulos@qub.ac.uk]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cs.qub.ac.uk/~D.Nikolopoulos/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
This talk explores new and unconventional directions towards improving the energy-efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators; non-volatile main memory; workload auto-scaling and structured approximate computing. Our research in these areas has achieved significant gains in energy-efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Nikolopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios S. Nikolopoulos is a Professor in the School of EEECS at Queen's University Belfast and a Royal Society Wolfson Research Fellow. He holds the Chair in High Performance and Distributed Computing and directs the HPDC Research Cluster, a team of 20 academic and research staff. His research explores scalable computing systems for data-driven applications and new computing paradigms at the limits of performance, power, and reliability. Dimitrios received the NSF CAREER Award, the DOE CAREER Award, and the IBM Faculty Award during an eight-year tenure in the United States. He has also been awarded the SFI-DEL Investigator Award, a Marie Curie Fellowship, a HiPEAC Fellowship, and seven Best Paper Awards, including some from the leading IEEE and ACM conferences in HPC, such as SC, PPoPP, and IPDPS. His research has produced over 150 top-tier outputs and has received extensive (£10.6m as PI/£39.5m as CoI) and highly competitive research funding from the NSF, DOE, EPSRC, SFI, DEL, Royal Academy of Engineering, Royal Society, European Commission, and the private sector. Dimitrios is a Fellow of the British Computer Society, Senior Member of the IEEE, and Senior Member of the ACM. He earned a PhD (2000) in Computer Engineering and Informatics from the University of Patras. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Baruch (Barry) Lieber ==&lt;br /&gt;
Department of Neurosurgery  &lt;br /&gt;
&lt;br /&gt;
Stony Brook University&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov. 6, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Flow Diverters to Cure Cerebral Aneurysms: A Case Study - From Concept to Clinical Use &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:Baruch.Lieber@stonybrookmedicine.edu Baruch.Lieber@stonybrookmedicine.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://neuro.stonybrookmedicine.edu/about/faculty/lieber &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery) and navigated through a catheter into the cerebral arteries, where it is delivered into the artery carrying the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar-response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices and prediction of their long-term curative effect, which usually occurs over a period of months, can be significantly helped by computer modeling and simulations of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: LieberB.jpg|frameless|left|125px |'''Professor Baruch (Barry) Lieber''' ]]&lt;br /&gt;
'''Bio:'''  Barry Lieber attended Tel-Aviv University and received a B.Sc. in Mechanical Engineering in 1979. He then attended Georgia Tech and received an M.Sc. in 1982 and a Ph.D. in 1985, both in Aerospace Engineering, working with Dr. Don P. Giddens. Barry Lieber was a Postdoctoral Fellow from 1985-1987 in the Department of Mechanical Engineering at Georgia Tech and also completed a summer fellowship at Imperial College London in 1986. In 1987 Barry Lieber joined the faculty of the Department of Mechanical and Aerospace Engineering at the State University of New York at Buffalo as an Assistant Professor. In 1993 he was promoted to the rank of Associate Professor with tenure and in 1998 was promoted to full Professor. In 1994 he became Research Professor of Neurosurgery and in 1997 he became the Director of the Center for Bioengineering at the State University of New York at Buffalo, both positions he held until his departure from the university in 2001 to join the University of Miami as a Professor in the Department of Biomedical Engineering with a joint appointment in the Department of Radiology. In 2010 he joined the State University of New York at Stony Brook at the rank of Professor in the Department of Neurosurgery and also serves as program faculty in the Department of Biomedical Engineering. Barry Lieber was elected a fellow of the American Institute for Medical and Biological Engineering in 1999. He was elected a fellow of the American Society of Mechanical Engineers in 2005 and served as the Chairman of the Division of Bioengineering of the American Society of Mechanical Engineers in 2009. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Marek Behr ==&lt;br /&gt;
Chair for Computational Analysis of Technical Systems&lt;br /&gt;
&lt;br /&gt;
RWTH Aachen University, Schinkelstr. 2, 52062 Aachen, Germany&lt;br /&gt;
&lt;br /&gt;
'''When''': July 31, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Enhanced Surface Definition in Moving-Boundary Flow Simulation&lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:behr@cats.rwth-aachen.de behr@cats.rwth-aachen.de]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cats.rwth-aachen.de&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.&lt;br /&gt;
&lt;br /&gt;
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.&lt;br /&gt;
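A small, self-contained example of why a rational (NURBS-type) boundary representation pays off (our illustration, independent of the NEFEM implementation): a quarter circle is reproduced exactly by a rational quadratic Bezier segment, the degree-2 NURBS building block, whereas polynomial elements can only approximate it.&lt;br /&gt;
&lt;pre&gt;
# Quarter circle as a rational quadratic Bezier: control points P0, P1, P2
# with weights (1, 1/sqrt(2), 1). Every evaluated point lies on the unit
# circle (up to rounding), which no polynomial segment can achieve.
import numpy as np

P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])

def rational_bezier(t):
    """Evaluate the rational quadratic Bezier at parameter t in [0, 1]."""
    B = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    return (B * w) @ P / (B * w).sum()

for t in (0.0, 0.25, 0.5, 1.0):
    pt = rational_bezier(t)
    print(t, pt, np.hypot(pt[0], pt[1]))  # the radius stays at 1
&lt;/pre&gt;&lt;br /&gt;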
&lt;br /&gt;
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free-surface could already be shown [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.&lt;br /&gt;
&lt;br /&gt;
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either the method of Rothe or the method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of the advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].&lt;br /&gt;
&lt;br /&gt;
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, &amp;quot;Numerical shape optimization as an approach to extrusion die design&amp;quot;, Finite Elements in Analysis and Design, 61, 35–43 (2012).&lt;br /&gt;
&lt;br /&gt;
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, &amp;quot;NURBS-Enhanced Finite Element Method (NEFEM)&amp;quot;, International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).&lt;br /&gt;
&lt;br /&gt;
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, &amp;quot;On the Usage of NURBS as Interface Representation in Free-Surface Flows&amp;quot;, International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).&lt;br /&gt;
&lt;br /&gt;
[4] M. Behr, &amp;quot;Simplex Space-Time Meshes in Finite Element Simulations&amp;quot;, International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_Marek_Behr1.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio:''' Prof. Marek Behr obtained his Bachelor's and Ph.D. degrees in Aerospace Engineering and Mechanics from the University of Minnesota in Minneapolis. After faculty appointments at the University of Minnesota and at Rice University in Houston, he was appointed in 2004 as a Professor of Mechanical Engineering and holder of the Chair for Computational Analysis of Technical Systems at RWTH Aachen University. Since 2006, he has been the Scientific Director of the Aachen Institute for Advanced Study in Computational Engineering Science, focusing on inverse problems in engineering and funded in the framework of the Excellence Initiative in Germany. Behr advises or has advised over 40 doctoral students, and has published over 65 refereed journal articles and a similar number of conference publications and book chapters. Behr is one of the main developers of the stabilized space-time finite element formulation for deforming-domain flow problems, which has recently been extended to unstructured space-time meshes. He is a long-time expert on parallel computation and large-scale flow simulations and on numerical methods for non-Newtonian fluids. He is a member of several advisory and editorial boards of international journals, and a member of the executive council of the German Association for Computational Mechanics and of the general council of the International Association for Computational Mechanics. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Christos Antonopoulos ==&lt;br /&gt;
Department of Electrical and Computer Engineering, &lt;br /&gt;
&lt;br /&gt;
University of Thessaly, Greece&lt;br /&gt;
&lt;br /&gt;
'''When''': June 25, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:cda@inf.uth.gr cda@inf.uth.gr]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.inf.uth.gr/~cda&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
A major obstacle on the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing also faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns.  Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations, or, in a more aggressive setting, even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner; otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning, and re-engineering all layers of the system stack, from programming models down to hardware.  We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by Greece and the EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.&lt;br /&gt;
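A concrete flavor of the disciplined approximation discussed above is loop perforation: skip a controlled fraction of iterations and accept a small, bounded quality loss in exchange for proportionally less work. The sketch below is our own minimal illustration of the pattern, not code from Centaurus or SCoRPiO.&lt;br /&gt;
&lt;pre&gt;
# Loop perforation: process every k-th element, doing roughly 1/k of the
# work, then measure the resulting quality degradation.
def mean_exact(data):
    return sum(data) / len(data)

def mean_perforated(data, k=8):
    """Approximate the mean from a strided sample of the input."""
    sample = data[::k]
    return sum(sample) / len(sample)

data = [float(i % 97) for i in range(100000)]
exact = mean_exact(data)
approx = mean_perforated(data)
print(exact, approx, abs(exact - approx) / exact)  # small relative error
&lt;/pre&gt;&lt;br /&gt;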
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Antonopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Christos D. Antonopoulos is an Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001), and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers, and has been awarded two best-paper awards. He has been actively involved in several research projects both in the EU and in the USA. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Yongjie Jessica Zhang ==&lt;br /&gt;
Associate Professor in Mechanical Engineering &amp;amp; Courtesy Appointment in Biomedical Engineering&lt;br /&gt;
&lt;br /&gt;
Carnegie Mellon University&lt;br /&gt;
&lt;br /&gt;
'''When''': April 24, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Image-Based Mesh Generation and Volumetric Spline Modeling for Isogeometric Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:jessicaz@andrew.cmu.edu jessicaz@andrew.cmu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.andrew.cmu.edu/~jessicaz&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
With finite element methods and scanning technologies seeing increased use in many research areas, there is an emerging need for high-fidelity geometric modeling and mesh generation of spatially realistic domains.  In this talk, I will highlight our research in three areas: image-based mesh generation for complicated domains, trivariate spline modeling for isogeometric analysis, as well as biomedical, materials science, and engineering applications. I will first present advances and challenges in image-based geometric modeling and meshing, along with a comprehensive computational framework which integrates image processing, geometric modeling, mesh generation, and quality improvement with multi-scale analysis at the molecular, cellular, tissue, and organ scales. Different from other existing methods, the presented framework supports five unique features: high-fidelity meshing for heterogeneous domains with topology ambiguity resolved; multiscale geometric modeling for biomolecular complexes; automatic all-hexahedral mesh generation with sharp feature preservation; robust quality improvement for non-manifold meshes; and guaranteed-quality meshing. These unique capabilities enable accurate, stable, and efficient mechanics calculations for many biomedicine, materials science, and engineering applications.&lt;br /&gt;
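As a much-simplified flavor of the image-to-mesh step in such a pipeline (a toy of ours, not the speaker's framework), a segmented 3D image can be meshed by turning every foreground voxel into one hexahedral element with shared corner vertices.&lt;br /&gt;
&lt;pre&gt;
# Toy image-based hex meshing: one hexahedral element per foreground voxel.
import numpy as np

seg = np.zeros((4, 4, 4), dtype=bool)
seg[1:3, 1:3, 1:3] = True   # a 2x2x2 block of segmented "tissue" voxels

verts = {}   # maps an (i, j, k) grid corner to a vertex id
hexes = []   # eight vertex ids per element, in lexicographic corner order

def vid(corner):
    """Create or reuse the vertex id for a voxel corner."""
    return verts.setdefault(corner, len(verts))

for i, j, k in zip(*np.nonzero(seg)):
    corners = [(i + di, j + dj, k + dk)
               for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
    hexes.append([vid(c) for c in corners])

print(len(verts), "vertices,", len(hexes), "hex elements")  # 27 and 8
&lt;/pre&gt;&lt;br /&gt;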
&lt;br /&gt;
In the second part of this talk, I will show our latest research on volumetric spline parameterization, which contributes directly to the integration of design and analysis, the root idea of isogeometric analysis. For arbitrary-topology objects, we first build a polycube whose topology is equivalent to the input geometry; it serves as the parametric domain for the following trivariate T-spline construction. Boolean operations and the geometry skeleton can also be used to preserve surface features. A parametric mapping is then used to build a one-to-one correspondence between the input geometry and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement, and applying templates or a truncation mechanism coupled with subdivision to handle extraordinary nodes. The parametric mapping method has been further extended to conformal solid T-spline construction with the input surface parameterization preserved and trimming curves handled.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Jessica.jpg|frameless|left|120px]]&lt;br /&gt;
'''Bio''': Yongjie Jessica Zhang is an Associate Professor in Mechanical Engineering at Carnegie Mellon University with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, the finite element method, isogeometric analysis, and their application in computational biomedicine, materials science, and engineering. She has co-authored over 100 publications in peer-reviewed journals and conference proceedings. She is the recipient of the Presidential Early Career Award for Scientists and Engineers, NSF CAREER Award, Office of Naval Research Young Investigator Award, USACM Gallagher Young Investigator Award, Clarence H. Adamson Career Faculty Fellow in Mechanical Engineering, George Tallman Ladd Research Award, and Donald L. &amp;amp; Rhonda Struminger Faculty Fellow. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor David Marcum ==&lt;br /&gt;
Billie J. Ball Professor and Chief Scientist&lt;br /&gt;
&lt;br /&gt;
Center for Advanced Vehicular Systems, Mechanical Engineering Department, Mississippi State University&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': AFLR Unstructured Meshing Research Activities and CFD Modeling and Simulation Research at the Center for Advanced Vehicular Systems &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:marcum@cavs.msstate.edu marcum@cavs.msstate.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.me.msstate.edu/faculty/marcum/marcum.html &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Mesh generation and associated geometry preparation are critical aspects of any computational field simulation (CFS) process. In particular, the mesh used can have a significant impact on the accuracy, effectiveness, and efficiency of the CFS solver. Further, typical users spend a considerable portion of their overall effort on mesh and geometry issues. All of this is particularly critical for CFD applications.  AFLR is an unstructured mesh generator designed with a focus on addressing these issues for complex geometries. It is widely used, readily available to Government and Academic users, and has been very successful with relevant problems. AFLR volume and surface meshing is also directly incorporated in several systems, including: DoD CREATE-MG Capstone, Lockheed Martin/DoD ACAD, Boeing MADCAP, MSU SolidMesh, and Altair HyperMesh. In this talk we will provide an overview of this technology, future directions, and plans for multi-tasking/parallel operation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Marcum David.jpg|frameless|left|125px]]&lt;br /&gt;
'''Bio''': Dr. Marcum is Professor of Mechanical Engineering at Mississippi State University (MSU) and Chief Scientist for CFD within the Center for Advanced Vehicular Systems (CAVS). He has 30 years of experience in the development of CFD and unstructured grid technology. Before joining MSU in 1991, Dr. Marcum was a Scientist and Senior Engineer at McDonnell Douglas Research Laboratories and Boeing Commercial Airplane Company. He received his Ph.D. from Purdue University in 1985. Prior to that he was a Senior Engineer from 1978 through 1983 at TRW Ross Gear Division. At MSU, Dr. Marcum served as Thrust Leader and Director of the NSF ERC for Computational Field Simulation. As Director, he led the transition from graduated NSF ERC to its current form as the High Performance Computing Collaboratory (HPC²). Dr. Marcum also served as Deputy Director and Director of the SimCenter (an HPC² member center and currently merged within CAVS). He is currently Chief Scientist for CFD within CAVS (also an HPC² member center). As Chief Scientist for CFD, he is directly involved in the research activities of a team of multi-disciplinary researchers working on CFD-related projects for DoD, DoE, NASA, NSF, and industry. Computational tools produced by these projects at MSU within the ERC, SimCenter, and CAVS, and in particular Dr. Marcum's AFLR unstructured mesh generator, are in use throughout aerospace, automotive, and DoD organizations. Dr. Marcum is widely recognized for his contributions to unstructured grid technology and is currently an Honorary Professor at the University of Wales, Swansea, UK and a previous Invited Professor at INRIA, Paris-Rocquencourt, France. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Kyle Gallivan ==&lt;br /&gt;
Professor, Department of Mathematics&lt;br /&gt;
&lt;br /&gt;
Florida State University&lt;br /&gt;
&lt;br /&gt;
'''When''': January 23, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Riemannian Optimization for Elastic Shape Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:kgallivan@fsu.edu kgallivan@fsu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.math.fsu.edu/~gallivan/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
In elastic shape analysis, a representation of a shape is invariant to translation, scaling, rotation, and reparameterization, and important problems (such as computing the distance and geodesic between two curves, the mean of a set of curves, and other statistical analyses) require finding the best rotation and re-parameterization between two curves. In this talk, I focus on this key subproblem and study different tools for optimization on the joint group of rotations and re-parameterizations. I will give a brief account of a novel Riemannian optimization approach and evaluate its use in computing the distance between two curves and in classification using two public data sets. Experiments show significant advantages in computational time and reliability in performance compared to the current state-of-the-art method.&lt;br /&gt;
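For a fixed reparameterization, the optimal-rotation half of that subproblem has a classical closed form via the singular value decomposition (orthogonal Procrustes). The sketch below shows that closed form on two sampled curves; it is our illustration and does not use the speaker's Riemannian machinery.&lt;br /&gt;
&lt;pre&gt;
# Orthogonal Procrustes: the rotation R minimizing the Frobenius norm of
# (A R - B) for centered point sets A, B is R = U Vt, where U S Vt is the
# SVD of the matrix A.T @ B.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 2))            # samples of curve 1
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = A @ R_true                              # curve 2: a rotated copy

U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt                                  # recovered optimal rotation
print(np.allclose(R, R_true))               # True
&lt;/pre&gt;&lt;br /&gt;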
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_marcum.jpg|frameless|left|250px]]&lt;br /&gt;
'''Bio''': Kyle A. Gallivan is a Professor of Mathematics at Florida State University. Gallivan received the Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1983 under the direction of C. W. Gear. He worked on special purpose signal processors in the Government Aerospace Systems Division of Harris Corporation.  He was a research computer scientist at the Center for Supercomputing Research and Development at the University of Illinois from 1985 until 1993, when he moved to the Department of Electrical and Computer Engineering. From 1997 to 2008 he was a member of the Department of Computer Science at Florida State University (FSU) and a member of the Computational Science and Engineering group, becoming a full Professor in 1999. He became a Professor of Mathematics at FSU in 2008 and was selected as the 2012 Pascal Professor for the Faculty of Sciences of the University of Leiden in the Netherlands. He has been a Visiting Professor at the Catholic University of Louvain in Belgium multiple times through a long-standing research collaboration with colleagues there.&lt;br /&gt;
&lt;br /&gt;
Over the years Gallivan's research has included: design and analysis of high-performance numerical algorithms, pioneering work on block algorithms for numerical linear algebra, performance analysis of the experimental Cedar system, restructuring compilers, model reduction of large-scale differential equations, and high-performance codes for applications such as ocean circulation, circuit simulation, and the codes in the Perfect Benchmark Suite. Gallivan's current main research concerns optimization algorithms on Riemannian manifolds and their use in applications such as shape analysis, statistics, and signal/image processing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Suzanne M. Shontz ==&lt;br /&gt;
Department of Electrical Engineering and Computer Science&lt;br /&gt;
&lt;br /&gt;
University of Kansas&lt;br /&gt;
&lt;br /&gt;
'''When''': November 7, 2014, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': A parallel log barrier for mesh quality improvement and updating &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:shontz@ku.edu shontz@ku.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://people.eecs.ku.edu/~shontz/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
There are numerous applications in science, engineering, and medicine which require high-quality meshes, i.e., discretizations of the geometry, for use in computational simulations.  For example, meshes have been used to enable accurate prediction of the performance, reliability, and safety of solid propellant rockets.  The movie industry in Hollywood typically employs dynamic meshes in order to animate characters in films.  Large-scale applications often require meshes with millions to billions of elements that are generated and manipulated in parallel.  The advent of supercomputers with hundreds to thousands of cores has made this possible.&lt;br /&gt;
&lt;br /&gt;
The focus of my talk will be on parallel algorithms for mesh quality improvement and mesh untangling.  Such algorithms are needed, for example, when a large-scale mesh deformation is applied and tangled and/or low-quality meshes result.  Prior efforts in these areas have focused on the development of parallel algorithms for mesh generation and local mesh quality improvement in which only one vertex is moved at a time.  In contrast, we are concerned with the development of parallel global algorithms for mesh quality improvement and untangling in which all vertices are moved simultaneously. I will present our parallel log-barrier mesh quality improvement and untangling algorithms for distributed-memory machines.  Our algorithms simultaneously move the mesh vertices in order to optimize a log-barrier objective function that was designed to improve the quality of the worst-quality mesh elements. We employ an edge-coloring-based algorithm for synchronizing unstructured communication among the processes executing the log-barrier mesh optimization algorithm.  The main contribution of this work is a generic scheme for global mesh optimization.  The algorithm shows greater strong scaling efficiency compared to an existing parallel mesh quality improvement technique. Portions of this talk represent joint work with Shankar Prasad Sastry, University of Utah, and Stephen Vavasis, University of Waterloo.&lt;br /&gt;
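To give a feel for the log-barrier formulation described above (our toy, one triangle and one design variable, not the parallel algorithm itself): raise an auxiliary level t kept below every element quality by logarithmic barrier terms, so that improving the objective pushes up the worst element.&lt;br /&gt;
&lt;pre&gt;
# Worst-element log-barrier sketch: quality q of a single triangle as its
# apex moves, and the barrier objective  -t - mu * log(q - t)  a solver
# would drive downward (t is held at 0 here for illustration).
import numpy as np

def tri_quality(p, a=(0.0, 0.0), b=(1.0, 0.0)):
    """Area over sum of squared edge lengths, scaled to 1 for equilateral."""
    a, b, p = map(np.asarray, (a, b, p))
    u, v = b - a, p - a
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    edges = sum(np.dot(e, e) for e in (b - a, p - b, a - p))
    return 4.0 * np.sqrt(3.0) * area / edges

mu, t = 0.01, 0.0
for y in (0.1, 0.3, np.sqrt(3.0) / 2.0):
    q = tri_quality((0.5, y))
    print(y, q, -t - mu * np.log(q - t))  # objective falls as quality rises
&lt;/pre&gt;&lt;br /&gt;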
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_shontz.jpg|frameless|left|150px]]&lt;br /&gt;
'''Bio''': Suzanne M. Shontz is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Kansas. She is also affiliated with the Graduate Program in Bioengineering and the Information and Telecommunication Technology Center.  Prior to joining the University of Kansas in 2014, Suzanne was on the faculty at Mississippi State and Pennsylvania State Universities.  She was also a postdoc at the University of Minnesota and earned her Ph.D. in Applied Mathematics from Cornell University.&lt;br /&gt;
&lt;br /&gt;
Suzanne's research efforts focus on parallel scientific computing, more specifically the design and analysis of unstructured meshing, numerical optimization, model order reduction, and numerical linear algebra algorithms, with applications in medicine, imaging, electronic circuits, and materials, among other areas.  In 2012, she was awarded the NSF Presidential Early Career Award for Scientists and Engineers (PECASE) by President Obama for her research in computational- and data-enabled science and engineering.  Suzanne also received an NSF CAREER Award in 2011 for her research on parallel dynamic meshing algorithms, theory, and software for simulation-assisted medical interventions, and a Summer Faculty Fellowship from the Office of Naval Research in 2009. She has chaired or co-chaired several top conferences in computational- and data-enabled science and engineering, including the International Meshing Roundtable in 2010 and the NSF CyberBridges Workshop in 2012-2014, and has served on numerous program committees in the field.  Suzanne is also an Associate Editor for the Book Series in Medicine by De Gruyter Open. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workshops =&lt;br /&gt;
&lt;br /&gt;
== Parallel Software Runtime System Workshop ==&lt;br /&gt;
&lt;br /&gt;
''' When ''' : 24-25 May 2017&lt;br /&gt;
&lt;br /&gt;
''' Place ''' : NASA/LaRC &amp;amp; NIA&lt;br /&gt;
&lt;br /&gt;
''' Participants ''' : Pete Beckman (ANL), Halim Amer (ANL), Dana P. Hammond (NASA LaRC), Nikos Chrisochoides (ODU), Andriy Kot (NCSA,UIUC), Fotis Drakopoulos (ODU), Thomas Kennedy (ODU), Christos Tsolakis (ODU), Kevin Garner (ODU), Polykarpos Thomadakis (ODU)&lt;br /&gt;
&lt;br /&gt;
== Isotropic Advancing Front Local Reconnection Hands-On Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr Bill Jones, Dr Mike Mark, Dr Dana Hammond; ODU: Nikos Chrisochoides, Fotis Drakopoulos, Thomas Kennedy, Christos Tsolakis, Kevin Garner, Polykarpos Thomadakis&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When: March 20-21, 2015&lt;br /&gt;
&lt;br /&gt;
== HPC Middleware for Mesh Generation and High Order Geometry Approximation Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr Bill Jones, Dr Mike Mark, Dr Dana Hammond; NIA: Boris Diskin; ODU: Nikos Chrisochoides&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; ''' Dr. Navamita Ray ''' &amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Los Alamos National Laboratory, Mathematics and Computer Science Division &lt;br /&gt;
&lt;br /&gt;
:Los Alamos, New Mexico&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 25, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Towards Scalable Framework for Geometry and Meshing in Scientific Computing &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:navamitaray@gmail.com navamitaray@gmail.com]&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
:High-fidelity computational modeling of complex, coupled physical phenomena occurring in several scientific fields requires accurate resolution of intricate geometry features, generation of good-quality unstructured meshes that minimize modeling errors, scalable interfaces to load/manipulate/traverse these meshes in memory, and support for I/O for check-pointing and in-situ visualization. While several applications tend to create custom HPC solutions to tackle the heterogeneous descriptions of physical models, such approaches lack generality, interoperability, and extensibility, making it difficult to maintain scalability of the individual representations. In this talk, we introduce the component-based open-source '''SIGMA''' (Scalable Interfaces for Geometry and Mesh based Applications) toolkit, an effort to address these issues. We focus particularly on its array-based unstructured mesh representation component, the Mesh Oriented datABase ('''MOAB'''), which provides scalable interfaces to geometry, meshes, and solvers to allow seamless integration into computational workflows. &lt;br /&gt;
:[[File: Navamita.jpg|frameless|left|120px]]Built on three fundamental units, 1) compact array-based memory management for mesh and field data, 2) efficient mesh data structures for traversals and querying, and 3) scalable parallel communication algorithms for distributed meshes, MOAB supports various advanced algorithms such as I/O, in-memory mesh modification and refinement, multi-mesh projections, and high-order boundary reconstruction. We discuss some of these advanced algorithms and their applications.&lt;br /&gt;
&lt;br /&gt;
:'''Bio''': Dr. Navamita Ray is a postdoctoral appointee on the SIGMA team in the Mathematics and Computer Science Division at Argonne National Laboratory, Argonne, IL. She has been involved in research on flexible mesh data structures for mesh adaptivity as well as high-fidelity discrete boundary representation. Dr. Ray holds a Ph.D. in Applied Mathematics from Stony Brook University, where she did graduate work on high-order surface reconstruction and its applications to surface integrals and remeshing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; '''Dr. Xiangmin (Jim) Jiao '''&amp;lt;/u&amp;gt;&lt;br /&gt;
:Associate Professor and AMS Ph.D. Program Director, Department of Applied Mathematics and Statistics and Institute for Advanced Computational Science&lt;br /&gt;
&lt;br /&gt;
:Stony Brook University&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 3, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Robust Adaptive High-Order Geometric and Numerical Methods Based on Weighted Least Squares &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:xiangmin.jiao@stonybrook.edu xiangmin.jiao@stonybrook.edu]&lt;br /&gt;
&lt;br /&gt;
:'''Homepage''': http://www.ams.sunysb.edu/~jiao&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
:Numerical solutions of partial differential equations (PDEs) are important for modeling and simulation in many scientific and engineering applications. Their solution over complex geometries poses significant challenges in efficient surface and volume mesh generation and robust numerical discretizations. In this talk, we present our recent work in tackling these challenges from two aspects. First, we will present accurate and robust high-order geometric algorithms on discrete surfaces, to support high-order surface reconstruction, surface mesh generation and adaptation, and computation of differential geometric operators, without the need to access the CAD models. Second, we present some new numerical discretization techniques, including a generalized finite element method based on adaptive extended stencils, and a novel essentially nonoscillatory scheme for hyperbolic conservation laws on unstructured meshes. These new discretizations are more tolerant of mesh quality and allow accurate, stable, and efficient computations even on meshes with poorly shaped elements. Based on a unified theoretical framework of weighted least squares, these techniques can significantly simplify the mesh generation process, especially on supercomputers, and also enable more efficient and robust numerical computations. We will present the theoretical foundation of our methods and demonstrate their applications for mesh generation and numerical solutions of PDEs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
:[[File: Collaborator Jiao.jpg|frameless|left|100px]]&lt;br /&gt;
:'''Bio''': Dr. Xiangmin (Jim) Jiao is an Associate Professor in Applied Mathematics and Computer Science, and also a core faculty member of the Institute for Advanced Computational Science at Stony Brook University. He received his Ph.D. in Computer Science in 2001 from the University of Illinois at Urbana-Champaign (UIUC). He was a Research Scientist at the Center for Simulation of Advanced Rockets (CSAR) at UIUC between 2001 and 2005, and then an Assistant Professor in the College of Computing at the Georgia Institute of Technology between 2005 and 2007. His research interests focus on high-performance geometric and numerical computing, including applied computational and differential geometry, generalized finite difference and finite element methods, multigrid and iterative methods for sparse linear systems, multiphysics coupling, and problem-solving environments, with applications in computational fluid dynamics, structural mechanics, biomedical engineering, climate modeling, and beyond. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CNF Imaging Workshop ==&lt;br /&gt;
&lt;br /&gt;
:'''When''': August 2019&lt;br /&gt;
&lt;br /&gt;
:'''Where''': TBD&lt;br /&gt;
&lt;br /&gt;
:'''More Information''': [[CNF_Imaging_Workshop | CNF Imaging Workshop ]]&lt;br /&gt;
&lt;br /&gt;
= Outreach =&lt;br /&gt;
&lt;br /&gt;
== Surgical Planning Lab ==&lt;br /&gt;
''' When ''' : April 8 &amp;amp; 9, 2016&lt;br /&gt;
''' Where ''' : Brigham and Women's Hospital &amp;amp; Harvard Medical School, Boston&lt;br /&gt;
&lt;br /&gt;
Posters presented at the 25th anniversary of the SPL: &lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_CBC3D.pdf Lattice-Based Multi-Tissue Mesh Generation for Biomedical Applications]&lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_NRR.pdf Deformable Registration of Pre-Op MRI with iMRI for Brain Tumor Resection: Progress Report]&lt;br /&gt;
&lt;br /&gt;
Nikos Chrisochoides, Andrey Chernikov and Christos Tsolakis : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_Telescopic.pdf Extreme Scale Mesh Generation for Big-Data Medical Images]&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4025</id>
		<title>Events</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Events&amp;diff=4025"/>
				<updated>2019-10-08T22:30:29Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* CS Seminars */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= CS Seminars =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
'''Date:''' October 10, 2019&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
To address scaling limitations of future hardware, computing systems have turned to parallelism and distribution. Most of the software and applications in science and engineering, but also applications that we use in our daily lives, are actually distributed programs, with some components running on edge or IoT devices to serve clients, data collectors, or actuators, and other components running in data centers to provide data analytics, simulation, or visualization. The disaggregation of computing services raises new challenges for system software. We explore two of these challenges in this talk and discuss some solutions. The first challenge is that many applications necessitate low latency and more analytical power at or near the data sources. We demonstrate a system called TAPAS, a neural network architecture search engine. TAPAS uses aggressive compression, approximation, and learning techniques to avoid entirely the simulation process in exploring neural network architectures. It further uses learning methods to adapt immediately to unseen data sets. TAPAS runs on a single low-power GPU and can train over 1,000 networks per second. This makes TAPAS suitable for training machine learning models on edge devices with limited resources. The second challenge is that of scaling the performance and energy-efficiency of the hardware used in the Cloud and the Edge beyond current boundaries. We explore a co-designed compiler/OS/firmware system for characterizing hardware operating boundaries and safely operating hardware outside those boundaries to gain performance at the expense of additional, yet infrequent, errors and mitigating actions. We demonstrate that many applications are inherently resilient to extended hardware boundaries and indeed benefit substantially from boundary relaxation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors. He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prof. Anastasia Angelopoulou ==&lt;br /&gt;
'''Date:''' TBD, 2020&lt;br /&gt;
&lt;br /&gt;
'''Title:''' Serious Games and Simulations: applications, challenges and future directions &lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' Serious games and simulations have been steadily increasing their&lt;br /&gt;
use in many sectors of society, particularly in education, defense, science and health. Their main purpose is usually to educate or train the users. In this talk, I will present my work in the area of serious games and simulations for training. I will also discuss challenges in serious games development and future directions to overcome them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Anastasia.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Anastasia Angelopoulou is an Assistant Professor in Simulation and Gaming at the TSYS School of Computer Science at Columbus State University (CSU). Prior to joining CSU, she was a postdoctoral associate at the Institute for Simulation and Training at the University of Central Florida (2016-2018), where she obtained her MSc and PhD in Modeling and Simulation (2015). Her research interests lie in the areas of modeling and simulation and serious games and their applications in domains such as healthcare, military, energy, and education, among others. Her research work has been partially supported by the Office of Naval Research and the National Science Foundation (NSF). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dr. Daniele Panozzo ==&lt;br /&gt;
'''Date:''' TBD, 2020 &lt;br /&gt;
&lt;br /&gt;
'''Title:''' Black-Box Analysis&lt;br /&gt;
&lt;br /&gt;
'''Abstract:''' The numerical solution of partial differential equations (PDE) is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering. Ideally, a PDE solver should be a “black box”: the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest. To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. &lt;br /&gt;
&lt;br /&gt;
I will present an integrated pipeline, considering meshing and element design as a single challenge, that makes the tradeoff between mesh quality and element complexity/cost local, instead of making an a priori decision for the whole pipeline. I will demonstrate that tackling the two problems jointly offers many advantages, and that a fully black-box meshing and analysis solution is already possible for heat transfer and elasticity problems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Daniele.jpg|frameless|left|150px]]&lt;br /&gt;
'''Short Bio:''' Dr. Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). Daniele earned his PhD in Computer Science from the University of Genova (2012) and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). He received the EUROGRAPHICS Young Researcher Award in 2015 and the NSF CAREER Award in 2017. Daniele is leading the development of libigl (https://github.com/libigl/libigl), an award-winning (EUROGRAPHICS Symposium of Geometry Processing Software Award, 2015) open-source geometry processing library that supports academic and industrial research and practice. Daniele is chairing the Graphics Replicability Stamp (http://www.replicabilitystamp.org), which is an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results. His research interests are in digital fabrication, geometry processing, architectural geometry, and discrete differential geometry.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Visitors =&lt;br /&gt;
== Professor Dimitrios S. Nikolopoulos ==&lt;br /&gt;
School of Electronics, Electrical Engineering and Computer Science  &lt;br /&gt;
&lt;br /&gt;
Queen's University of Belfast, UK&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov 12, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': New Approaches to Energy-Efficient and Resilient HPC  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:d.nikolopoulos@qub.ac.uk d.nikolopoulos@qub.ac.uk]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cs.qub.ac.uk/~D.Nikolopoulos/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
This talk explores new and unconventional directions towards improving the energy-efficiency of HPC systems. Taking a workload-driven approach, we explore micro-servers with programmable accelerators; non-volatile main memory; workload auto-scaling and structured approximate computing. Our research in these areas has achieved significant gains in energy-efficiency while meeting application-specific QoS targets. The talk also reflects on a number of UK and European efforts to create a new energy-efficient and disaggregated ICT ecosystem for data analytics.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Nikolopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Dimitrios S. Nikolopoulos is Professor in the School of EEECS, at Queen's University of Belfast and a Royal Society Wolfson Research Fellow. He holds the Chair in High Performance and Distributed Computing and directs the HPDC Research Cluster, a team of 20 academic and research staff. His research explores scalable computing systems for data-driven applications and new computing paradigms at the limits of performance, power and reliability. Dimitrios received the NSF CAREER Award, the DOE CAREER Award, and the IBM Faculty Award during an eight-year tenure in the United States. He has also been awarded the SFI-DEL Investigator Award, a Marie Curie Fellowship, a HiPEAC Fellowship, and seven Best Paper Awards including some from the leading IEEE and ACM conferences in HPC, such as SC, PPoPP, and IPDPS. His research has produced over 150 top-tier outputs and has received extensive (£10.6m as PI/£39.5m as CoI) and highly competitive research funding from the NSF, DOE, EPSRC, SFI, DEL, Royal Academy of Engineering, Royal Society, European Commission and private sector. Dimitrios is a Fellow of the British Computer Society, Senior Member of the IEEE and Senior Member of the ACM. He earned a PhD (2000) in Computer Engineering and Informatics from the University of Patras. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Lieber, Baruch Barry ==&lt;br /&gt;
Department of Neurosurgery  &lt;br /&gt;
&lt;br /&gt;
Stony Brook University&lt;br /&gt;
&lt;br /&gt;
'''When''': Nov. 6, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Flow Diverters to Cure Cerebral Aneurysms a Case Study - From Concept to Clinical  &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:Baruch.Lieber@stonybrookmedicine.edu Baruch.Lieber@stonybrookmedicine.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://neuro.stonybrookmedicine.edu/about/faculty/lieber &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Ten to fifteen million Americans are estimated to harbor intracranial aneurysms (abnormal bulges of blood vessels located in the brain) that can rupture and expel blood directly into the brain space outside of the arteries, causing a stroke. A flow diverter is a refined tubular mesh-like device that is inserted through a small incision in the groin area (no need for open brain surgery), navigated through a catheter into the cerebral arteries, and delivered into the artery harboring the aneurysm. The permeability of the device is optimized such that it significantly reduces the blood flow in the aneurysm, while keeping small side branches of the artery open to supply critical brain tissue. The biocompatible device elicits a healthy scar-response from the body that lines the inner metal surface of the device with biological tissue, thus restoring the diseased arterial segment to its normal state. Refinement in the design of such devices and prediction of their long-term curative effect, which usually occurs over a period of months, can be significantly helped by computer modeling and simulations of the flow alteration such devices impart to the aneurysm. The evolution of these devices will be discussed from conception to their current clinical use.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: LieberB.jpg|frameless|left|125px |'''Professor  Lieber, Baruch Barry''' ]]&lt;br /&gt;
'''Bio:''' Barry Lieber attended Tel-Aviv University and received a B.Sc. in Mechanical Engineering in 1979. He then attended Georgia Tech and received an M.Sc. in 1982 and a Ph.D. in 1985, both in Aerospace Engineering, working with Dr. Don P. Giddens. Barry Lieber was a Postdoctoral Fellow from 1985-1987 at the Department of Mechanical Engineering at Georgia Tech and also completed a summer fellowship at Imperial College London in 1986. In 1987 Barry Lieber joined the faculty of the Department of Mechanical and Aerospace Engineering at the State University of New York at Buffalo as Assistant Professor. In 1993 he was promoted to the rank of Associate Professor with tenure and in 1998 was promoted to full professor. In 1994 he became Research Professor of Neurosurgery and in 1997 he became the Director of the Center for Bioengineering at the State University of New York at Buffalo, both positions he held until his departure from the university in 2001 to join the University of Miami as professor in the Department of Biomedical Engineering with a joint appointment in the Department of Radiology. In 2010 he joined the State University of New York at Stony Brook at the rank of professor in the Department of Neurosurgery and also serves as program faculty in the Department of Biomedical Engineering. Barry Lieber was elected as fellow of the American Institute for Medical and Biomedical Engineering in 1999. He was elected as fellow of the American Society of Mechanical Engineers in 2005 and served as the Chairman of the Division of Bioengineering of the American Society of Mechanical Engineers in 2009. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Marek Behr ==&lt;br /&gt;
Chair for Computational Analysis of Technical &lt;br /&gt;
&lt;br /&gt;
RWTH Aachen University&lt;br /&gt;
&lt;br /&gt;
Systems, Schinkelstr. 2, 52062 Aachen, Germany&lt;br /&gt;
&lt;br /&gt;
'''When''': July 31, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Enhanced Surface Definition in Moving-Boundary Flow Simulation&lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:behr@cats.rwth-aachen.de behr@cats.rwth-aachen.de]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.cats.rwth-aachen.de&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Moving-boundary flow simulations are an important design and analysis tool in many areas of engineering, including civil and biomedical engineering, as well as production engineering [1]. While interface-capturing offers unmatched flexibility for complex free-surface motion, the interface-tracking approach is very attractive due to its better mass conservation properties at low resolution. We focus on interface-tracking moving-boundary flow simulations based on stabilized discretizations of Navier-Stokes equations, space-time formulations on moving grids, and mesh update mechanisms based on elasticity. However, we also develop techniques that promise to increase the fidelity of the interface-capturing methods.&lt;br /&gt;
&lt;br /&gt;
In order to obtain accurate and smooth shape description of the free surface, as well as accurate flow approximation on coarse meshes, the approach of NURBS-enhanced finite elements (NEFEM) [2] is being applied to various aspects of free-surface flow computations. In NEFEM, certain parts of the boundary of the computational domain are represented using non-uniform rational B-splines (NURBS), therefore making it an effective technique to accurately treat curved boundaries, not only in terms of geometry representation, but also in terms of solution accuracy.&lt;br /&gt;
&lt;br /&gt;
As a step in the direction of NEFEM, the benefits of a purely geometrical NURBS representation of the free-surface could already be shown [3]. The first results with a full NEFEM approach for the flow variables in the vicinity of the moving free surface have also been obtained. The applications include both production engineering, i.e., die swell in plastics processing simulation, and safety engineering, i.e., sloshing phenomena in fluid tanks subjected to external excitation.&lt;br /&gt;
&lt;br /&gt;
Space-time approaches offer some not-yet-fully-exploited advantages when compared to standard discretizations (finite-difference in time and finite-element in space, using either the method of Rothe or the method of lines); among them, the potential to allow some degree of unstructured space-time meshing. A method for generating simplex space-time meshes is presented, allowing arbitrary temporal refinement in selected portions of space-time slabs. The method increases the flexibility of space-time discretizations, even in the absence of dedicated space-time mesh generation tools. The resulting tetrahedral (for 2D problems) and pentatope (for 3D problems) meshes are tested in the context of the advection-diffusion equation, and are shown to provide reasonable solutions, while enabling varying time refinement in portions of the domain [4].&lt;br /&gt;
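&lt;br /&gt;
As a minimal illustration of the NURBS boundary representation discussed above (a generic Python sketch, not code from the NEFEM implementation), the following fragment evaluates a point on a NURBS curve with the Cox-de Boor recursion; with the weights below, a degree-2 rational arc reproduces a quarter circle exactly, which is the kind of curved-boundary fidelity NEFEM exploits.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def bspline_basis(i, p, u, U):&lt;br /&gt;
    # Cox-de Boor recursion for N_{i,p}(u); valid for u inside the knot range&lt;br /&gt;
    if p == 0:&lt;br /&gt;
        return 1.0 if U[i] &lt;= u &lt; U[i + 1] else 0.0&lt;br /&gt;
    left = right = 0.0&lt;br /&gt;
    if U[i + p] != U[i]:&lt;br /&gt;
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)&lt;br /&gt;
    if U[i + p + 1] != U[i + 1]:&lt;br /&gt;
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)&lt;br /&gt;
    return left + right&lt;br /&gt;
&lt;br /&gt;
def nurbs_point(u, P, w, p, U):&lt;br /&gt;
    # C(u) = sum_i N_{i,p}(u) w_i P_i / sum_i N_{i,p}(u) w_i&lt;br /&gt;
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(P))])&lt;br /&gt;
    wN = N * w&lt;br /&gt;
    return (wN[:, None] * P).sum(axis=0) / wN.sum()&lt;br /&gt;
&lt;br /&gt;
# Quarter unit circle represented exactly by a degree-2 rational arc.&lt;br /&gt;
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # control points&lt;br /&gt;
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])         # weights&lt;br /&gt;
U = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]                   # open knot vector&lt;br /&gt;
pt = nurbs_point(0.5, P, w, 2, U)&lt;br /&gt;
print(pt, np.linalg.norm(pt))                        # norm is 1: on the circle&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;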
&lt;br /&gt;
[1] S. Elgeti, M. Probst, C. Windeck, M. Behr, W. Michaeli, and C. Hopmann, &amp;quot;Numerical shape optimization as an approach to extrusion die design&amp;quot;, Finite Elements in Analysis and Design, 61, 35–43 (2012).&lt;br /&gt;
&lt;br /&gt;
[2] R. Sevilla, S. Fernandez-Mendez and A. Huerta, &amp;quot;NURBS-Enhanced Finite Element Method (NEFEM)&amp;quot;, International Journal for Numerical Methods in Engineering, 76, 56–83 (2008).&lt;br /&gt;
&lt;br /&gt;
[3] S. Elgeti, H. Sauerland, L. Pauli, and M. Behr, &amp;quot;On the Usage of NURBS as Interface Representation in Free-Surface Flows&amp;quot;, International Journal for Numerical Methods in Fluids, 69, 73–87 (2012).&lt;br /&gt;
&lt;br /&gt;
[4] M. Behr, &amp;quot;Simplex Space-Time Meshes in Finite Element Simulations&amp;quot;, International Journal for Numerical Methods in Fluids, 57, 1421–1434, (2008).&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_Marek_Behr1.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio:''' Prof. Marek Behr obtained his Bachelor's and Ph.D. degrees in Aerospace Engineering and Mechanics from the University of Minnesota in Minneapolis. After faculty appointments at the University of Minnesota and at Rice University in Houston, he was appointed in 2004 as a Professor of Mechanical Engineering and holder of the Chair for Computational Analysis of Technical Systems at the RWTH Aachen University. Since 2006, he has been the Scientific Director of the Aachen Institute for Advanced Study in Computational Engineering Science, focusing on inverse problems in engineering and funded in the framework of the Excellence Initiative in Germany. Behr advises or has advised over 40 doctoral students, and has published over 65 refereed journal articles and a similar number of conference publications and book chapters. Behr is one of the main developers of the stabilized space-time finite element formulation for deforming-domain flow problems, which has been recently extended to unstructured space-time meshes. He is a long-time expert on parallel computation and large-scale flow simulations and on numerical methods for non-Newtonian fluids. He is a member of several advisory and editorial boards of international journals, and a member of the executive council of the German Association for Computational Mechanics and of the general council of the International Association for Computational Mechanics. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor  Christos Antonopoulos ==&lt;br /&gt;
Department of Electrical and Computer Engineering, &lt;br /&gt;
&lt;br /&gt;
University of Thessaly, Greece&lt;br /&gt;
&lt;br /&gt;
'''When''': June 25, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Disrupting the power/performance/quality tradeoff through approximate and error-tolerant computing &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:cda@inf.uth.gr cda@inf.uth.gr]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.inf.uth.gr/~cda&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
A major obstacle in the path towards exascale computing is the necessity to improve the energy efficiency of systems by two orders of magnitude. Embedded computing also faces similar challenges, in an era when traditional techniques, such as DVFS and Vdd scaling, yield very limited additional returns. Heterogeneous platforms are popular due to their power efficiency. They usually consist of a host processor and a number of accelerators (typically GPUs). They may also integrate multiple cores or processors with inherently different characteristics, or even just configured differently. Additional energy gains can be achieved for certain classes of applications by approximating computations, or in a more aggressive setting even tolerating errors. These opportunities, however, have to be exploited in a careful, educated manner; otherwise they may introduce significant development overhead and may also result in catastrophic failures or uncontrolled degradation of the quality of results. Introducing and tolerating approximations and errors in a disciplined and effective way requires rethinking, redesigning and re-engineering all layers of the system stack, from programming models down to hardware. We will present our experiences from this endeavor in the context of two research projects: Centaurus (co-funded by Greece and the EU) and SCoRPiO (EU FET-Open). We will also discuss our perspective on the main obstacles preventing the wider adoption of approximate and error-aware computing and the necessary steps to be taken to that end.&lt;br /&gt;
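&lt;br /&gt;
As an illustrative sketch of one classic approximate-computing technique, loop perforation (a generic example, not code from the Centaurus or SCoRPiO projects): skipping iterations trades result quality for time and energy, and disciplined use means the quality loss is measured rather than ignored.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def mean_intensity(pixels, perforation=1):&lt;br /&gt;
    # perforation=1 is exact; perforation=k visits only every k-th element&lt;br /&gt;
    sampled = pixels[::perforation]&lt;br /&gt;
    return sum(sampled) / len(sampled)&lt;br /&gt;
&lt;br /&gt;
random.seed(0)&lt;br /&gt;
pixels = [random.random() for _ in range(100_000)]&lt;br /&gt;
exact = mean_intensity(pixels)&lt;br /&gt;
for k in (2, 4, 8):                      # more perforation, less work&lt;br /&gt;
    approx = mean_intensity(pixels, k)&lt;br /&gt;
    err = abs(approx - exact) / exact    # disciplined use: track quality loss&lt;br /&gt;
    print(f'skip factor {k}: about 1/{k} of the work, rel. error {err:.2e}')&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;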
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Antonopoulos.jpg|frameless|left|100px]]&lt;br /&gt;
'''Bio''': Christos D. Antonopoulos is an Assistant Professor at the Department of Electrical and Computer Engineering of the University of Thessaly in Volos, Greece. He earned his PhD (2004), MSc (2001) and Diploma (1998) from the Department of Computer Engineering and Informatics of the University of Patras, Greece. His research interests span the areas of system and applications software for high performance computing, with emphasis on monitoring and adaptivity under performance and power/performance/quality criteria. He is the author of more than 50 refereed technical papers, and has been awarded two best-paper awards. He has been actively involved in several research projects both in the EU and in the USA. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Yongjie Jessica Zhang ==&lt;br /&gt;
Associate Professor in Mechanical Engineering &amp;amp; Courtesy Appointment in Biomedical Engineering&lt;br /&gt;
&lt;br /&gt;
Carnegie Mellon University&lt;br /&gt;
&lt;br /&gt;
'''When''': April 24, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Image-Based Mesh Generation and Volumetric Spline Modeling for Isogeometric Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:jessicaz@andrew.cmu.edu jessicaz@andrew.cmu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.andrew.cmu.edu/~jessicaz&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
With finite element methods and scanning technologies seeing increased use in many research areas, there is an emerging need for high-fidelity geometric modeling and mesh generation of spatially realistic domains. In this talk, I will highlight our research in three areas: image-based mesh generation for complicated domains, trivariate spline modeling for isogeometric analysis, and biomedical, materials science and engineering applications. I will first present advances and challenges in image-based geometric modeling and meshing along with a comprehensive computational framework, which integrates image processing, geometric modeling, mesh generation and quality improvement with multi-scale analysis at molecular, cellular, tissue and organ scales. Different from other existing methods, the presented framework supports five unique features: high-fidelity meshing for heterogeneous domains with topology ambiguity resolved; multiscale geometric modeling for biomolecular complexes; automatic all-hexahedral mesh generation with sharp feature preservation; robust quality improvement for non-manifold meshes; and guaranteed-quality meshing. These unique capabilities enable accurate, stable, and efficient mechanics calculations for many biomedicine, materials science and engineering applications.&lt;br /&gt;
&lt;br /&gt;
In the second part of this talk, I will show our latest research on volumetric spline parameterization, which contributes directly to the integration of design and analysis, the root idea of isogeometric analysis. For arbitrary topology objects, we first build a polycube whose topology is equivalent to that of the input geometry, and it serves as the parametric domain for the following trivariate T-spline construction. Boolean operations and geometry skeleton can also be used to preserve surface features. A parametric mapping is then used to build a one-to-one correspondence between the input geometry and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement, and applying templates or truncation mechanisms coupled with subdivision to handle extraordinary nodes. The parametric mapping method has been further extended to conformal solid T-spline construction with the input surface parameterization preserved and trimming curves handled.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Jessica.jpg|frameless|left|120px]]&lt;br /&gt;
'''Bio''': Yongjie Jessica Zhang is an Associate Professor in Mechanical Engineering at Carnegie Mellon University with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics, both from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, finite element method, isogeometric analysis and their application in computational biomedicine, materials science and engineering. She has co-authored over 100 publications in peer-reviewed journals and conference proceedings. She is the recipient of the Presidential Early Career Award for Scientists and Engineers, NSF CAREER Award, Office of Naval Research Young Investigator Award, USACM Gallagher Young Investigator Award, Clarence H. Adamson Career Faculty Fellow in Mechanical Engineering, George Tallman Ladd Research Award, and Donald L. &amp;amp; Rhonda Struminger Faculty Fellow. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor David Marcum ==&lt;br /&gt;
Billie J. Ball Professor and  Chief Scientist&lt;br /&gt;
&lt;br /&gt;
Center for Advanced Vehicular Systems, Mechanical Engineering Department, Mississippi State University&lt;br /&gt;
&lt;br /&gt;
'''When''': March 20, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': AFLR Unstructured Meshing and CFD Modeling and Simulation Research Activities at the Center for Advanced Vehicular Systems &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:marcum@cavs.msstate.edu marcum@cavs.msstate.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.me.msstate.edu/faculty/marcum/marcum.html &lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
Mesh generation and associated geometry preparation are critical aspects of any computational field simulation (CFS) process. In particular, the mesh used can have a significant impact on the accuracy, effectiveness, and efficiency of the CFS solver. Further, typical users spend a considerable portion of the overall effort on mesh and geometry issues. All of this is particularly critical for CFD applications. AFLR is an unstructured mesh generator designed with a focus on addressing these issues for complex geometries. It is widely used, readily available to Government and Academic users, and has been very successful with relevant problems. AFLR volume and surface meshing is also directly incorporated in several systems, including: DoD CREATE-MG Capstone, Lockheed Martin/DoD ACAD, Boeing MADCAP, MSU SolidMesh, and Altair HyperMesh. In this talk we will provide an overview of this technology, future directions, and plans for multi-tasking/parallel operation.&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Marcum David.jpg|frameless|left|125px]]&lt;br /&gt;
'''Bio''': Dr. Marcum is Professor of Mechanical Engineering at Mississippi State University (MSU) and Chief Scientist for CFD within the Center for Advanced Vehicular Systems (CAVS). He has 30 years of experience in the development of CFD and unstructured grid technology. Before joining MSU in 1991, Dr. Marcum was a Scientist and Senior Engineer at McDonnell Douglas Research Laboratories and Boeing Commercial Airplane Company. He received his Ph.D. from Purdue University in 1985. Prior to that he was a Senior Engineer from 1978 through 1983 at TRW Ross Gear Division. At MSU, Dr. Marcum served as Thrust Leader and Director of the NSF ERC for Computational Field Simulation. As Director, he led the transition from the graduated NSF ERC to its current form as the High Performance Computing Collaboratory (HPC²). Dr. Marcum also served as Deputy Director and Director of the SimCenter (an HPC² member center and currently merged within CAVS). He is currently Chief Scientist for CFD within CAVS (also an HPC² member center). As Chief Scientist for CFD, he is directly involved in the research activities of a team of multi-disciplinary researchers working on CFD-related projects for DoD, DoE, NASA, NSF, and industry. Computational tools produced by these projects at MSU within the ERC, SimCenter and CAVS, and in particular Dr. Marcum’s AFLR unstructured mesh generator, are in use throughout aerospace, automotive and DoD organizations. Dr. Marcum is widely recognized for his contributions to unstructured grid technology and is currently Honorary Professor at the University of Wales, Swansea, UK, and a previous Invited Professor at INRIA, Paris-Rocquencourt, France. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Kyle Gallivan ==&lt;br /&gt;
Professor Mathematics Department&lt;br /&gt;
&lt;br /&gt;
Florida State University&lt;br /&gt;
&lt;br /&gt;
'''When''': January 23, 2015, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': Riemannian Optimization for Elastic Shape Analysis &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:kgallivan@fsu.edu kgallivan@fsu.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://www.math.fsu.edu/~gallivan/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
In elastic shape analysis, a representation of a shape is invariant to translation, scaling, rotation and reparameterization, and important problems (such as computing the distance and geodesic between two curves, the mean of a set of curves, and other statistical analyses) require finding the best rotation and re-parameterization between two curves. In this talk, I focus on this key subproblem and study different tools for optimization on the joint group of rotations and re-parameterizations. I will give a brief account of a novel Riemannian optimization approach and evaluate its use in computing the distance between two curves and classification using two public data sets. Experiments show significant advantages in computational time and reliability in performance compared to the current state-of-the-art method.&lt;br /&gt;
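&lt;br /&gt;
A minimal Python sketch, assuming the square-root velocity (SRVF) representation that is standard in elastic shape analysis: the rotation is aligned by orthogonal Procrustes, while the joint optimization over re-parameterizations that the talk addresses is omitted for brevity.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def srvf(curve):&lt;br /&gt;
    # curve: (n, 2) samples; q = c' / sqrt(|c'|), normalized to remove scale&lt;br /&gt;
    v = np.gradient(curve, axis=0)&lt;br /&gt;
    speed = np.linalg.norm(v, axis=1, keepdims=True)&lt;br /&gt;
    q = v / np.sqrt(np.maximum(speed, 1e-12))&lt;br /&gt;
    return q / np.linalg.norm(q)&lt;br /&gt;
&lt;br /&gt;
def best_rotation(q1, q2):&lt;br /&gt;
    # proper rotation R minimizing ||q1 - q2 R|| (orthogonal Procrustes)&lt;br /&gt;
    U, _, Vt = np.linalg.svd(q2.T @ q1)&lt;br /&gt;
    d = np.sign(np.linalg.det(U @ Vt))&lt;br /&gt;
    return U @ np.diag([1.0, d]) @ Vt&lt;br /&gt;
&lt;br /&gt;
def elastic_distance(c1, c2):&lt;br /&gt;
    q1, q2 = srvf(c1), srvf(c2)&lt;br /&gt;
    return np.linalg.norm(q1 - q2 @ best_rotation(q1, q2))&lt;br /&gt;
&lt;br /&gt;
t = np.linspace(0.0, 2.0 * np.pi, 200)&lt;br /&gt;
circle = np.column_stack([np.cos(t), np.sin(t)])&lt;br /&gt;
rotated = circle @ np.array([[0.0, -1.0], [1.0, 0.0]])   # same shape, rotated&lt;br /&gt;
ellipse = np.column_stack([2.0 * np.cos(t), np.sin(t)])&lt;br /&gt;
print(elastic_distance(circle, rotated))   # ~0: rotation is factored out&lt;br /&gt;
print(elastic_distance(circle, ellipse))   # nonzero: genuine shape difference&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;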
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_marcum.jpg|frameless|left|250px]]&lt;br /&gt;
'''Bio''': Kyle A. Gallivan is a Professor of Mathematics at Florida State University. Gallivan received the Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1983 under the direction of C. W. Gear. He worked on special purpose signal processors in the Government Aerospace Systems Division of Harris Corporation.  He was a research computer scientist at the Center for Supercomputing Research and Development at the University of Illinois from 1985 until 1993 when he moved to the Department of Electrical and Computer Engineering. From 1997 to 2008 he was a member of the Department of Computer Science at Florida State University (FSU) and a member of the Computational Science and Engineering group becoming a full Professor in 1999. He became a Professor of Mathematics at FSU in 2008 and was selected the 2012 Pascal Professor for the Faculty of Sciences of the University of Leiden in the Netherlands. He has been a Visiting Professor at the Catholic University of Louvain in Belgium multiple times through a long-standing research collaboration with colleagues there.&lt;br /&gt;
&lt;br /&gt;
Over the years Gallivan's research has included: design and analysis of high-performance numerical algorithms, pioneering work on block algorithms for numerical linear algebra, performance analysis of the experimental Cedar system, restructuring compilers, model reduction of large scale differential equations, and high-performance codes for application such as ocean circulation, circuit simulation and the codes in the Perfect Benchmark Suite. Gallivan's current main research concerns optimization algorithms on Riemannian manifolds and their use in applications such as shape analysis, statistics, and signal/image processing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professor Suzanne M. Shontz ==&lt;br /&gt;
Department of Electrical Engineering and Computer Science&lt;br /&gt;
&lt;br /&gt;
University of Kansas&lt;br /&gt;
&lt;br /&gt;
'''When''': November 7, 2014, 10:30AM&lt;br /&gt;
&lt;br /&gt;
'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
'''What''': A parallel log barrier for mesh quality improvement and updating &lt;br /&gt;
&lt;br /&gt;
'''Email''': [mailto:shontz@ku.edu shontz@ku.edu]&lt;br /&gt;
&lt;br /&gt;
'''Homepage''': http://people.eecs.ku.edu/~shontz/&lt;br /&gt;
&lt;br /&gt;
'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
There are numerous applications in science, engineering, and medicine which require high-quality meshes, i.e., discretizations of the geometry, for use in computational simulations.  For example, meshes have been used to enable accurate prediction of the performance, reliability, and safety of solid propellant rockets.  The movie industry in Hollywood typically employs dynamic meshes in order to animate characters in films.  Large-scale applications often require meshes with millions to billions of elements that are generated and manipulated in parallel.  The advent of supercomputers with hundreds to thousands of cores has made this possible.&lt;br /&gt;
&lt;br /&gt;
The focus of my talk will be on parallel algorithms for mesh quality improvement and mesh untangling. Such algorithms are needed, for example, when a large-scale mesh deformation is applied and tangled and/or low-quality meshes result. Prior efforts in these areas have focused on the development of parallel algorithms for mesh generation and local mesh quality improvement in which only one vertex is moved at a time. In contrast, we are concerned with the development of parallel global algorithms for mesh quality improvement and untangling in which all vertices are moved simultaneously. I will present our parallel log-barrier mesh quality improvement and untangling algorithms for distributed-memory machines. Our algorithms simultaneously move the mesh vertices in order to optimize a log-barrier objective function that was designed to improve the quality of the worst quality mesh elements. We employ an edge-coloring-based algorithm for synchronizing unstructured communication among the processes executing the log-barrier mesh optimization algorithm. The main contribution of this work is a generic scheme for global mesh optimization. The algorithm shows greater strong scaling efficiency compared to an existing parallel mesh quality improvement technique. Portions of this talk represent joint work with Shankar Prasad Sastry, University of Utah, and Stephen Vavasis, University of Waterloo.&lt;br /&gt;
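&lt;br /&gt;
A minimal serial Python sketch of the log-barrier idea (an illustration, not the authors' distributed implementation, which moves all vertices in parallel with edge-coloring synchronization): the worst-element quality bound t is kept just below the current minimum while gradient ascent on phi(x, t) = t + mu * sum_e log(q_e(x) - t) pushes the worst elements up.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def tri_quality(a, b, c):&lt;br /&gt;
    # normalized shape quality in (0, 1]; an equilateral triangle scores 1&lt;br /&gt;
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))&lt;br /&gt;
    edges = sum(float(np.dot(d, d)) for d in (b - a, c - b, a - c))&lt;br /&gt;
    return 4.0 * np.sqrt(3.0) * area / edges&lt;br /&gt;
&lt;br /&gt;
def barrier(x, t, tris, mu):&lt;br /&gt;
    # phi = t + mu * sum_e log(q_e - t); finite only while every q_e exceeds t&lt;br /&gt;
    q = np.array([tri_quality(x[i], x[j], x[k]) for i, j, k in tris])&lt;br /&gt;
    return -np.inf if np.any(q &lt;= t) else t + mu * np.log(q - t).sum()&lt;br /&gt;
&lt;br /&gt;
def improve(x, tris, free, mu=1e-3, step=0.02, iters=300, h=1e-6):&lt;br /&gt;
    for _ in range(iters):&lt;br /&gt;
        # simplification: track t just below the worst quality instead of&lt;br /&gt;
        # optimizing (x, t) jointly as the full method does&lt;br /&gt;
        t = min(tri_quality(x[i], x[j], x[k]) for i, j, k in tris) - 1e-3&lt;br /&gt;
        f0, g = barrier(x, t, tris, mu), np.zeros_like(x)&lt;br /&gt;
        for v in free:                      # finite-difference gradient;&lt;br /&gt;
            for d in (0, 1):                # all free vertices move together&lt;br /&gt;
                x[v, d] += h&lt;br /&gt;
                g[v, d] = (barrier(x, t, tris, mu) - f0) / h&lt;br /&gt;
                x[v, d] -= h&lt;br /&gt;
        x[free] += step * g[free] / (np.linalg.norm(g[free]) + 1e-12)&lt;br /&gt;
    return x&lt;br /&gt;
&lt;br /&gt;
# unit square split into 4 triangles around an off-center interior vertex&lt;br /&gt;
x = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.8, 0.7]])&lt;br /&gt;
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]&lt;br /&gt;
x = improve(x, tris, free=[4])&lt;br /&gt;
print(x[4])   # drifts toward (0.5, 0.5), where the worst quality is maximized&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;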
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Visitor_shontz.jpg|frameless|left|150px]]&lt;br /&gt;
'''Bio''': Suzanne M. Shontz is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Kansas. She is also affiliated with the Graduate Program in Bioengineering and the Information and Telecommunication Technology Center.  Prior to joining the University of Kansas in 2014, Suzanne was on the faculty at Mississippi State and Pennsylvania State Universities.  She was also a postdoc at the University of Minnesota and earned her Ph.D. in Applied Mathematics from Cornell University.&lt;br /&gt;
&lt;br /&gt;
Suzanne's research efforts focus centrally on parallel scientific computing, more specifically, the design and analysis of unstructured mesh, numerical optimization, model order reduction, and numerical linear algebra algorithms and their applications to medicine, imaging, electronic circuits, materials, and other areas. In 2012, she was awarded an NSF Presidential Early CAREER Award (i.e., NSF PECASE Award) by President Obama for her research in computational- and data-enabled science and engineering. Suzanne also received an NSF CAREER Award for her research on parallel dynamic meshing algorithms, theory, and software for simulation-assisted medical interventions in 2011 and a Summer Faculty Fellowship from the Office of Naval Research in 2009. She has chaired or co-chaired several top conferences in computational- and data-enabled science and engineering, including the International Meshing Roundtable in 2010 and the NSF CyberBridges Workshop in 2012-2014, and has served on numerous program committees in the field. Suzanne is also an Associate Editor for the Book Series in Medicine by De Gruyter Open. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workshops =&lt;br /&gt;
&lt;br /&gt;
== Parallel Software Runtime System Workshop ==&lt;br /&gt;
&lt;br /&gt;
''' When ''' : 24-25 May 2017&lt;br /&gt;
&lt;br /&gt;
''' Place ''' : NASA/LaRC &amp;amp; NIA&lt;br /&gt;
&lt;br /&gt;
''' Participants ''' : Pete Beckman (ANL), Halim Amer (ANL), Dana P. Hammond (NASA LaRC), Nikos Chrisochoides (ODU), Andriy Kot (NCSA, UIUC), Fotis Drakopoulos (ODU), Thomas Kennedy (ODU), Christos Tsolakis (ODU), Kevin Garner (ODU), Polykarpos Thomadakis (ODU)&lt;br /&gt;
&lt;br /&gt;
== Isotropic Advancing Front Local Reconnection Hands-On Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; ODU: Nikos Chrisochoides, Fotis Drakopoulos, Thomas Kennedy, Christos Tsolakis, Kevin Garner, Polykarpos Thomadakis &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When: March 20-21, 2015&lt;br /&gt;
&lt;br /&gt;
== HPC Middleware for Mesh Generation and High Order Geometry Approximation Workshop ==&lt;br /&gt;
Attendees: NASA/LaRC: Dr. Bill Jones, Dr. Mike Mark, Dr. Dana Hammond; NIA: Boris Diskin; ODU: Nikos Chrisochoides&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; ''' Dr. Navamita Ray ''' &amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Los Alamos National Laboratory, Mathematics and Computer Science Division &lt;br /&gt;
&lt;br /&gt;
:Los Alamos, New Mexico&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 25, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Towards Scalable Framework for Geometry and Meshing in Scientific Computing &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:navamitaray@gmail.com navamitaray@gmail.com]&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
:High-fidelity computational modeling of complex, coupled physical phenomena occurring in several scientific fields requires accurate resolution of intricate geometry features, generation of good-quality unstructured meshes that minimize modeling errors, scalable interfaces to load/manipulate/traverse these meshes in memory, and support for I/O for check-pointing and in-situ visualization. While several applications tend to create custom HPC solutions to tackle the heterogeneous descriptions of physical models, such approaches lack generality, interoperability and extensibility, making it difficult to maintain scalability of the individual representations. In this talk, we introduce the component-based open-source '''SIGMA''' (Scalable Interfaces for Geometry and Mesh based Applications) toolkit, an effort to address these issues. We focus particularly on its array-based unstructured mesh representation component, Mesh Oriented datABase ('''MOAB'''), which provides scalable interfaces to geometry, mesh and solvers to allow seamless integration into computational workflows. &lt;br /&gt;
:[[File: Navamita.jpg|frameless|left|120px]]Based on the three fundamental units consisting of 1) compact array-based memory management for mesh and field data, 2) efficient mesh data structures for traversals and querying, and 3) scalable parallel communication algorithms for distributed meshes, MOAB supports various advanced algorithms such as I/O, in-memory mesh modification and refinement, multi-mesh projections, high-order boundary reconstruction, etc. We discuss some of these advanced algorithms and their applications.&lt;br /&gt;
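&lt;br /&gt;
:A toy Python illustration of the array-based storage idea (this is not MOAB's actual API): vertex coordinates and element connectivity live in contiguous arrays, so element access is a single gather and vertex-adjacency queries are answered from an inverted index.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from collections import defaultdict&lt;br /&gt;
&lt;br /&gt;
class ArrayMesh:&lt;br /&gt;
    # toy array-based mesh: coords (n_verts, dim), conn (n_elems, k)&lt;br /&gt;
    def __init__(self, coords, conn):&lt;br /&gt;
        self.coords = np.asarray(coords, dtype=float)&lt;br /&gt;
        self.conn = np.asarray(conn, dtype=np.int64)&lt;br /&gt;
        self.v2e = defaultdict(list)        # inverted index: vertex to elements&lt;br /&gt;
        for e, verts in enumerate(self.conn):&lt;br /&gt;
            for v in verts:&lt;br /&gt;
                self.v2e[int(v)].append(e)&lt;br /&gt;
&lt;br /&gt;
    def element_coords(self, e):&lt;br /&gt;
        return self.coords[self.conn[e]]    # one contiguous gather&lt;br /&gt;
&lt;br /&gt;
    def neighbors(self, e):&lt;br /&gt;
        # elements sharing at least one vertex with element e&lt;br /&gt;
        out = {n for v in self.conn[e] for n in self.v2e[int(v)]}&lt;br /&gt;
        out.discard(e)&lt;br /&gt;
        return sorted(out)&lt;br /&gt;
&lt;br /&gt;
mesh = ArrayMesh([[0, 0], [1, 0], [1, 1], [0, 1]],   # two triangles on a square&lt;br /&gt;
                 [[0, 1, 2], [0, 2, 3]])&lt;br /&gt;
print(mesh.element_coords(0))&lt;br /&gt;
print(mesh.neighbors(1))                             # prints [0]&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;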
&lt;br /&gt;
:'''Bio''': Dr. Navamita Ray is a postdoctoral appointee on the SIGMA team in the Mathematics and Computer Science Division at Argonne National Laboratory, Argonne, IL. She has been involved in research on flexible mesh data structures for mesh adaptivity as well as high-fidelity discrete boundary representation. Dr. Ray holds a Ph.D. in Applied Mathematics from Stony Brook University, where she did graduate work on high-order surface reconstruction and its applications to surface integrals and remeshing. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;u&amp;gt; '''Dr. Xiangmin (Jim) Jiao '''&amp;lt;/u&amp;gt;&lt;br /&gt;
:Associate Professor and AMS Ph.D. Program Director, Department of Applied Mathematics and Statistics and Institute for Advanced Computational Science&lt;br /&gt;
&lt;br /&gt;
:Stony Brook University&lt;br /&gt;
&lt;br /&gt;
:'''When''': March 3, 2016, 10:30AM&lt;br /&gt;
&lt;br /&gt;
:'''Where''': E &amp;amp; CS Auditorium, First Floor&lt;br /&gt;
&lt;br /&gt;
:'''What''': Robust Adaptive High-Order Geometric and Numerical Methods Based on Weighted Least Squares &lt;br /&gt;
&lt;br /&gt;
:'''Email''': [mailto:xiangmin.jiao@stonybrook.edu xiangmin.jiao@stonybrook.edu]&lt;br /&gt;
&lt;br /&gt;
:'''Homepage''': http://www.ams.sunysb.edu/~jiao&lt;br /&gt;
&lt;br /&gt;
:'''ABSTRACT'''&lt;br /&gt;
&lt;br /&gt;
:Numerical solutions of partial differential equations (PDEs) are important for modeling and simulations in many scientific and engineering applications. Their solutions over complex geometries pose significant challenges in efficient surface and volume mesh generation and robust numerical discretizations. In this talk, we present our recent work in tackling these challenges from two aspects. First, we will present accurate and robust high-order geometric algorithms on discrete surfaces, to support high-order surface reconstruction, surface mesh generation and adaptation, and computation of differential geometric operators, without the need to access the CAD models. Secondly, we present some new numerical discretization techniques, including a generalized finite element method based on adaptive extended stencils, and a novel essentially nonoscillatory scheme for hyperbolic conservation laws on unstructured meshes. These new discretizations are more tolerant of mesh quality and allow accurate, stable and efficient computations even on meshes with poorly shaped elements. Based on a unified theoretical framework of weighted least squares, these techniques can significantly simplify the mesh generation processes, especially on supercomputers, and also enable more efficient and robust numerical computations. We will present the theoretical foundation of our methods and demonstrate their applications for mesh generation and numerical solutions of PDEs.&lt;br /&gt;
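&lt;br /&gt;
:A minimal Python sketch of the weighted-least-squares building block (illustrative only, not the speaker's code): fitting a local quadratic height function to noisy neighbors with Gaussian weights, the core step behind high-order surface reconstruction without access to a CAD model.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def wls_quadratic(neighbors, center, sigma=0.5):&lt;br /&gt;
    # fit z = a0 + a1 dx + a2 dy + a3 dx^2 + a4 dx dy + a5 dy^2 around center&lt;br /&gt;
    d = neighbors[:, :2] - center[:2]&lt;br /&gt;
    w = np.exp(-(d ** 2).sum(axis=1) / (2.0 * sigma ** 2))  # Gaussian weights&lt;br /&gt;
    V = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],&lt;br /&gt;
                         d[:, 0] ** 2, d[:, 0] * d[:, 1], d[:, 1] ** 2])&lt;br /&gt;
    sw = np.sqrt(w)[:, None]                # sqrt-weights turn WLS into OLS&lt;br /&gt;
    coef, *_ = np.linalg.lstsq(sw * V, sw[:, 0] * neighbors[:, 2], rcond=None)&lt;br /&gt;
    return coef                             # coef[0] is the height at center&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(1)&lt;br /&gt;
xy = rng.uniform(-1.0, 1.0, size=(200, 2))&lt;br /&gt;
z = xy[:, 0] ** 2 - xy[:, 1] ** 2 + 0.01 * rng.normal(size=200)  # noisy saddle&lt;br /&gt;
pts = np.column_stack([xy, z])&lt;br /&gt;
coef = wls_quadratic(pts, center=np.zeros(3))&lt;br /&gt;
print(coef)   # close to [0, 0, 0, 1, 0, -1] for the saddle z = x^2 - y^2&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;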
&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
:[[File: Collaborator Jiao.jpg|frameless|left|100px]]&lt;br /&gt;
:'''Bio''': Dr. Xiangmin (Jim) Jiao is an Associate Professor in Applied Mathematics and Computer Science, and also a core faculty member of the Institute for Advanced Computational Science at Stony Brook University. He received his Ph.D. in Computer Science in 2001 from the University of Illinois at Urbana-Champaign (UIUC). He was a Research Scientist at the Center for Simulation of Advanced Rockets (CSAR) at UIUC between 2001 and 2005, and then an Assistant Professor in the College of Computing at Georgia Institute of Technology between 2005 and 2007. His research interests focus on high-performance geometric and numerical computing, including applied computational and differential geometry, generalized finite difference and finite element methods, multigrid and iterative methods for sparse linear systems, multiphysics coupling, and problem solving environments, with applications in computational fluid dynamics, structural mechanics, biomedical engineering, climate modeling, etc. &amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CNF Imaging Workshop ==&lt;br /&gt;
&lt;br /&gt;
:'''When''': August 2019&lt;br /&gt;
&lt;br /&gt;
:'''Where''': tbd&lt;br /&gt;
&lt;br /&gt;
:'''More Information''': [[CNF_Imaging_Workshop | CNF Imaging Workshop ]]&lt;br /&gt;
&lt;br /&gt;
= Outreach =&lt;br /&gt;
&lt;br /&gt;
== Surgical Planning Lab ==&lt;br /&gt;
''' When ''' : April 8 &amp;amp; 9, 2016&lt;br /&gt;
''' Where ''' : Brigham and Women's Hospital &amp;amp; Harvard Medical School, Boston&lt;br /&gt;
&lt;br /&gt;
Posters presented at the 25th anniversary of the SPL: &lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_CBC3D.pdf Lattice-Based Multi-Tissue Mesh Generation for Biomedical Applications]&lt;br /&gt;
&lt;br /&gt;
Fotis Drakopoulos and Nikos Chrisochoides : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_NRR.pdf Deformable Registration of Pre-Op MRI with iMRI for Brain Tumor Resection: Progress Report]&lt;br /&gt;
&lt;br /&gt;
Nikos Chrisochoides, Andrey Chernikov and Christos Tsolakis : [http://www.cs.odu.edu/crtc/papers/SPL25/Chrisochoides_Telescopic.pdf Extreme Scale Mesh Generation for Big-Data Medical Images]&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4021</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4021"/>
				<updated>2019-10-08T17:47:41Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Gagik Gavalian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF)  High Performance Computing  (HPC)  Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]): '''100 Exploration Way Hampton, VA 23666''' on '''Thursday, October 10th, 2019''' from 9am-2pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next generation HPC for processing sensor data for Imaging in Nuclear Femtography is entering one of its very early stages. The complexity of seven-dimensional data, together with the many scales and levels of interaction between the colliding particles and what is observed, creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to put together an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]''' ) and&lt;br /&gt;
* leverage advanced software systems from Cloud-, Edge- and Exascale-computing, with the long-term aim to enable next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops is bringing together HPC leaders on software systems from ANL and VATech, and experts in Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials from NASA/LaRC, to build State- and Nation-wide bridges for leveraging Exascale-, Cloud- and Edge-computing for CNF activities. &lt;br /&gt;
&lt;br /&gt;
The CRTC group in the Department of Computer Science at ODU is collaborating with some of the most advanced groups world-wide in high-performance computing: (i) Argonne National Laboratory, namely its Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high performance computing with its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science Departments across Virginia’s Commonwealth, like VATech, W&amp;amp;M and VCU. &lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top), using HPC tessellation technologies developed for Medical Image Computing applications, and the CFD 2030 Vision (bottom). &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase space data with the software suite residing inside a Docker container. The tessellation data in figure (right) depict a spatial distribution of up quarks as a function of proton's momentum fraction carried by those quarks; bX and bY, spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of proton’s momentum and color denotes probability density for finding a quark at given (bX, bY, x). These preliminary data are generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using Paraview.]]&lt;br /&gt;
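&lt;br /&gt;
As an illustrative sketch of the tessellation step (using SciPy's Delaunay triangulation as a generic stand-in, not CRTC's CNF_I2M tool): scattered (bX, bY, x) samples with density values become a tetrahedral mesh that carries a piecewise-linear density field.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.spatial import Delaunay&lt;br /&gt;
from scipy.interpolate import LinearNDInterpolator&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
pts = rng.uniform(-1.0, 1.0, size=(500, 3))      # stand-in (bX, bY, x) samples&lt;br /&gt;
rho = np.exp(-(pts ** 2).sum(axis=1))            # stand-in probability density&lt;br /&gt;
&lt;br /&gt;
mesh = Delaunay(pts)                             # tetrahedral tessellation&lt;br /&gt;
field = LinearNDInterpolator(mesh, rho)          # density field over the mesh&lt;br /&gt;
print(mesh.simplices.shape)                      # (n_tets, 4) connectivity&lt;br /&gt;
print(field([[0.0, 0.0, 0.0]]))                  # density near the origin&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;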
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to Center for Nuclear Femtography  (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLab (Amber) &lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM:  Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL  (Pete)&lt;br /&gt;
* '''12:00PM: break 15 min. (prep for lunch:$15 lunch upon request can be made available)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be delivered to the workshop location and will be ordered from Jason’s Deli'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030  and RTS by leveraging DoE's ANL Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here : https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty in the Electrical Engineering and Computer Science Department at Northwestern University, where she was a member of the faculty for 11 years. In 2003, Valerie Taylor joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Some of her research interests are high-performance computing, performance analysis and modeling, and power analysis. Currently, she is focused on the areas of performance analysis, power analysis and resiliency. Valerie Taylor is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and he was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and is a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, gaining expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards:  Theoretical and Computational Physics at DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards is deputy director of the Theory Center (Theoretical and Computational Physics) at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research is aimed at garnering a better understanding of so-called &amp;quot;excited states&amp;quot;: subatomic particles that were once the familiar protons and neutrons, but now have additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of so-called hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, and was co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian is a Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''' He attended Yerevan State University and graduated in 1996 with a major in Physics. He obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Postdoctoral Research Associate at Old Dominion University until 2008, and as an Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. He joined Jefferson Lab as a staff scientist in 2014 and has been preparing the CLAS12 data analysis packages for expedient analysis. He also mentors doctoral candidates and college students. For the past four years, Gagik has worked on implementing CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework. The CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software successfully tested for full data production. For the past year (2017-2018), Gagik has led the development of physics analysis software for CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4020</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=4020"/>
				<updated>2019-10-08T17:42:08Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
The Center for Nuclear Femtography (CNF) High Performance Computing (HPC) Mini-Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]), '''100 Exploration Way, Hampton, VA 23666''', on '''Thursday, October 10th, 2019''', from 9am to 2pm. The Workshop is expected to be '''highly interactive'''.&lt;br /&gt;
&lt;br /&gt;
Next-generation HPC for processing sensor data for imaging in nuclear femtography is in its very early stages. The complexity arising from seven-dimensional data, and from the many scales and levels of interaction between the colliding particles and what is observed, creates many challenges. To address these challenges, the “Next-generation imaging filters and mesh-based data representation for phase-space calculations in nuclear femtography (CNF19-04)” project proposed to put together an interdisciplinary team to:&lt;br /&gt;
&lt;br /&gt;
* learn lessons from the medical image computing community (see '''[[ CNF_Imaging_Workshop | Part I of HPC/Imaging mini-workshop]]''' ) and&lt;br /&gt;
* leverage advanced software systems from Cloud, Edge, and Exascale computing, with the long-term aim of enabling next-generation process simulations, data analyses, and physics model comparisons&lt;br /&gt;
&lt;br /&gt;
Part II of the CNF series of mini-workshops is bringing together HPC leaders on software systems from ANL and VATech, and experts in Computational Fluid Dynamics, Nondestructive Evaluation, and Computational Materials from NASA/LaRC, to build state- and nation-wide bridges for leveraging Exascale, Cloud, and Edge computing for CNF activities.&lt;br /&gt;
&lt;br /&gt;
The CRTC group in the Computer Science Department at ODU is collaborating with some of the most advanced groups worldwide in high-performance computing: (i) Argonne National Laboratory, namely its Mathematics and Computer Science (MCS) Division, which &amp;quot;provides the numerical tools and technology for solving some of our nation’s most critical scientific problems&amp;quot;; (ii) NASA's LaRC, which has a long history in high-performance computing through its former Institute for Computer Applications in Science and Engineering (ICASE) and its evolution into the current National Institute of Aerospace (NIA); and (iii) many Computer Science departments across Virginia’s Commonwealth, such as VATech, W&amp;amp;M, and VCU.&lt;br /&gt;
&lt;br /&gt;
The long-term goal for such activities is the development of an HPC infrastructure for efficient simulation and analysis of nuclear femtography experiments, allowing users to implement physics models, generate phase space distributions, constrain model parameters with forthcoming experimental data (fits), and share/communicate results. This mini-workshop is the first step towards achieving this goal by exploring the potential of further interdisciplinary collaborations involving in- and out-of-state experts and new computational methods.&lt;br /&gt;
&lt;br /&gt;
The figure below depicts preliminary capabilities for imaging CNF data (top) using HPC tessellation technologies developed for Medical Image Computing applications, and for the CFD 2030 Vision (bottom).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data, with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using ParaView.]]&lt;br /&gt;
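As a concrete (hypothetical) illustration of this workflow, the minimal Python sketch below drives one containerized meshing step. The image name crtc/cnf-i2m, the --input/--output flags, and the file names are assumptions for illustration only, not the actual CNF_I2M interface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical driver for the containerized meshing step (illustration only).&lt;br /&gt;
# Assumed: a Docker image &amp;quot;crtc/cnf-i2m&amp;quot; that reads a phase-space sample&lt;br /&gt;
# file and writes a tessellation (.vtu) that ParaView can open.&lt;br /&gt;
import subprocess&lt;br /&gt;
from pathlib import Path&lt;br /&gt;
&lt;br /&gt;
data_dir = Path(&amp;quot;phase_space&amp;quot;).resolve()&lt;br /&gt;
&lt;br /&gt;
def tessellate(sample, output):&lt;br /&gt;
    # Mount the host data directory into the container and run the tool.&lt;br /&gt;
    subprocess.run(&lt;br /&gt;
        [&amp;quot;docker&amp;quot;, &amp;quot;run&amp;quot;, &amp;quot;--rm&amp;quot;,&lt;br /&gt;
         &amp;quot;-v&amp;quot;, str(data_dir) + &amp;quot;:/data&amp;quot;,&lt;br /&gt;
         &amp;quot;crtc/cnf-i2m&amp;quot;,                  # assumed image name&lt;br /&gt;
         &amp;quot;--input&amp;quot;, &amp;quot;/data/&amp;quot; + sample,   # assumed flags&lt;br /&gt;
         &amp;quot;--output&amp;quot;, &amp;quot;/data/&amp;quot; + output],&lt;br /&gt;
        check=True)&lt;br /&gt;
&lt;br /&gt;
# The resulting mesh can then be opened in ParaView for visualization.&lt;br /&gt;
tessellate(&amp;quot;up_quark_samples.csv&amp;quot;, &amp;quot;up_quark_mesh.vtu&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;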
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
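For reference, the two-peak Gaussian driving this benchmark can be modeled as a sum of two isotropic Gaussians; the short Python/NumPy sketch below is one such field under assumed peak centers and width (the benchmark's actual parameters are not listed here). An adaptive mesher refines where this field, or its gradient, varies rapidly, yielding graded meshes like those shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def two_peak_gaussian(x, y, p1=(-1.0, 0.0), p2=(1.0, 0.0), sigma=0.3):&lt;br /&gt;
    # Sum of two isotropic Gaussians; the peak centers p1, p2 and the&lt;br /&gt;
    # width sigma are illustrative assumptions, not the benchmark values.&lt;br /&gt;
    g1 = np.exp(-((x - p1[0])**2 + (y - p1[1])**2) / (2 * sigma**2))&lt;br /&gt;
    g2 = np.exp(-((x - p2[0])**2 + (y - p2[1])**2) / (2 * sigma**2))&lt;br /&gt;
    return g1 + g2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;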
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: Introduction to the Center for Nuclear Femtography (David)&lt;br /&gt;
* 9:30AM: HPC Activities at JLAB (Amber)&lt;br /&gt;
* 9:45AM: NASA/LaRC High Performance Computing Incubator (Cara)&lt;br /&gt;
* 10:00AM: Other HPC activities at NASA/LaRC: CM 2040 (Ed) and CFD 2030 Vision (Eric)&lt;br /&gt;
* 10:30AM: Optimistic Cloud &amp;amp; Edge Computing outside Hardware Boundaries (Dimitris)&lt;br /&gt;
* 11:15AM: Edge-Computing &amp;amp; Exascale-Era OS and computing activities at ANL (Pete)&lt;br /&gt;
* '''12:00PM: break, 15 min. (prep for lunch: $15 lunch available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:15PM: CRTC HPC activities for CNF, CFD 2030, and RTS by leveraging DoE's ANL Argo OS for exascale computing (Christos/Polykarpos)&lt;br /&gt;
* 1:00PM: Next Generation Imaging for CNF (Gagik)&lt;br /&gt;
* 1:30PM: Closing Remarks and Discussion (Moderator: Nikos)&lt;br /&gt;
* 2:15PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; currently, she focuses on performance analysis, power analysis, and resiliency. Valerie Taylor is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute of Electronics, Communications and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, gaining expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== David Richards ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: david_r.jpg|thumb|left|650px| '''David Richards:  Theoretical and Computational Physics at DOE's Jefferson Lab.''']]&lt;br /&gt;
'''Dr. David Richards is deputy director of the Theory Center (Theoretical and Computational Physics) at DOE's Jefferson Lab.''' Richards came to Jefferson Lab as a staff scientist and joint faculty member at Old Dominion University in 1999. He became a full-time staff scientist in 2002 and served as acting Theory Center leader from September 2009 through October 2010. He was appointed deputy director of the Theory Center in mid-October 2010. Richards' current research is aimed at garnering a better understanding of so-called &amp;quot;excited states&amp;quot;: subatomic particles that were once the familiar protons and neutrons, but now have additional energy. The experimental determination of their masses and properties is an important effort at Jefferson Lab. Richards and his colleagues use supercomputers at Oak Ridge National Lab, and the high-performance GPU-enabled (graphics processing unit) clusters at Jefferson Lab, to compute the masses and properties of these excited states from first principles, using lattice QCD. Comparing these calculations with experimental data provides crucial insights into the nature of matter and how the masses of so-called hadronic matter, such as protons and neutrons, arise from QCD. A particularly exciting recent calculation is that of the masses of so-called &amp;quot;exotic mesons,&amp;quot; mesons that cannot be constructed from straightforward excitations of a quark and an antiquark, the fundamental building blocks of QCD. The search for such mesons is the aim of the GlueX experiment with CEBAF at 12 GeV. Richards and his colleagues predict that there will be exotic mesons at a mass accessible to GlueX, underpinning the scientific imperative for the experiment. Throughout his career, Richards has received numerous awards, including scholarships at Cambridge and an Advanced Fellowship at Edinburgh. He serves on committees such as the Lattice QCD Executive Committee, and was co-organizer of Lattice 2008, the 26th International Symposium on Lattice Field Theory held in Williamsburg, and a panel convener for Forefront Questions in Nuclear Science and the Role of High Performance Computing, held in 2009 in Washington, D.C.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gagik Gavalian ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: gagik_gavalian.jpg|thumb|left|250px| '''Gagik Gavalian: Staff Scientist at Jefferson Lab and Assistant Professor at Old Dominion University.''']]&lt;br /&gt;
'''Dr. Gagik Gavalian''' attended Yerevan State University and graduated in 1996 with a major in Physics. He obtained his Ph.D. in Nuclear Physics from the University of New Hampshire in May 2004. Gagik then served as a Postdoctoral Research Associate at Old Dominion University until 2008, and as an Assistant Professor at Old Dominion until 2014, where he taught introductory physics and conducted research at Jefferson Lab. Gagik played an instrumental role in the Hall B data mining efforts, leading to multiple publications on studies of nuclear effects in electron-nucleus scattering. He joined Jefferson Lab as a staff scientist in 2014 and has been preparing the CLAS12 data analysis packages for expedient analysis. He also mentors doctoral candidates and college students. For the past four years, Gagik has worked on implementing CLAS12 detector reconstruction packages in the cloud-distributed CLARA framework. The CLAS12 detector was successfully commissioned in February 2017, with the reconstruction software successfully tested for full data production. For the past year (2017-2018), Gagik has led the development of physics analysis software for CLAS12 experimental data.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3995</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3995"/>
				<updated>2019-10-08T00:46:40Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Schedule (Draft) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]), '''100 Exploration Way, Hampton, VA 23666''', on '''Thursday, October 10th, 2019''', from 9am to 2pm. The Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data, with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., Edge- and Exascale-computing)&lt;br /&gt;
* '''11:45AM: break, 15 min. (prep for lunch: $15 lunch available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in Edge-Computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS by leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; currently, she focuses on performance analysis, power analysis, and resiliency. Valerie Taylor is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute of Electronics, Communications and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, gaining expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3994</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3994"/>
				<updated>2019-10-08T00:46:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop will be held at the National Institute of Aerospace ([https://www.nianet.org/ NIA]), '''100 Exploration Way, Hampton, VA 23666''', on '''Thursday, October 10th, 2019''', from 9am to 2pm. The Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data, with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Wing solution.png|350px|thumb|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;'''Metric-based adaptation results in laminar flow simulation'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft)=&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., Edge- and Exascale-computing)&lt;br /&gt;
* '''11:45AM: break, 15 min. (prep for lunch: $15 lunch available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in Edge-Computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS by leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
* Upload presentations here: https://bit.ly/2OspoiN&lt;br /&gt;
* [https://bit.ly/30V3SG2 Presentation Files]&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; currently, she focuses on performance analysis, power analysis, and resiliency. Valerie Taylor is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems during the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory, Los Alamos National Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. Furthermore, Pete Beckman became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010. He is currently a Senior Computer Scientist and Co-Director of the Northwestern Argonne Institute of Science and Engineering. Pete is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high performance and distributed computing and director of the Institute of Electronics, Communications and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems.  The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work.  Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and the lab’s IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cyber security.''' She completed her Bachelor of Science degree in Physics in 1984 at Miami University, followed by a Doctorate in Physics in 1990 at Florida State University. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment, gaining expertise in computational physics relevant to light sources and large-scale databases for astrophysics, as well as overseeing the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3977</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3977"/>
				<updated>2019-10-04T01:53:18Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* VA (ODU/JLAB/NASA/LaRC/VaTech) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data, with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the proton's momentum fraction carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. Their visualization is accomplished by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft)=&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., Edge- and Exascale-computing)&lt;br /&gt;
* '''11:45AM: break, 15 min. (prep for lunch: $15 lunch available upon request)'''&lt;br /&gt;
** '''Please bring $15 cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in Edge-Computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS by leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport.&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she joined Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering, as senior associate dean of academic affairs in the College of Engineering, and as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; currently, she focuses on performance analysis, power analysis, and resiliency. Valerie Taylor is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and mathematics from Anderson University (1985). A recognized global expert in high-end computing systems, he has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory, and he founded both the Linux cluster team at Los Alamos National Laboratory's Advanced Computing Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters; he later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking the National Science Foundation's production high-performance computing centers. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/NASA/LaRC/VaTech)==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering and was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute of Electronics, Communications and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made contributions to the global computing-systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. Nikolopoulos has advised or co-advised 22 Ph.D. students through completion and continues to advise six Ph.D. students. He has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his Ph.D. in aerospace engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA and in industry, academia, the Department of Defense, and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: CaraL.png|thumb|left|350px| '''Cara Leckey: NASA Langley High Performance Computing Incubator Project Lead''']]&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Amber Boehnlein ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: amber.jpg|thumb|left|350px| '''Amber Boehnlein: Jefferson Lab’s Chief Information Officer''']]&lt;br /&gt;
'''Amber Boehnlein is Jefferson Lab’s Chief Information Officer, responsible for the lab’s Information Technology Division and its IT systems, including scientific data analysis, high-performance computing, IT infrastructure, and cybersecurity.''' She completed her Bachelor of Science degree in physics at Miami University in 1984, followed by a doctorate in physics at Florida State University in 1990. Boehnlein arrived at Jefferson Lab in June 2015 with extensive knowledge, skills, and experience from her years at SLAC National Accelerator Laboratory, a Department of Energy appointment, and Fermi National Accelerator Laboratory. She led the Computing Division at SLAC from 2011 until accepting her current assignment; there she gained expertise in computational physics relevant to light sources and in large-scale databases for astrophysics, and oversaw the hardware computing systems for the High-Energy Physics (HEP) program. Boehnlein has a particular interest in issues concerning the management and use of research data. She serves on national and international advisory boards in areas related to research computing and particle physics.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:Amber.jpg&amp;diff=3976</id>
		<title>File:Amber.jpg</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:Amber.jpg&amp;diff=3976"/>
				<updated>2019-10-04T01:40:48Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=3968</id>
		<title>Tasks</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=3968"/>
				<updated>2019-10-02T21:06:24Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* adaptivity and Smoothness to CBC3D [In Progress] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CBC3D Docker [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;] ==&lt;br /&gt;
# Exploring the option of packaging CBC3D as a Docker image&lt;br /&gt;
# Comparing with PODM, which has a different set of parameters&lt;br /&gt;
# Using PODM as a template and making the necessary changes for CBC3D&lt;br /&gt;
# Using ParaView to visualize the meshes&lt;br /&gt;
&lt;br /&gt;
== Adaptivity and Smoothness to CBC3D [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;] ==&lt;br /&gt;
* CGAL&lt;br /&gt;
* NYU method&lt;br /&gt;
&lt;br /&gt;
=== CGAL ===&lt;br /&gt;
# For smoothing&lt;br /&gt;
# Researching CGAL's approach to [https://doc.cgal.org/latest/Mesh_3/index.html#fig__mesh3protectionimage3D smoothing]&lt;br /&gt;
&lt;br /&gt;
=== NYU Method ===&lt;br /&gt;
# Read Fotis' thesis&lt;br /&gt;
# Read NYU papers: https://arxiv.org/pdf/1908.03581.pdf&lt;br /&gt;
# Review the NYU code on GitHub&lt;br /&gt;
# Study the NYU papers and code to understand how to augment Fotis' code&lt;br /&gt;
&lt;br /&gt;
== ParaView plugin ==&lt;br /&gt;
# CNF project&lt;br /&gt;
# Take the weight of each tetrahedron and plot it (see the sketch below)&lt;br /&gt;
# The user should be able to choose the axis along which the weights are collected&lt;br /&gt;
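A minimal sketch of this plotting idea, assuming the adapted mesh is stored as a .vtu file carrying a per-tetrahedron 'weight' cell array; the file name, array name, and axis choice are assumptions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical sketch: accumulate the per-tetrahedron 'weight' array along&lt;br /&gt;
# a user-chosen axis and plot the resulting 1D profile.&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
import meshio&lt;br /&gt;
&lt;br /&gt;
mesh = meshio.read('adapted_mesh.vtu')             # assumed file name&lt;br /&gt;
tets = mesh.cells_dict['tetra']                    # (n_cells, 4) vertex ids&lt;br /&gt;
weights = mesh.cell_data_dict['weight']['tetra']   # assumed cell-array name&lt;br /&gt;
centroids = mesh.points[tets].mean(axis=1)         # one centroid per cell&lt;br /&gt;
&lt;br /&gt;
axis = 0                                           # 0 = x, 1 = y, 2 = z&lt;br /&gt;
hist, edges = np.histogram(centroids[:, axis], bins=50, weights=weights)&lt;br /&gt;
plt.stairs(hist, edges)&lt;br /&gt;
plt.xlabel('coordinate along axis %d' % axis)&lt;br /&gt;
plt.ylabel('accumulated weight')&lt;br /&gt;
plt.savefig('weight_profile.png')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;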
&lt;br /&gt;
== Slicer Extension -- [&amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;DONE&amp;lt;/span&amp;gt;] == &lt;br /&gt;
# Get the stand-alone Slicer code from GitHub&lt;br /&gt;
# Test the CBC3D Slicer extension with old code&lt;br /&gt;
# Test the CBC3D Slicer extension with new code&lt;br /&gt;
# Place the new code on Box&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=3965</id>
		<title>Tasks</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=Tasks&amp;diff=3965"/>
				<updated>2019-10-02T20:39:41Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* NYU Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Week One Action Items ==&lt;br /&gt;
&lt;br /&gt;
=== Slicer Extension -- [&amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;DONE&amp;lt;/span&amp;gt;] === &lt;br /&gt;
# Get the stand-alone Slicer code from GitHub&lt;br /&gt;
# Test the CBC3D Slicer extension with old code&lt;br /&gt;
# Test the CBC3D Slicer extension with new code&lt;br /&gt;
# Place the new code on Box&lt;br /&gt;
&lt;br /&gt;
=== NYU Method -- [&amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;In Progress&amp;lt;/span&amp;gt;]===&lt;br /&gt;
# Read Fotis' thesis&lt;br /&gt;
# Read NYU papers&lt;br /&gt;
# Review the NYU code on GitHub&lt;br /&gt;
# Study the NYU papers and code to understand how to augment Fotis' code&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:CaraL.png&amp;diff=3960</id>
		<title>File:CaraL.png</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:CaraL.png&amp;diff=3960"/>
				<updated>2019-10-02T19:17:05Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3958</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3958"/>
				<updated>2019-10-02T18:13:20Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Eric Nielsen */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict the spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool; they are visualized by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft) =&lt;br /&gt;
'''Thursday, October 10th:''' &lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD: JLAB's CNF and HPC activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD; e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD; e.g., edge and exascale computing)&lt;br /&gt;
* '''11:45AM: 15-minute break (a $15 lunch can be provided on request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in edge computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS, leveraging DoE ANL's Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next-generation imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the Electrical Engineering and Computer Science Department at Northwestern University, where she was a member of the faculty for 11 years. In 2003 she joined Texas A&amp;amp;M University, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, and held appointments as a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; currently she focuses on performance analysis, power analysis, and resiliency. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute for Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and mathematics from Anderson University (1985). A recognized global expert in high-end computing systems, he has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory, and he founded both the Linux cluster team at Los Alamos National Laboratory's Advanced Computing Laboratory and a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters; he later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking the National Science Foundation's production high-performance computing centers. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External Visitors from NASA Langley ==&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|300px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry and academia, and at the Department of Defense and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/VaTech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering who was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made lasting contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. He has advised or co-advised 22 Ph.D. students through completion, continues to advise six Ph.D. students, and has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3957</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3957"/>
				<updated>2019-10-02T18:11:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Eric Nielsen */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. The visualization was produced by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft) =&lt;br /&gt;
'''Thursday, October 10th:'''&lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., edge and exascale computing)&lt;br /&gt;
* '''11:45AM: 15-minute break (a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in edge computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS, leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she moved to Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, holding appointments as a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; she currently focuses on performance analysis, power analysis, and resiliency. She is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute of Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory at Los Alamos National Laboratory, as well as a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high-performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External Visitors from NASA Langley ==&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|250px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry and academia, and at the Department of Defense and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/VaTech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering who was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made lasting contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. He has advised or co-advised 22 Ph.D. students through completion, continues to advise six Ph.D. students, and has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3956</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3956"/>
				<updated>2019-10-02T18:11:02Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Eric Nielsen */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. The visualization was produced by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft) =&lt;br /&gt;
'''Thursday, October 10th:'''&lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., edge and exascale computing)&lt;br /&gt;
* '''11:45AM: 15-minute break (a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in edge computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS, leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she moved to Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, holding appointments as a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; she currently focuses on performance analysis, power analysis, and resiliency. She is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute of Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory at Los Alamos National Laboratory, as well as a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high-performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External Visitors from NASA Langley ==&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.jpg|thumb|left|350px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry and academia, and at the Department of Defense and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/VaTech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering who was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made lasting contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. He has advised or co-advised 22 Ph.D. students through completion, continues to advise six Ph.D. students, and has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3955</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3955"/>
				<updated>2019-10-02T18:10:38Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* Eric Nielsen */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. The visualization was produced by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft) =&lt;br /&gt;
'''Thursday, October 10th:'''&lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., edge and exascale computing)&lt;br /&gt;
* '''11:45AM: 15-minute break (a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in edge computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS, leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she moved to Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, holding appointments as a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; she currently focuses on performance analysis, power analysis, and resiliency. She is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute of Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory at Los Alamos National Laboratory, as well as a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high-performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External Visitors from NASA Langley ==&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: Eric.png|thumb|left|350px| '''Eric Nielsen: Senior Research Scientist, Computational AeroSciences Branch at NASA Langley Research Center''']]&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry and academia, and at the Department of Defense and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/VaTech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering who was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made lasting contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. He has advised or co-advised 22 Ph.D. students through completion, continues to advise six Ph.D. students, and has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=File:Eric.jpg&amp;diff=3954</id>
		<title>File:Eric.jpg</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=File:Eric.jpg&amp;diff=3954"/>
				<updated>2019-10-02T18:09:30Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	<entry>
		<id>https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3953</id>
		<title>CNF HPC Workshop</title>
		<link rel="alternate" type="text/html" href="https://crtc.cs.odu.edu/index.php?title=CNF_HPC_Workshop&amp;diff=3953"/>
				<updated>2019-10-02T18:09:11Z</updated>
		
		<summary type="html">&lt;p&gt;Jbest: /* External Visitors from NASA Langley */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Logo-hpc.png|right|255px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CNF HPC Workshop is expected to be '''highly interactive''', as participants will transfer know-how from the high-performance computing community to basic physics, in this case nuclear femtography.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Cnf pipeline.png|thumb|center|800px|The workflow for creating meshes of phase-space data with the software suite residing inside a Docker container. The tessellation data in the figure (right) depict a spatial distribution of up quarks as a function of the fraction of the proton's momentum carried by those quarks; bX and bY are spatial coordinates (in 1/GeV = 0.197 fm) defined in a plane perpendicular to the nucleon’s motion, x is the fraction of the proton’s momentum, and color denotes the probability density for finding a quark at a given (bX, bY, x). These preliminary data were generated by Dr. Sznajder and processed/tessellated with CRTC's CNF_I2M tool. The visualization was produced by Dr. Gavalian using ParaView.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:NT X min 5 limit 2e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 5e-3 interpolated.png&lt;br /&gt;
File:NT X min 0.5 limit 2e-3 interpolated.png&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Cross-section across the Y plane of the 3D spatial distribution of up quarks (see above)'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot; heights=200px&amp;gt;&lt;br /&gt;
File:Gaussian2 min 100 limit 1e-1 interpolated.png&lt;br /&gt;
File:Gaussian2 min 50 limit 1e-1 interpolated.png &lt;br /&gt;
File:Gaussian2 min 10 limit 1e-1 interpolated.png &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Benchmark of adapted meshes of a Gaussian with two peaks'''&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Schedule (Draft) =&lt;br /&gt;
'''Thursday, October 10th:'''&lt;br /&gt;
&lt;br /&gt;
* 9:00AM: Welcome and Introduction (Nikos)&lt;br /&gt;
* 9:15AM: TBD on: JLAB's CNF and HPC Activities&lt;br /&gt;
* 9:45AM: Cara/Ed/Eric on: NASA's HPC and related activities, e.g., CM 2040 and CFD Vision 2030&lt;br /&gt;
* 10:15AM: Valerie (TBD: e.g., power-aware next-generation HPC computing)&lt;br /&gt;
* 11:00AM: Pete (TBD: e.g., edge and exascale computing)&lt;br /&gt;
* '''11:45AM: 15-minute break (a $15 lunch can be made available upon request)'''&lt;br /&gt;
** '''Please bring $15 in cash if ordering lunch. Lunch will be ordered from Jason’s Deli and delivered to the workshop location.'''&lt;br /&gt;
* 12:00PM: Dimitris (VATech activities in edge computing)&lt;br /&gt;
* 12:45PM: CRTC HPC activities in CNF, CFD 2030, and RTS, leveraging DoE's ANL Argo OS for exascale computing&lt;br /&gt;
* 1:15PM: Next Generation Imaging for CNF (Christian/Gagik)&lt;br /&gt;
* 1:45PM: Closing Remarks (Nikos)&lt;br /&gt;
* 2:00PM: ANL visitors depart for the airport&lt;br /&gt;
&lt;br /&gt;
= Presenters =&lt;br /&gt;
== External Visitors from ANL ==&lt;br /&gt;
=== Valerie Taylor ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: valerie.jpg|thumb|left|350px| '''Valerie Taylor: Division Director/ Argonne Distinguished Fellow''']]&lt;br /&gt;
&lt;br /&gt;
'''Valerie Taylor is the director of the Mathematics and Computer Science Division at Argonne National Laboratory.''' She received her Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1991. She then joined the faculty of the Electrical Engineering and Computer Science Department at Northwestern University, where she remained for 11 years. In 2003, she moved to Texas A&amp;amp;M, where she served as head of the Department of Computer Science and Engineering and as senior associate dean of academic affairs in the College of Engineering, holding appointments as a Regents Professor and the Royce E. Wisenbaker Professor. Her research interests include high-performance computing, performance analysis and modeling, and power analysis; she currently focuses on performance analysis, power analysis, and resiliency. She is also a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pete Beckman ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: pete.jpeg|thumb|left|350px| '''Pete Beckman: Co-Director, Northwestern Argonne Institute of Science and Engineering''']]&lt;br /&gt;
&lt;br /&gt;
'''Pete Beckman is the co-director of the Northwestern-Argonne Institute of Science and Engineering.''' Dr. Beckman holds a Ph.D. in computer science from Indiana University (1993) and a BA in computer science, physics, and math from Anderson University (1985). He is a recognized global expert in high-end computing systems and has designed and built software and architectures for large-scale parallel and distributed computing systems over the past 25 years. Beckman helped found Indiana University’s Extreme Computing Laboratory. He also founded the Linux cluster team at the Advanced Computing Laboratory at Los Alamos National Laboratory, as well as a Turbolinux-sponsored research laboratory that developed the world’s first dynamic provisioning system for cloud computing and HPC clusters. He later became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea, and Slovenia. He joined Argonne National Laboratory in 2002. As director of engineering and chief architect for the TeraGrid, he designed and deployed the world’s most powerful Grid computing system for linking production high-performance computing centers for the National Science Foundation. He served as director of the Argonne Leadership Computing Facility from 2008 to 2010 and is currently a Senior Computer Scientist and Co-Director of the Northwestern-Argonne Institute of Science and Engineering. He is also a co-founder of the International Exascale Software Project (IESP).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External Visitors from NASA Langley ==&lt;br /&gt;
=== Eric Nielsen ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Eric Nielsen is a Senior Research Scientist with the Computational AeroSciences Branch at NASA Langley Research Center in Hampton, Virginia.''' He received his PhD in Aerospace Engineering from Virginia Tech and has worked at Langley for the past 25 years. Dr. Nielsen specializes in the development of computational aerodynamics software for the world's most powerful computer systems. The software has been distributed to thousands of organizations around the country and supports major national research and engineering efforts at NASA, in industry and academia, and at the Department of Defense and other government agencies. He has published extensively on the subject and has given presentations around the world on his work. Dr. Nielsen is a recipient of NASA's Exceptional Achievement and Exceptional Engineering Achievement Medals, as well as NASA Langley's HJE Reid Award for best research publication.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cara Leckey ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
'''Dr. Cara Leckey currently leads the NASA Langley High Performance Computing Incubator Project and serves as the Assistant Branch Head in the Nondestructive Evaluation Sciences Branch.''' Since joining NASA in 2010, her research has focused on computational nondestructive evaluation. She also serves as an Associate Technical Editor for the journals Materials Evaluation and Research in NDE. Cara received her Ph.D. in physics from the College of William and Mary in 2011.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== VA (ODU/JLAB/VaTech) ==&lt;br /&gt;
&lt;br /&gt;
=== Dimitrios Nikolopoulos ===&lt;br /&gt;
&amp;lt;li style=&amp;quot;display: inline-block;&amp;quot;&amp;gt;&lt;br /&gt;
[[File: dimitrios.jpg|thumb|left|350px| '''Dimitrios Nikolopoulos: Professor of Engineering at Virginia Tech''']]&lt;br /&gt;
&lt;br /&gt;
'''Dimitrios Nikolopoulos is a Professor of Engineering who was recently named the John W. Hancock Professor of Engineering by the Virginia Tech Board of Visitors.''' He received his bachelor’s degree, master’s degree, and Ph.D. from the University of Patras. He spent the past 10 years in Europe, most recently as a professor of high-performance and distributed computing and director of the Institute on Electronics, Communications, and Information Technology at Queen's University Belfast. He brings to Virginia Tech a world-class record of scholarship, teaching, service, and outreach. Through fundamental scholarship on computer systems, Nikolopoulos has made lasting contributions to the global computing systems research community. He has published 55 peer-reviewed journal articles and 122 peer-reviewed papers in highly regarded archival conference proceedings. He has advised or co-advised 22 Ph.D. students through completion, continues to advise six Ph.D. students, and has also advised 16 postdoctoral research fellows. He is a Distinguished Member of the Association for Computing Machinery (ACM) and a recipient of a Royal Society Wolfson Research Merit Award, a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and an IBM Faculty Award.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jbest</name></author>	</entry>

	</feed>