High-Performance Computing + AI

Azure high-performance computing is a complete set of computing, networking, and storage services integrated with workload orchestration resources to support the widest range of HPC applications.

Unlock Your Innovation

Azure offers HPC engineers, data scientists, and researchers both competitive price/performance compared with on-premises infrastructure and ease of use through a simple, consolidated interface. When combined with Microsoft's world-leading AI service APIs, Azure HPC empowers you to produce more intelligent outcomes faster than you ever thought possible.

May 29 – June 2, 2022 | Hamburg, Germany
ISC High Performance 2022 (ISC22)

From the world’s largest supercomputers to the vast data centers that power the cloud, high performance computing is an essential tool fueling the advancement of science. NVIDIA’s full-stack computing and networking platforms are helping answer complex questions across industries by harnessing the power of HPC, allowing users to make new discoveries. Industries such as healthcare, energy, the public sector, higher education and research, financial services, smart cities, and manufacturing are all tapping into the power of NVIDIA platforms to accelerate their innovations.

Understanding AI + AI Infrastructure

Learn how Microsoft Azure continues to be the only cloud that offers purpose-built AI supercomputers, facilitating your machine learning workloads from the earliest stages of development through to enterprise-grade production deployments that require full-blown MLOps.


May 31, 2022 at 11:30 AM (CET) | BOF Session | Hall E
InfiniBand In-Network Computing Technology and Roadmap

The session will discuss the latest developments around InfiniBand In-Network Computing technology and testing results from several leading supercomputers worldwide. It will also cover the integration of In-Network Computing into various programming models. The InfiniBand Trade Association has set the goals for future speeds, and this topic will also be covered.

Dhabaleswar Panda
Ohio State University

DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 500 papers. The MVAPICH2 MPI libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently used by more than 3,200 organizations worldwide (in 89 countries). The software has been downloaded more than 1.57 million times from the project's site and powers many clusters in the TOP500 list. High-performance and scalable solutions for Deep Learning frameworks and Machine Learning applications are available from https://hidl.cse.ohio-state.edu. Prof. Panda is an IEEE Fellow and recipient of the 2022 IEEE Charles Babbage Award. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.

Gilad Shainer

Gilad Shainer serves as senior vice president of networking at NVIDIA, focusing on high-performance computing, artificial intelligence, and InfiniBand technology. Mr. Shainer joined Mellanox in 2001 as a design engineer and has served in senior marketing management roles since 2005, first at Mellanox and now at NVIDIA. Mr. Shainer serves as the chairman of the HPC-AI Advisory Council organization, the president of the UCF and CCIX consortia, a member of the IBTA, and a contributor to the PCISIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking. He is a recipient of the 2015 R&D 100 award for his contribution to the CORE-Direct In-Network Computing technology and the 2019 R&D 100 award for his contribution to the Unified Communication X (UCX) technology. Gilad Shainer holds an MSc degree and a BSc degree in Electrical Engineering from the Technion Institute of Technology in Israel.

Daniel Gruner
University of Toronto

Daniel Gruner is the CTO at SciNet High Performance Computing Consortium and has more than twenty years’ experience working with a variety of programming languages, parallel computing, scientific modeling, software architecture, and administration and architecture of large Beowulf clusters and large shared-memory parallel computers. He has a doctorate in chemical physics from the University of Toronto.

Jithin Jose

Jithin Jose is a principal software engineer at Microsoft Azure HPC. His work is focused on the co-design of software and hardware building blocks for high performance computing platforms, and on designing communication runtimes that seamlessly expose hardware capabilities to programming models and middleware. Jithin's research interests include high-performance interconnects and protocols, parallel programming models, and cloud computing. He has published more than 25 papers in major peer-reviewed journals and international conferences related to these areas. Jithin received his Ph.D. in computer science from The Ohio State University.

Case Studies
HPC+AI Making a Difference

Technology only matters if it can make a difference in people's lives. Read how HPC+AI is impacting organizations and their ability to innovate.

Wildlife Protection Solutions with Microsoft AI for Earth

Wildlife Protection Solutions helps protect the wildest places with Microsoft AI for Earth

Van Gogh Museum with Azure

Van Gogh Museum creates bespoke portraits in WeChat with Azure

Dudek Cloud Workstations Case Study

Dudek uses cloud workstations to dramatically reduce drone image processing time, speed emergency response (GPU)

Rising to the Challenge
Duke University Ventilator Splitting Project

Ventilators became a scarce resource as the COVID-19 pandemic spread around the world. Professor Randles and her team at Duke University rushed to find a safe and easy way to effectively split a single ventilator between two patients. They partnered with Microsoft AI for Health to run over 800,000 compute hours in just 36 hours to build a life-saving solution.

White Papers
Deep Dive into HPC+AI Architecture

Take a look under the hood to understand more specifically how HPC+AI can help you achieve your vision.

Featured white paper

Learn how a unique partnership between a cloud provider and a silicon vendor is stepping up to meet a long unfilled need for the evolution of AI and HPC in this exclusive white paper.

Retail Meets AI

Learn more about a new class of large-scale AI models enabling next-generation AI

Unprecedented Scale

Learn more about our massively scalable AI VM

Customer Stories
Real HPC+AI Stories Happening Today

From Azure Machine Learning to RocketML and NVIDIA GPU clusters, check out some of the stories below to see how Microsoft Azure and NVIDIA are revolutionizing HPC in the workplace.

Real-Time Inference on GPUs with Azure Machine Learning and Triton

Learn more about faster valuation of derivatives

RocketML on Azure NVIDIA GPU Clusters

Learn More about RocketML Deep Learning

Inception Healthcare

Learn more about Microsoft Healthcare, Tools, and Partnership