Azure high-performance computing is a complete set of computing, networking, and storage services integrated with workload orchestration resources to support the widest range of HPC applications.
Azure offers HPC engineers, data scientists, and researchers both competitive price/performance compared with on-premises infrastructure and ease of use through a simple, consolidated interface. When combined with Microsoft’s world-leading AI service APIs, Azure HPC empowers you to produce more intelligent outcomes faster than you ever thought possible.
From the world’s largest supercomputers to the vast data centers that power the cloud, high-performance computing is an essential tool fueling the advancement of science. NVIDIA’s full-stack computing and networking platforms help answer complex questions across industries by harnessing the power of HPC, enabling users to make new discoveries. Industries such as healthcare, energy, public sector, higher education and research, financial services, smart cities, and manufacturing are all tapping into the power of NVIDIA platforms to accelerate their innovations.
Learn how Microsoft Azure continues to be the only cloud that offers purpose-built AI supercomputers, supporting your machine learning workloads from the earliest stages of development through to enterprise-grade production deployments that require full-blown MLOps.
Sherry Wang is a product management lead based in Seattle, Washington. She has been developing and launching advanced technology products for nearly 12 years and has worked at Meta, Qualcomm, and others. She currently works as a Senior Program Manager at Microsoft and leads product development of Azure GPU-optimized virtual machines for AI workloads.
The session will discuss the latest developments in InfiniBand In-Network Computing technology and test results from several leading supercomputers worldwide. It will also cover the integration of In-Network Computing into various programming models, as illustrated by the sketch below. The InfiniBand Trade Association has set goals for future speeds, and this topic will also be covered.
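For readers unfamiliar with the programming-model side of this topic, here is a minimal sketch (assuming MPI as the programming model, which the session abstract does not name explicitly) of the kind of collective operation that In-Network Computing offloads to the switch fabric. With an offload-capable MPI library the reduction can be performed in the network itself; the application code stays the same.

```c
/* Minimal MPI example of the collective pattern that In-Network
 * Computing accelerates: a global sum reduction (MPI_Allreduce).
 * Whether the reduction runs on the hosts or is offloaded to the
 * InfiniBand switches is a library/runtime configuration choice;
 * this source code does not change.
 *
 * Build: mpicc allreduce_demo.c -o allreduce_demo
 * Run:   mpirun -np 4 ./allreduce_demo
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one value; the collective computes the
     * global sum and returns it to every rank. */
    double local = (double)rank;
    double global_sum = 0.0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %.0f\n", size - 1, global_sum);

    MPI_Finalize();
    return 0;
}
```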
DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 500 papers. The MVAPICH2 MPI libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 3,200 organizations worldwide (in 89 countries). More than 1.57M downloads of this software have taken place from the project's site. This software is empowering many clusters in the TOP500 list. High-performance and scalable solutions for Deep Learning frameworks and Machine Learning applications are available from https://hidl.cse.ohio-state.edu. Prof. Panda is an IEEE Fellow and recipient of the 2022 IEEE Charles Babbage Award. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.
Gilad Shainer serves as senior vice president of networking at NVIDIA, focusing on high-performance computing, artificial intelligence, and InfiniBand technology. Mr. Shainer joined Mellanox in 2001 as a design engineer and, since 2005, has served in senior marketing management roles at Mellanox and now NVIDIA. Mr. Shainer serves as the chairman of the HPC-AI Advisory Council organization, the president of the UCF and CCIX consortia, a member of the IBTA, and a contributor to the PCI-SIG PCI-X and PCIe specifications. Mr. Shainer holds multiple patents in the field of high-speed networking. He is a recipient of the 2015 R&D 100 Award for his contribution to the CORE-Direct In-Network Computing technology and the 2019 R&D 100 Award for his contribution to the Unified Communication X (UCX) technology. Gilad Shainer holds MSc and BSc degrees in Electrical Engineering from the Technion Institute of Technology in Israel.
Daniel Gruner is the CTO of the SciNet High Performance Computing Consortium and has more than twenty years of experience with a variety of programming languages, parallel computing, scientific modeling, software architecture, and the administration and architecture of large Beowulf clusters and large shared-memory parallel computers. He holds a doctorate in chemical physics from the University of Toronto.
Jithin Jose is a principal software engineer at Microsoft Azure HPC. His work focuses on the co-design of software and hardware building blocks for high-performance computing platforms and on designing communication runtimes that seamlessly expose hardware capabilities to programming models and middleware. Jithin's research interests include high-performance interconnects and protocols, parallel programming models, and cloud computing. He has published more than 25 papers in major peer-reviewed journals and international conferences related to these areas. Jithin received his Ph.D. in computer science from The Ohio State University.
Technology only matters if it can make a difference in people's lives. Read how HPC+AI is impacting organizations and their ability to innovate.
Wildlife Protection Solutions helps protect the wildest places with Microsoft AI for Earth
Van Gogh Museum creates bespoke portraits in WeChat with Azure
Dudek uses cloud workstations to dramatically reduce drone image processing time, speed emergency response (GPU)
Ventilators have become a scarce resource as the COVID-19 pandemic spreads around the world. Professor Randles and her team at Duke University rushed to find a safe and easy way to effectively split a single ventilator between two patients. They partnered with Microsoft AI for Health to run more than 800,000 compute hours in just 36 hours to build a life-saving solution.
Take a look under the hood to understand more specifically how HPC+AI can help you achieve your vision.
Learn how a unique partnership between a cloud provider and a silicon vendor is stepping up to meet a long unfilled need for the evolution of AI and HPC in this exclusive white paper
Learn more about a new class of large-scale AI models enabling next-generation AI
Learn more about our massively scalable AI VM
From Azure Machine Learning to RocketML and NVIDIA GPU Clusters, check out some of the stories below to see how Microsoft Azure and NVIDIA are revolutionizing HPC in the workplace
Learn more about faster valuation of derivatives
Learn more about RocketML Deep Learning
Learn more about Microsoft Healthcare, Tools, and Partnership
See how some industries are leveraging Cloud and GPUs to solve real-world problems
Providing medical personnel with detailed information to begin treatment swiftly
Whiteboard coordinator deploys GPU-powered systems to fend off the coronavirus
An extremely fast model without sacrificing accuracy