oneAPI in the News
Get up to date on the latest news on oneAPI, check out podcasts and share news.
Today, as part of the oneAPI industry initiative, we released additional open source math interfaces. The goal of open-sourcing the oneAPI Math Kernel Library (oneMKL) interface is to address the lack of an industry-standard interface and provide a single, cross-architecture API for CPUs and accelerators.
The new collaboration puts teeth into Intel’s promises of hardware-agnostic AI.
After more than a year in beta, Intel has finished version 1.0 of its unified programming model oneAPI.
A oneAPI Academic Center of Excellence (CoE) is now established at the Heidelberg University Computing Center (URZ). The new CoE will conduct research supporting the oneAPI industry initiative to create a uniform, open programming model for heterogeneous computer architectures.
While general purpose processors are still the backbone of the world’s computing infrastructure, accelerators are becoming more mainstream because they provide distinct advantages for certain classes of workloads. But this growth in usage often comes attached to a fully proprietary software stack that locks applications into a single vendor solution and requires significant effort to port to each additional system. To democratize accelerator programming, the industry needs a standards-based solution that allows for both portability and a close-to-the-metal programming interface for peak performance.
After being announced at the end of 2018 and then going into beta last year, oneAPI 1.0 is now official: an open-source, standards-based unified programming model designed to support Intel’s range of hardware from CPUs to GPUs to other accelerators like FPGAs. Intel’s oneAPI initiative has been one of several exciting software efforts led by the company in recent years while it continues to serve as one of the world’s largest contributors to open-source software.
Using oneAPI for Reconstruction Algorithms
The latest Intel oneAPI software release is a new monthly update to their LLVM-based oneAPI Data Parallel C++ (DPC++) compiler.
Intel’s oneAPI DPC++ Compiler 2020-09 release now defaults to the SYCL 2020 standard; USM address spaces are now enabled by default for FPGAs; a dead-argument-elimination optimization has been added; union types are now supported as kernel parameters; and there are other SYCL compiler improvements.
Along with this week marking the release of oneAPI Level Zero 1.0, the oneAPI Data Parallel C++ compiler has seen its newest tagged release.
The Intel oneAPI DPC++ Compiler is the company’s LLVM-based compiler around their Data Parallel C++ initiative for oneAPI built atop Khronos’ SYCL single source programming standard and ISO C++.
It looks like Intel will soon be tagging their oneAPI Level Zero specification as version 1.0.
At the end of last year Intel published the oneAPI Level Zero specification as a low-level API for direct-to-metal interfaces for offload accelerators like FPGAs and GPUs. In the months since, they have continued advancing the Level Zero interface and implementation within the Intel software stack (along with the other oneAPI components at large), and it’s looking like Level Zero v1.0 is around the corner.
A few days back we wrote of Intel’s ISPC compiler landing GPU code generation support for their UHD/Iris/Xe Graphics from Gen9 Skylake and beyond. Following that code being merged, ISPC 1.14.0 was quickly tagged and released for the Implicit SPMD Program Compiler. See more details on the GPU code landing in the aforelinked article. It’s an exciting milestone and another great Intel software achievement playing into their oneAPI efforts.
Besides the code for Intel’s oneAPI itself being open-source, the company is being surprisingly open about supporting usage even outside of x86_64 CPUs. In addition to the likes of getting Intel oneAPI / Data Parallel C++ on NVIDIA GPUs and other “open” efforts around APIs, they have shown willingness to see different oneAPI components working on non-x86_64 architectures.
Intel is the prime vendor for the first US exascale supercomputer, the Aurora system, scheduled for delivery in 2021 at Argonne National Lab. The late Rich Brueckner of insideHPC caught up with Intel’s senior principal engineer and chief architect for HPC, Robert Wisniewski, to learn more.
Intel’s open-source Compute Runtime stack for providing OpenCL and oneAPI Level Zero support for their graphics hardware has now rolled out support for the DG1 Xe discrete graphics card. Building off the DG1 support that has materialized for the Linux kernel and other components (most recently the IGC graphics compiler), today’s release of the Intel Compute Runtime has DG1 support in place.
With the growth of AI, machine learning, and data-centric applications, the industry needs a programming model that allows developers to take advantage of rapid innovation in processor architectures. TensorFlow supports the oneAPI industry initiative and its standards-based open specification. oneAPI complements TensorFlow’s modular design and provides increased choice of hardware vendor and processor architecture, and faster support of next-generation accelerators. TensorFlow uses oneAPI today on Xeon processors and we look forward to using oneAPI to run on future Intel architectures.
At the start of this week’s ISC High Performance conference, the Swedish e-Science Research Center (SeRC) is delighted to announce that it is Intel’s first oneAPI academic Center of Excellence (COE).
Intel’s oneAPI crew just released version 2020-03 (though one would have thought it should be 2020-05) of their Data Parallel C++ (DPC++) compiler, and this release brings several new features including the NVIDIA CUDA back-end.
In this Code Together podcast, Nicole Huesman hosts Alice Chan from Intel and Hal Finkel from Argonne National Lab to discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning.
Data Parallel C++ (DPC++) is a high-level language designed for data parallel programming productivity. Get the essentials, including hands-on practice, in this self-guided training course within the Intel® DevCloud for oneAPI.
Codeplay has made significant contributions to enabling an open standard, cross-architecture interface for developers as part of the oneAPI industry initiative.
Software developers are looking more than ever at how they can accelerate their applications without having to write optimized processor specific code.
Intel engineers have outed a new version of oneDNN, the deep neural network library formerly known as DNNL and before that MKL-DNN, geared for high performance deep learning applications.
In this guest blog, Michael Wong, Chair of the SYCL Working Group and Vice President of Research and Development at Codeplay Software Ltd, reflects on the evolution of SYCL in the past two years.
Intel’s open-source teams have been issuing a slew of new packages in recent days.
The Intel Graphics Compiler (IGC) and now in turn the Intel Compute Runtime have updated their compiler stack against the newly released LLVM Clang 10.0.
GPUs offer the promise of tremendous compute power for HPC applications (like AI and DL/ML) … the majority of which are developed to run only on high-end CPUs. So how does a developer run AI apps on both CPU and Xe GPU platforms?
The oneAPI specification v0.7 has been released, which defines the programming interface for core elements of oneAPI, including the DPC++ compiler, libraries, and Level Zero driver. This latest release includes several enhancements to DPC++ including 10 new language extensions, as well as updates to many of the libraries, among other improvements.
Last week Intel released an initial set of micro-benchmarks for their oneAPI Level Zero and with L0 support being plumbed into their open-source Intel Compute Runtime, this weekend I started toying around with some Level Zero benchmarks on a variety of Intel processors.
To address the lack of an industry-standard interface for math libraries and provide a single, cross-architecture API for CPUs and accelerators, Intel released the oneAPI Math Kernel Library (oneMKL) open source interface.
Programming languages are a dime a dozen; throw a rock in any direction and you’ll hit one. Question is … can you use any of them to program data-centric applications that are deployable across CPUs, GPUs, FPGAs, and AI accelerators? You can now.
Today Intel introduced the oneAPI DevCloud to make it easier and more productive for coders currently working from home.
As part of its Virtual Game Developers Conference (GDC) 2020, Intel has put a presentation online detailing the features of its oneAPI Rendering Toolkit that are applicable for games. These libraries include Embree, OSPRay, Open VKL, OpenSWR and Open Image Denoise. Intel also announced that some will receive GPU support soon.
In this podcast, the Radio Free HPC team looks at Intel’s oneAPI project.
Intel’s open-source Compute Runtime, which provides OpenCL and now oneAPI support on Linux, has added oneAPI Level Zero support.
Intel has added bare-metal oneAPI support to its open-source Graphics Compute Runtime for OpenCL and oneAPI, according to a Phoronix report on Monday. This brings oneAPI Level Zero to Linux.
In this video from the Intel HPC Developer Conference, Bill Savage from Intel presents: oneAPI: Single Programming Model to Deliver Cross-Architecture Performance.
At The Next FPGA Platform event in San Jose, California on January 22, Intel PSG CTO Jose Alvarez outlined the three levels of heterogeneous integration.
In this article, we’ll dive into the newly announced oneAPI, a single, unified programming model that aims to simplify development across multiple architectures, such as CPUs, GPUs, FPGAs and other accelerators.
Writing software to run efficiently on today’s heterogeneous compute architectures is an ongoing challenge made increasingly difficult by the growing number of processor and accelerator choices.
Codeplay has been a part of the SYCL™ community from the beginning, and our team has worked with peers from some of the largest semiconductor vendors including Intel and Xilinx for the past 5 years to define the SYCL standard.
The Khronos SYCL standard as a single-source C++-based programming model for OpenCL is one of the exciting elements for Intel’s GPU compute plans with the forthcoming Xe graphics cards and fits into their oneAPI umbrella.
The SYCL programming model from Khronos is a single-source C++ open-standard programming model for programming heterogeneous systems.
Moving an application to a new processor type or chip vendor means creating an entirely new code base.