oneAPI in the News
Get up to date on the latest news on oneAPI, check out podcasts and share news.
Intel announced the winners of the Great Cross-Architecture Challenge, a collaboration with the European Organization for Nuclear Research (CERN) and Argonne National Laboratory, and run by CodeProject.
Released this week was Intel’s open-source oneAPI Level Zero v1.2.3, the headers and loader based on their public oneAPI Level Zero specification.
The University of Heidelberg, a oneAPI Center of Excellence, posted a blog about the hipSYCL 0.9.0 release. hipSYCL 0.9.0 incorporates features from the SYCL 2020 specification that were pioneered in oneAPI’s Data Parallel C++, which provides a unified cross-architecture, cross-vendor programming model.
A set of Advanced Ray Tracing APIs are being made available for comment and inclusion in the oneAPI Specification. The rapid growth of ray tracing compute across film, scientific visualization, design, and gaming suggests adding these APIs to the oneAPI specification for XPU architectures will help foster robust and efficient development in this area.
Out today is a new release of Intel’s open-source oneDNN library, a deep neural network library for assembling deep learning applications.
Among Intel’s many open-source software accomplishments for 2020 was introducing OSPray Studio as part of oneAPI. OSPray Studio builds atop the existing OSPray ray-tracing engine and inter-connected oneAPI Rendering Toolkit components to offer an open-source scene graph application for interactive visualizations and ray-tracing based rendering.
The number of CPUs in a server is growing, and so is the number of vendors that make those processors.
SYCL is an open industry standard for programming a heterogeneous system. The design of SYCL allows standard C++ source code to be written such that it can run on either a heterogeneous device or on the host.
In this discussion series, we’ll highlight achievements in optimizing code to run on GPUs and provide developers with lessons learned to help them overcome any initial hurdles.
This first episode focuses on preparing an earthquake risk assessment application for exascale computing and features Houjun Tang of Lawrence Berkeley National Laboratory and Brian Homerding of the Argonne Leadership Computing Facility (ALCF).
Modern deep learning models are growing at an exponential rate, with parameter counts climbing from millions into the billions. To train such large models within hours, distributed training is the better option.
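As a rough illustration of the synchronous data-parallel approach behind distributed training, each worker computes gradients on its own shard of the batch and the gradients are averaged before a shared update. This is a minimal pure-Python sketch with hypothetical helper names, not the API of any real framework (production stacks use tools such as Horovod or torch.distributed):

```python
# Toy synchronous data-parallel training for a 1-D model y = w * x.
# All function and variable names here are illustrative only.

def local_gradient(weights, shard):
    # Least-squares gradient computed by one worker on its data shard.
    w = weights[0]
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return [g / len(shard)]

def train_step(weights, shards, lr=0.01):
    # Every worker computes a gradient on its own shard.
    grads = [local_gradient(weights, s) for s in shards]
    # All-reduce: average the gradients across workers.
    avg = [sum(g[i] for g in grads) / len(grads) for i in range(len(weights))]
    # All workers then apply the same averaged update.
    return [w - lr * gi for w, gi in zip(weights, avg)]

# Data for the true model y = 3x, split across two simulated workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
weights = [0.0]
for _ in range(200):
    weights = train_step(weights, shards)
print(round(weights[0], 3))  # converges toward 3.0
```

The key property is that every worker sees the same averaged gradient, so the replicas stay in lockstep while each only touches a fraction of the data.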
The Academy of Motion Picture Arts and Sciences has awarded a Scientific and Technical Achievement award to Intel’s Embree ray tracing library, in recognition of its role as a contributing innovation in the moviemaking process.
Intel and Facebook previously collaborated to enable BF16, a first-class data type in PyTorch. It supports basic math and tensor operations and adds CPU optimization with multi-threading, vectorization, and neural network kernels from oneAPI Deep Neural Network Library (oneDNN, formerly known as MKL-DNN).
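BF16 (bfloat16) keeps float32’s 8-bit exponent but only 8 bits of significand, so it can be pictured as a float32 with the low 16 bits dropped. The following pure-Python sketch illustrates the format only; it is not how PyTorch or oneDNN implement it, and it truncates rather than rounds to nearest as hardware typically does:

```python
import struct

def bf16_truncate(x: float) -> float:
    """Emulate bfloat16 by zeroing the low 16 bits of the
    float32 encoding (round-toward-zero, for illustration)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(bf16_truncate(1.0))      # 1.0 (exactly representable)
print(bf16_truncate(3.14159))  # 3.140625 -- only ~3 decimal digits survive
```

Because the exponent field is unchanged, BF16 covers the same dynamic range as float32, which is why deep learning workloads tolerate the reduced precision so well.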
Intel oneAPI is a massive collection of very high-quality developer tools, and it’s free to use! oneAPI is Intel’s new developer ecosystem, something the company has been working on for a few years; it went “gold” with release 1.0 at the end of 2020.
Investigations, conducted together with scientists at CERN, show promising results – with breakthrough performance – in their pursuit of faster Monte Carlo based simulations, which are an important part of many scientific, engineering, and financial applications.
Intel’s open-source oneAPI Data Parallel C++ compiler saw a Christmas Day update with the 2020-12 monthly update.
Intel’s Deep Neural Network Library, now known as oneDNN as part of the oneAPI suite (and formerly known as MKL-DNN and DNNL), has reached version 2.0 as an open-source project.
The open-source Intel Graphics Compiler (IGC), currently used by Intel’s oneAPI Level Zero and OpenCL implementations and likely to see use by Intel’s Mesa driver in 2021, has a new feature dubbed “IMF LA” that aims to improve performance and close the gap with Windows.
Following a November announcement, Intel today released production versions of the Intel oneAPI toolkits for developing high-performance, cross-architecture applications for Intel CPUs, GPUs and FPGAs, collectively described as XPUs.
Applications that make use of deep learning processes (hereinafter called “DL processes”) normally consist of a software stack formed from two layers: a framework layer and a library layer. When a user wants to run an application that uses a DL process, they use an API provided by the framework to define the neural network for the process to run and to describe processing details.
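The two-layer stack described above can be sketched as follows. All names here are hypothetical stand-ins: in a real stack the framework layer would be something like TensorFlow or PyTorch, and the library layer would be architecture-tuned kernels such as oneDNN’s:

```python
# Library layer: low-level primitives (stand-ins for tuned kernels).
def lib_matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def lib_relu(m):
    return [[max(0.0, v) for v in row] for row in m]

# Framework layer: the API the user sees. It describes the network
# and dispatches each operation to the library layer underneath.
class TinyNet:
    def __init__(self, weights):
        self.weights = weights

    def forward(self, x):
        return lib_relu(lib_matmul(x, self.weights))

net = TinyNet([[1.0], [-2.0]])
print(net.forward([[3.0, 1.0]]))  # [[1.0]]
```

The separation means the framework can stay hardware-agnostic while the library layer is swapped or optimized per architecture, which is exactly the seam that oneDNN occupies.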
This tutorial provides hands-on experience programming CPUs, GPUs and FPGAs using a unified, standards-based programming model: oneAPI. oneAPI includes a cross-architecture language: Data Parallel C++ (DPC++). DPC++ is an evolution of C++ that incorporates the SYCL language with extensions for Unified Shared Memory (USM), ordered queues and reductions, among other features. oneAPI also includes libraries for API-based programming, such as domain-specific libraries, math kernel libraries and Threading Building Blocks (TBB).
BittWare, a Molex company, has launched the IA-840F, its first Intel Agilex-based FPGA card designed to deliver significant performance-per-watt improvements for next-generation data centre, networking and edge compute workloads.
Intel and Argonne National Laboratory said they will co-design and validate exascale-class applications using the chipmaker’s GPUs, microarchitecture and developer tools.
Bittware today unveiled the IA-840F, the company’s Intel Agilex-based FPGA card designed to deliver performance-per-watt improvements for data center, networking and edge compute workloads.
Intel today announced key milestones in its multiyear journey to deliver a mix of architectures with a unified software experience. The company announced the gold release of Intel® oneAPI toolkits coming in December, and new capabilities in its software stack as part of Intel’s combined hardware and software design approach.
Jeff McVeigh, vice president of data center XPU products and solutions at Intel, said the goal is to provide a standard application programming interface (API) for accessing the compute capabilities of any processor regardless of what company may have manufactured it.
Intel made several announcements as part of its unveiling of oneAPI Gold and a new GPU product intended for the video processing and Android cloud gaming market.
Yesterday was a very important day for the chip giant, not only because of the introduction of the Intel Server GPU, but also because the Santa Clara company showed the world its new vision for the CPU, GPU, FPGA and other important accelerators in the professional sector. Intel believes it is time to abandon the individualistic approach and jump fully into a mix of architectures that makes it possible to create powerful, versatile and functional solutions.
Intel has debuted its first discrete graphics processing unit (GPU) for the data centre, Intel Server GPU, which is based on the Xe-LP architecture and is designed specifically for high-density, low-latency Android cloud gaming and media streaming.
Intel pulled the wraps off its ‘Intel Server GPU’ today, unveiling a discrete graphics card for servers manufactured by its partner H3C. The new card consists of four separate Iris Xe Max discrete graphics chips, formerly codenamed DG1, that are also used as discrete GPUs in laptops.
Intel took a step closer to realizing its XPU vision with the announcement of oneAPI, first teased at Supercomputing 2019, and the launch of its first data center-focused GPU today.
Intel is giving us a whole host of new server goodies, from GPUs to software stacks. Normally SemiAccurate focuses on the hardware, but today the software is of higher import.
Intel today is announcing their Server GPU for the data center based on their Xe-LP microarchitecture with an initial focus on high-density, low-latency Android cloud gaming and media streaming.
Intel today provided greater detail around its plans to bring a full line of GPUs (Xe) and associated programming environment to market. The biggest news from an HPC perspective was the introduction of oneAPI Gold, the first productized version of Intel’s programming platform for the Xe GPU line.
High performance computing has never had a better—nor a tougher—year. Computing centers and data scientists have joined the fight against COVID-19—from decoding the virus’s RNA to modeling its transmission and accelerating treatment options.
We’re proud to announce the first version of oneAPI.jl, a Julia package for programming accelerators with the oneAPI programming model. It is currently available for select Intel GPUs, including common integrated ones, and offers a similar experience to CUDA.jl.
This past week Intel began adding Alder Lake support to their Linux graphics driver and that also continued on the compute side with the Intel Compute-Runtime receiving initial support for Alder Lake S “ADLS” too.
Since picking up the Dell XPS 13 9310 for delivering Tiger Lake Linux benchmarks, most of the focus so far has been on overall processor performance; this article is our first deep dive into Gen12 Xe Graphics performance on Linux with Intel’s fully open-source graphics and compute stack.
Intel issued a notable open-source Compute Runtime stack update today that provides OpenCL and oneAPI Level Zero support for the company’s graphics processors from Xe/Gen12 graphics back through Gen8 Broadwell hardware.
Today, as part of the oneAPI industry initiative, we released additional open source math interfaces. The goal of open-sourcing the oneAPI Math Kernel Library (oneMKL) interface is to address the lack of an industry-standard interface and provide a single, cross-architecture API for CPUs and accelerators.
The new collaboration puts teeth into Intel’s promises of hardware-agnostic AI.
After more than a year in beta, Intel has finished version 1.0 of its unified programming model oneAPI.
CERN openlab Summer Student programme 2020
A oneAPI Academic Center of Excellence (CoE) is now established at the Heidelberg University Computing Center (URZ). The new CoE will conduct research supporting the oneAPI industry initiative to create a uniform, open programming model for heterogeneous computer architectures.
While general purpose processors are still the backbone of the world’s computing infrastructure, accelerators are becoming more mainstream because they provide distinct advantages for certain classes of workloads. But this growth in usage often comes attached to a fully proprietary software stack that locks applications into a single vendor solution and requires significant effort to port to each additional system. To democratize accelerator programming, the industry needs a standards-based solution that allows for both portability and a close-to-the-metal programming interface for peak performance.
After announcing oneAPI at the end of 2018 and then going into beta last year, oneAPI 1.0 is now official for this open-source, standards-based unified programming model designed to support Intel’s range of hardware from CPUs to GPUs to other accelerators like FPGAs. Intel’s oneAPI initiative has been one of several exciting software efforts led by the company in recent years while continuing to serve as one of the world’s largest contributors to open-source software.
Using oneAPI for Reconstruction algorithms
The latest Intel oneAPI software release is a new monthly update to their LLVM-based oneAPI Data Parallel C++ (DPC++) compiler.
Intel’s oneAPI DPC++ Compiler 2020-09 release now defaults to the SYCL 2020 standard, enables USM address spaces by default for FPGAs, adds a dead-argument-elimination optimization, supports union types as kernel parameters, and brings other SYCL compiler improvements.
Along with this week marking the release of oneAPI Level Zero 1.0, the oneAPI Data Parallel C++ compiler has seen its newest tagged release.
The Intel oneAPI DPC++ Compiler is the company’s LLVM-based compiler around their Data Parallel C++ initiative for oneAPI built atop Khronos’ SYCL single source programming standard and ISO C++.
It looks like Intel will soon be tagging their oneAPI Level Zero specification as version 1.0.
At the end of last year Intel published the oneAPI Level Zero specification as a low-level API for direct-to-metal interfaces for offload accelerators like FPGAs and GPUs. In the months since they have continued advancing the Level Zero interface and implementation within the Intel software stack (along with the other oneAPI components at large) while it’s looking like Level Zero v1.0 is around the corner.
A few days back we wrote of Intel’s ISPC compiler landing GPU code generation support for their UHD/Iris/Xe Graphics from Gen9 Skylake and beyond. Following that code being merged, version 1.14.0 of the Implicit SPMD Program Compiler was quickly tagged; see the linked article for more details on the GPU code landing. It’s an exciting milestone and another great Intel software achievement playing into their oneAPI efforts.
Besides the code itself to Intel’s oneAPI being open-source, the company is being surprisingly open about its support even for areas of usage outside of x86_64 CPUs. In addition to the likes of getting Intel oneAPI / Data Parallel C++ on NVIDIA GPUs and other “open” efforts around APIs, they have shown willingness to see different oneAPI components working on non-x86_64 architectures.
Intel is the prime vendor for the first US exascale supercomputer, the Aurora system, scheduled for delivery in 2021 at Argonne National Lab. The late Rich Brueckner of insideHPC caught up with Intel’s senior principal engineer and chief architect for HPC, Robert Wisniewski, to learn more.
Intel’s open-source Compute Runtime stack for providing OpenCL and oneAPI Level Zero support for their graphics hardware has now rolled out support for the DG1 Xe discrete graphics card. Building off the DG1 support that has materialized for the Linux kernel and other components, most recently the IGC graphics compiler now supporting DG1, today’s release of the Intel Compute Runtime has DG1 support in place.
With the growth of AI, machine learning, and data-centric applications, the industry needs a programming model that allows developers to take advantage of rapid innovation in processor architectures. TensorFlow supports the oneAPI industry initiative and its standards-based open specification. oneAPI complements TensorFlow’s modular design and provides increased choice of hardware vendor and processor architecture, and faster support of next-generation accelerators. TensorFlow uses oneAPI today on Xeon processors and we look forward to using oneAPI to run on future Intel architectures.
At the start of this week’s ISC High Performance conference, the Swedish e-Science Research Center (SeRC) is delighted to announce that it is Intel’s first oneAPI academic Center of Excellence (COE).
Intel’s oneAPI crew just released version 2020-03 (though one would have thought it should be 2020-05) of their Data Parallel C++ (DPC++) compiler, and with this release come several new features, including the NVIDIA CUDA back-end.
In this Code Together podcast, Nicole Huesman hosts Alice Chan from Intel and Hal Finkel from Argonne National Lab to discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning.
Data Parallel C++ (DPC++) is a high-level language designed for data parallel programming productivity. Get the essentials, including hands-on practice, in this self-guided training course within the Intel® DevCloud for oneAPI.
This week is the eighth annual International Workshop on OpenCL, SYCL, Vulkan, and SPIR-V, and the event is available online for the very first time in its history thanks to the coronavirus pandemic.
Codeplay has made significant contributions to enabling an open standard, cross-architecture interface for developers as part of the oneAPI industry initiative.
Software developers are looking more than ever at how they can accelerate their applications without having to write optimized processor specific code.
Intel engineers have outed a new version of oneDNN, the library formerly known as DNNL and before that MKL-DNN for providing a deep neural network library geared for high performance deep learning applications.
In this guest blog, Michael Wong, Chair of the SYCL Working Group and Vice President of Research and Development at Codeplay Software Ltd, reflects on the evolution of SYCL in the past two years.
Intel’s open-source teams have been issuing a slew of new packages in recent days.
The Intel Graphics Compiler (IGC) and now in turn the Intel Compute Runtime have updated their compiler stack against the newly released LLVM Clang 10.0.
GPUs offer the promise of tremendous compute power for HPC applications (like AI and DL/ML) … of which a majority are developed to run only on high-end CPUs. So how does a developer run AI apps on both CPU and Xe GPU platforms?
The oneAPI specification v0.7 has been released, which defines the programming interface for core elements of oneAPI, including the DPC++ compiler, libraries, and Level Zero driver. This latest release includes several enhancements to DPC++ including 10 new language extensions, as well as updates to many of the libraries, among other improvements.
Last week Intel released an initial set of micro-benchmarks for their oneAPI Level Zero and with L0 support being plumbed into their open-source Intel Compute Runtime, this weekend I started toying around with some Level Zero benchmarks on a variety of Intel processors.
To address the lack of an industry-standard interface for math libraries and provide a single, cross-architecture API for CPUs and accelerators, Intel released the oneAPI Math Kernel Library (oneMKL) open source interface.
Programming languages are a dime a dozen; throw a rock in any direction and you’ll hit one. Question is … can you use any of them to program data-centric applications that are deployable across CPUs, GPUs, FPGAs, and AI accelerators? You can now.
Today Intel introduced the oneAPI DevCloud to make it easier and more productive for coders currently working from home.
As part of its Virtual Game Developers Conference (GDC) 2020, Intel has put a presentation online detailing the features of its oneAPI Rendering Toolkit that are applicable for games. These libraries include Embree, OSPRay, Open VKL, OpenSWR and Open Image Denoise. Intel also announced that some will receive GPU support soon.
In this podcast, the Radio Free HPC team looks at Intel’s oneAPI project.
Intel’s open-source Compute Runtime, which provides OpenCL and now oneAPI support on Linux, has added oneAPI Level Zero support.
Intel has added bare-metal oneAPI support to its open-source Graphics Compute Runtime for OpenCL and oneAPI, according to a Phoronix report on Monday. This brings oneAPI Level Zero to Linux.
In this video from the Intel HPC Developer Conference, Bill Savage from Intel presents: oneAPI: Single Programming Model to Deliver Cross-Architecture Performance.
At The Next FPGA Platform event in San Jose, California on January 22, Intel PSG CTO Jose Alvarez outlined the three levels of heterogeneous integration.
In this article, we’ll dive into the newly announced oneAPI, a single, unified programming model that aims to simplify development across multiple architectures, such as CPUs, GPUs, FPGAs and other accelerators.
Writing software to run efficiently on today’s heterogeneous compute architectures is an ongoing challenge made increasingly difficult by the growing number of processor and accelerator choices.
Codeplay has been a part of the SYCL™ community from the beginning, and our team has worked with peers from some of the largest semiconductor vendors including Intel and Xilinx for the past 5 years to define the SYCL standard.
The Khronos SYCL standard as a single-source C++-based programming model for OpenCL is one of the exciting elements for Intel’s GPU compute plans with the forthcoming Xe graphics cards and fits into their oneAPI umbrella.
The SYCL programming model from Khronos is a single-source C++ open-standard programming model for programming heterogeneous systems.
Moving an application to a new processor type or chip vendor means creating an entirely new code base.