OpenACC
Stable release: 2.5 / October 2015
Written in: C, C++, and Fortran
Operating system: Cross-platform
Platform: Cross-platform
Type: API
Website: www.openacc.org
OpenACC (for Open Accelerators) is a programming standard for parallel computing developed by Cray, CAPS, Nvidia and PGI. The standard is designed to simplify parallel programming of heterogeneous CPU/GPU systems.[1]
As in OpenMP, the programmer can annotate C, C++ and Fortran source code with compiler directives and additional functions to identify the regions that should be accelerated.[2] Like OpenMP 4.0 and newer, OpenACC code can target and run on both the CPU and the GPU.
Members of the OpenACC group have worked within the OpenMP standards group to merge the two specifications, creating a common specification that extends OpenMP to support accelerators in a future release of OpenMP.[3][4] These efforts resulted in a technical report[5] released for comment and discussion in time for the annual Supercomputing Conference (November 2012, Salt Lake City), and intended to address support for non-Nvidia accelerators with input from hardware vendors who participate in OpenMP.[6]
At ISC’12, OpenACC was demonstrated working on Nvidia, AMD and Intel accelerators, though without performance data.[7]
On November 12, 2012, at the SC12 conference, a draft of the OpenACC version 2.0 specification was presented.[8] Newly proposed capabilities included new controls over data movement (such as better handling of unstructured data and improvements in support for non-contiguous memory) and support for explicit function calls and separate compilation (allowing the creation and reuse of libraries of accelerated code). OpenACC 2.0 was officially released in June 2013.[9]
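For instance, the separate-compilation support added in 2.0 is exposed through the routine directive, which marks a function as compilable for, and callable from, accelerated regions. A minimal sketch (the function name and body are illustrative assumptions, not taken from the specification):
/* Mark a helper function so it can be compiled for the accelerator
   and called from device code in another compilation unit.          */
#pragma acc routine seq
float scale(float x, float factor)
{
    return x * factor;
}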
Version 2.5 of the specification was released in October 2015.[10]
Compiler support
Support for OpenACC is available in commercial compilers from PGI (from version 12.6) and, for Cray hardware only, from Cray.[7][11]
OpenUH[12] is an Open64-based open-source OpenACC compiler supporting C and Fortran, developed by the HPCTools group at the University of Houston.
OpenARC[13] is an open-source C compiler developed at Oak Ridge National Laboratory to support all features of the OpenACC 1.0 specification. An experimental[14] open-source compiler, accULL, is developed by the University of La Laguna (C language only).[15]
GCC support for OpenACC was slow in coming.[16] A GPU-targeting implementation from Samsung was announced in September 2013; this translated OpenACC 1.1-annotated code to OpenCL.[14] The announcement of a "real" implementation followed two months later, this time from NVIDIA and based on OpenACC 2.0.[17] This sparked some controversy, as the implementation would only target NVIDIA's own PTX assembly language, for which no open source assembler or runtime was available.[18][19] Experimental support for OpenACC/PTX did end up in GCC as of version 5.1.[20][21]
Usage
Similarly to OpenMP 3.x on homogeneous systems, or to the earlier OpenHMPP, the primary mode of programming in OpenACC is through directives.[22] The specifications also include a runtime library that defines several support functions. To use them, the programmer should include "openacc.h" in C or "openacc_lib.h" in Fortran[23] and then call the acc_init() function.
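A minimal sketch of this setup in C, assuming nothing beyond the header and the initialization calls just described:
/* Include the OpenACC header and initialise the runtime for the
   current device type before offloading any work.                 */
#include <openacc.h>

int main(void)
{
    acc_device_t devtype = acc_get_device_type(); /* type of the current device */

    acc_init(devtype);        /* set up the accelerator runtime              */
    /* ... directive-annotated compute regions would go here ...             */
    acc_shutdown(devtype);    /* release accelerator resources               */
    return 0;
}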
Directives
OpenACC defines an extensive list of pragmas (directives),[2] some of which are shown below (a combined usage sketch follows the list):
#pragma acc parallel
#pragma acc kernels
Both are used to define parallel computation kernels to be executed on the accelerator, with distinct semantics.[24][25]
#pragma acc data
Is the main directive to define and copy data to and from the accelerator.
#pragma acc loop
Is used to define the type of parallelism in a parallel or kernels region.
Some other directives include:
#pragma acc cache
#pragma acc update
#pragma acc declare
#pragma acc wait
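A brief sketch combining the data, parallel and loop directives above on a SAXPY-style loop (the function and array names are illustrative, not taken from the specification):
/* Copy x to the accelerator, copy y both ways, and parallelise the loop. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc data copyin(x[0:n]) copy(y[0:n])
    {
        #pragma acc parallel
        {
            #pragma acc loop
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }
    }
}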
Runtime API
A number of runtime API functions are also defined: acc_get_num_devices(), acc_set_device_type(), acc_get_device_type(), acc_set_device_num(), acc_get_device_num(), acc_async_test(), acc_async_test_all(), acc_async_wait(), acc_async_wait_all(), acc_init(), acc_shutdown(), acc_on_device(), acc_malloc(), acc_free().
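A few of these calls can be combined to query and select a device; a sketch in C (the choice of device number 0 is an assumption for illustration):
#include <stdio.h>
#include <openacc.h>

int main(void)
{
    acc_device_t type = acc_get_device_type();      /* type of the current device    */
    int          n    = acc_get_num_devices(type);  /* how many devices of that type */

    printf("%d device(s) of the current type available\n", n);

    if (n > 0) {
        acc_set_device_num(0, type);                /* select the first device       */
        printf("now using device %d\n", acc_get_device_num(type));
    }
    return 0;
}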
OpenACC generally takes care of organising the work for the target device; however, this can be overridden through the use of gangs and workers. A gang consists of workers and operates over a number of processing elements (as with a workgroup in OpenCL).
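As a sketch, the gang and worker levels can be requested explicitly on a loop; the clause values below are illustrative assumptions, not tuned settings:
/* Distribute the iterations over 32 gangs of 8 workers each
   (illustrative values; the compiler chooses them when omitted). */
#pragma acc parallel loop gang worker num_gangs(32) num_workers(8)
for (int i = 0; i < n; i++)
    y[i] = a * x[i] + y[i];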
References
- ↑ "Nvidia, Cray, PGI, and CAPS launch 'OpenACC' programming standard for parallel computing". The Inquirer. 4 November 2011.
- 1 2 "OpenACC standard version 2.0" (PDF). OpenACC.org. Retrieved 14 January 2014.
- ↑ "How does the OpenACC API relate to the OpenMP API?". OpenACC.org. Retrieved 14 January 2014.
- ↑ "How did the OpenACC specifications originate?". OpenACC.org. Retrieved 14 January 2014.
- ↑ "The OpenMP Consortium Releases First Technical Report". OpenMP.org. 5 November 2012. Retrieved 14 January 2014.
- ↑ "OpenMP at SC12". OpenMP.org. 29 August 2012. Retrieved 14 January 2014.
- 1 2 "OpenACC Group Reports Expanding Support for Accelerator Programming Standard". HPCwire. 20 June 2012. Retrieved 14 January 2014.
- ↑ "OpenACC Version 2.0 Posted for Comment". OpenACC.org. 12 November 2012. Retrieved 14 January 2014.
- ↑ "OpenACC 2.0 Spec | www.openacc.org". www.openacc.org. Retrieved 2016-03-23.
- ↑ "OpenACC Standards Group Announces Release of the 2.5 Specification; Member Vendors Add Support for ARM & x86 as Parallel Devices | www.openacc.org". www.openacc.org. Retrieved 2016-03-22.
- ↑ "OpenACC Standard to Help Developers to Take Advantage of GPU Compute Accelerators". Xbit laboratories. 16 November 2011. Retrieved 14 January 2014.
- ↑ "OpenUH Compiler". Retrieved 4 March 2014.
- ↑ "OpenARC Compiler". Retrieved 4 November 2014.
- 1 2 Larabel, Michael (30 September 2013). "GCC Support Published For OpenACC On The GPU". Phoronix.
- ↑ "accULL The OpenACC research implementation". Retrieved 14 January 2014.
- ↑ Larabel, Michael (4 December 2012). "OpenACC Still Not Loved By Open Compilers". Phoronix.
- ↑ Larabel, Michael (14 November 2013). "OpenACC 2.0 With GPU Support Coming To GCC". Phoronix.
- ↑ Larabel, Michael (15 November 2013). "NVIDIA, Mentor Graphics May Harm GCC". Phoronix.
- ↑ Larabel, Michael (21 November 2013). "In-Fighting Continues Over OpenACC In GCC". Phoronix.
- ↑ https://gcc.gnu.org/wiki/OpenACC
- ↑ Schwinge, Thomas (15 January 2015). "Merge current set of OpenACC changes from gomp-4_0-branch". gcc (Mailing list). gcc.gnu.org. Retrieved 15 January 2015.
- ↑ "Easy GPU Parallelism with OpenACC". Dr.Dobb's. 11 June 2012. Retrieved 14 January 2014.
- ↑ "OpenACC API QuickReference Card, version 1.0" (PDF). NVidia. November 2011. Retrieved 14 January 2014.
- ↑ "OpenACC Kernels and Parallel Constructs". PGI insider. August 2012. Retrieved 14 January 2014.
- ↑ "OpenACC parallel section VS kernels". CAPS entreprise Knowledge Base. 3 January 2013. Retrieved 14 January 2014.
External links
- http://www.openacc.org/
- Usage example from NVIDIA: part1, part2