Changes/Release Notes
On this page we provide a summary of the main API changes, new features and examples for each release of libCEED.
Current Main
The current `main` (formerly called `master`) branch contains bug fixes and additional features.
Interface changes
New features
- New HIP MAGMA backends for hipMAGMA library users: `/gpu/hip/magma` and `/gpu/hip/magma/det` (see the backend selection sketch after this list).
- Julia and Rust interfaces added, providing a nearly 1-1 correspondence with the C interface, plus some convenience features.
- New HIP backends for improved tensor basis performance: `/gpu/hip/shared` and `/gpu/hip/gen`.
- Static libraries can be built with `make STATIC=1` and the pkg-config file is installed accordingly.
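Runtime backend selection is unchanged: each of the new backends is requested by its resource string. Below is a minimal sketch (not from the release notes) of initializing libCEED with one of the new HIP backends via `CeedInit()`.

```c
#include <ceed.h>

int main(void) {
  Ceed ceed;
  // Request one of the new HIP backends; any other CEED resource string
  // ("/cpu/self", "/gpu/hip/magma", ...) may be substituted here
  CeedInit("/gpu/hip/gen", &ceed);
  // ... create CeedVectors, CeedElemRestrictions, CeedBases, CeedQFunctions,
  // and CeedOperators against this Ceed context ...
  CeedDestroy(&ceed);
  return 0;
}
```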
Performance improvements
Examples
- Solid mechanics mini-app example updated with traction boundary conditions.
v0.7 (Sep 29, 2020)
Interface changes
- Replace the limited `CeedInterlaceMode` with a more flexible component stride, `compstride`, in `CeedElemRestriction` constructors. As a result, the `indices` parameter has been replaced with `offsets` and the `nnodes` parameter has been replaced with `lsize`. These changes improve support for mixed finite element methods.
- Replace various uses of `Ceed*Get*Status` with `Ceed*Is*` in the backend API to match common nomenclature.
- Replace `CeedOperatorAssembleLinearDiagonal` with `CeedOperatorLinearAssembleDiagonal()` for clarity.
- Linear operators can be assembled as point-block diagonal matrices with `CeedOperatorLinearAssemblePointBlockDiagonal()`, provided in row-major form in an `ncomp` by `ncomp` block per node.
- Diagonal assembly interface changed to accept a `CeedVector` instead of a pointer to a `CeedVector` to reduce memory movement when interfacing with calling code.
- Added `CeedOperatorLinearAssembleAddDiagonal()` and `CeedOperatorLinearAssembleAddPointBlockDiagonal()` for improved future integration with codes such as MFEM that compose the action of CeedOperators external to libCEED.
- Added `CeedVectorTakeArray()` to sync and remove libCEED read/write access to an allocated array and pass ownership of the array to the caller. This function is recommended over `CeedVectorSyncArray()` when the `CeedVector` has an array owned by the caller that was set by `CeedVectorSetArray()` (see the sketch after this list).
- Added `CeedQFunctionContext` object to manage user QFunction context data and reduce copies between device and host memory.
- Added `CeedOperatorMultigridLevelCreate()`, `CeedOperatorMultigridLevelCreateTensorH1()`, and `CeedOperatorMultigridLevelCreateH1()` to facilitate creation of multigrid prolongation, restriction, and coarse grid operators using a common quadrature space.
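The intended ownership pattern for `CeedVectorTakeArray()` is sketched below; the helper name and surrounding setup are illustrative, while the set/take calls are the ones described above.

```c
#include <ceed.h>

// Minimal sketch: the caller lends libCEED an array it owns via
// CeedVectorSetArray(), then takes read/write access back.
void take_array_demo(Ceed ceed) {
  CeedVector vec;
  CeedScalar data[8] = {0};  // array owned by the calling code
  CeedVectorCreate(ceed, 8, &vec);
  CeedVectorSetArray(vec, CEED_MEM_HOST, CEED_USE_POINTER, data);
  // ... operator applications that read or write vec would go here ...
  CeedScalar *array;
  CeedVectorTakeArray(vec, CEED_MEM_HOST, &array);  // syncs; array == data
  CeedVectorDestroy(&vec);  // safe: vec no longer references data
}
```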
New features
- New HIP backend: `/gpu/hip/ref`.
- CeedQFunction support for user `CUfunction`s in some backends.
Performance improvements
- OCCA backend rebuilt to facilitate future performance enhancements.
- PETSc BPs suite improved to reduce noise due to multiple calls to `mpiexec`.
Examples
- Solid mechanics mini-app example updated with strain energy computation and more flexible boundary conditions.
Deprecated backends
- The `/gpu/cuda/reg` backend has been removed, with its core features moved into `/gpu/cuda/ref` and `/gpu/cuda/shared`.
v0.6 (Mar 29, 2020)
libCEED v0.6 contains numerous new features and examples, as well as expanded documentation in this new website.
New features
- New Python interface using CFFI provides a nearly 1-1 correspondence with the C interface, plus some convenience features. For instance, data stored in the `CeedVector` structure are available without copy as `numpy.ndarray`. Short tutorials are provided in Binder.
- Linear QFunctions can be assembled as block-diagonal matrices (per quadrature point, `CeedOperatorAssembleLinearQFunction()`) or to evaluate the diagonal (`CeedOperatorAssembleLinearDiagonal()`). These operations are useful preconditioning ingredients and are used in libCEED's multigrid examples.
- The inverse of separable operators can be obtained using `CeedOperatorCreateFDMElementInverse()` and applied with `CeedOperatorApply()`. This is a useful preconditioning ingredient, especially for Laplacians and related operators.
- New functions: `CeedVectorNorm()`, `CeedOperatorApplyAdd()`, `CeedQFunctionView()`, `CeedOperatorView()` (see the sketch after this list).
- Make public accessors for various attributes to facilitate writing composable code.
- New backend: `/cpu/self/memcheck/serial`.
- QFunctions using variable-length array (VLA) pointer constructs can be used with CUDA backends. (Single source is coming soon for OCCA backends.)
- Fix some missing edge cases in CUDA backend.
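As an illustration of the new vector functions, the sketch below (assumed usage, with an illustrative helper name) computes a vector norm with `CeedVectorNorm()`; `CEED_NORM_1`, `CEED_NORM_2`, and `CEED_NORM_MAX` select the norm type.

```c
#include <ceed.h>
#include <stdio.h>

// Fill a vector with a constant and compute its Euclidean norm
void norm_demo(Ceed ceed) {
  CeedVector vec;
  CeedScalar norm;
  CeedVectorCreate(ceed, 10, &vec);
  CeedVectorSetValue(vec, 3.0);             // set every entry to 3.0
  CeedVectorNorm(vec, CEED_NORM_2, &norm);  // sqrt(10 * 3.0^2)
  printf("2-norm: %f\n", norm);
  CeedVectorDestroy(&vec);
}
```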
Performance improvements
- MAGMA backend performance optimization and non-tensor bases.
- No-copy optimization in `CeedOperatorApply()`.
Interface changes
- Replace `CeedElemRestrictionCreateIdentity` and `CeedElemRestrictionCreateBlocked` with more flexible `CeedElemRestrictionCreateStrided()` and `CeedElemRestrictionCreateBlockedStrided()` (see the sketch after this list).
- Add arguments to `CeedQFunctionCreateIdentity()`.
- Replace ambiguous uses of `CeedTransposeMode` for L-vector identification with `CeedInterlaceMode`. This is now an attribute of the `CeedElemRestriction` (see `CeedElemRestrictionCreate()`) and no longer passed as `lmode` arguments to `CeedOperatorSetField()` and `CeedElemRestrictionApply()`.
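A sketch of the new strided constructor follows; the helper name and sizes are illustrative, and the argument order shown follows more recent libCEED releases, so it may differ slightly in v0.6 itself. A strided restriction describes element data addressed by fixed strides rather than an explicit `offsets` array.

```c
#include <ceed.h>

// 8 elements with 4 nodes each and 1 component, stored contiguously
void strided_restriction_demo(Ceed ceed) {
  CeedElemRestriction rstr;
  const CeedInt num_elem = 8, elem_size = 4, num_comp = 1;
  // strides between adjacent nodes, components, and elements
  const CeedInt strides[3] = {1, elem_size, elem_size * num_comp};
  CeedElemRestrictionCreateStrided(ceed, num_elem, elem_size, num_comp,
                                   num_elem * elem_size * num_comp,
                                   strides, &rstr);
  CeedElemRestrictionDestroy(&rstr);
}
```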
Examples
libCEED-0.6 contains greatly expanded examples with new documentation. Notable additions include:
- Standalone Ex2-Surface (`examples/ceed/ex2-surface`): compute the area of a domain in 1, 2, and 3 dimensions by applying a Laplacian.
- PETSc Area (`examples/petsc/area.c`): computes the surface area of domains (like the cube and sphere) by direct integration on a surface mesh; demonstrates geometric dimension different from topological dimension.
- PETSc Bakeoff problems and generalizations:
  - `examples/petsc/bpsraw.c` (formerly `bps.c`): transparent CUDA support.
  - `examples/petsc/bps.c` (formerly `bpsdmplex.c`): performance improvements and transparent CUDA support.
  - Bakeoff problems on the cubed-sphere (`examples/petsc/bpssphere.c`): generalizations of all CEED BPs to the surface of the sphere; demonstrates geometric dimension different from topological dimension.
- Multigrid (`examples/petsc/multigrid.c`): new p-multigrid solver with algebraic multigrid coarse solve.
- Compressible Navier-Stokes mini-app (`examples/fluids/navierstokes.c`; formerly `examples/navier-stokes`): unstructured grid support (using PETSc's `DMPlex`), implicit time integration, SU/SUPG stabilization, free-slip boundary conditions, and quasi-2D computational domain support.
- Solid mechanics mini-app (`examples/solids/elasticity.c`): new solver for linear elasticity, small-strain hyperelasticity, and globalized finite-strain hyperelasticity using p-multigrid with algebraic multigrid coarse solve.
v0.5 (Sep 18, 2019)
For this release, several improvements were made. Two new CUDA backends were added to the family of backends, of which the new `cuda-gen` backend achieves state-of-the-art performance using single-source CeedQFunction. From this release, users can define Q-Functions in a single source code independently of the targeted backend with the aid of a new macro `CEED_QFUNCTION` to support JIT (Just-In-Time) and CPU compilation of the user-provided CeedQFunction code; a sketch follows below. To allow a unified declaration, the CeedQFunction API has undergone a slight change: the `QFunctionField` parameter `ncomp` has been changed to `size`. This change requires setting the previous value of `ncomp` to `ncomp*dim` when adding a `QFunctionField` with eval mode `CEED_EVAL_GRAD`.
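The sketch below shows the single-source style the `CEED_QFUNCTION` macro enables, using the familiar pointwise mass operation (the function name and field layout are illustrative); the same source can be compiled natively for CPU backends or JIT-compiled for GPU backends.

```c
#include <ceed.h>

// Pointwise mass: v = qdata * u at each of the Q quadrature points
CEED_QFUNCTION(Mass)(void *ctx, const CeedInt Q,
                     const CeedScalar *const *in, CeedScalar *const *out) {
  const CeedScalar *qdata = in[0], *u = in[1];
  CeedScalar *v = out[0];
  CeedPragmaSIMD
  for (CeedInt i = 0; i < Q; i++) v[i] = qdata[i] * u[i];
  return 0;
}
```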
Additionally, new CPU backends were included in this release, such as the `/cpu/self/opt/*` backends (which are written in pure C and use partial E-vectors to improve performance) and the `/cpu/self/ref/memcheck` backend (which relies upon the Valgrind Memcheck tool to help verify that user CeedQFunctions have no undefined values).
This release also included various performance improvements, bug fixes, new examples, and improved tests. Among these improvements, vectorized instructions for CeedQFunction code compiled for CPU were enhanced by using `CeedPragmaSIMD` instead of `CeedPragmaOMP`, a CeedQFunction gallery and identity Q-Functions were introduced, and the PETSc benchmark problems were expanded to include handling of unstructured meshes. For this expansion, the prior version of the PETSc BPs, which only included data associated with structured geometries, was renamed `bpsraw`, and the new version of the BPs, which can handle data associated with any unstructured geometry, is called `bps`. Additionally, other benchmark problems, namely BP2 and BP4 (the vector-valued versions of BP1 and BP3, respectively) and BP5 and BP6 (the collocated versions of BP3 and BP4, respectively, for which the quadrature points are the same as the Gauss-Lobatto nodes), were added to the PETSc examples. Furthermore, another standalone libCEED example, called `ex2`, which computes the surface area of a given mesh, was added to this release.
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self/ref/serial` | Serial reference implementation |
| `/cpu/self/ref/blocked` | Blocked reference implementation |
| `/cpu/self/ref/memcheck` | Memcheck backend, undefined value checks |
| `/cpu/self/opt/serial` | Serial optimized C implementation |
| `/cpu/self/opt/blocked` | Blocked optimized C implementation |
| `/cpu/self/avx/serial` | Serial AVX implementation |
| `/cpu/self/avx/blocked` | Blocked AVX implementation |
| `/cpu/self/xsmm/serial` | Serial LIBXSMM implementation |
| `/cpu/self/xsmm/blocked` | Blocked LIBXSMM implementation |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
| `/ocl/occa` | OpenCL OCCA kernels |
| `/gpu/cuda/ref` | Reference pure CUDA kernels |
| `/gpu/cuda/reg` | Pure CUDA kernels using one thread per element |
| `/gpu/cuda/shared` | Optimized pure CUDA kernels using shared memory |
| `/gpu/cuda/gen` | Optimized pure CUDA kernels using code generation |
| `/gpu/magma` | CUDA MAGMA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (volume), ex2 (surface) |
| `mfem` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
| `petsc` | BP1, BP2, BP3, BP4, BP5, BP6 (CEED bakeoff problems), Navier-Stokes |
| `nek5000` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
v0.4 (Apr 1, 2019)
libCEED v0.4 was again made publicly available, in the second full CEED software distribution, release CEED 2.0. This release contained notable features, such as four new CPU backends, two new GPU backends, CPU backend optimizations, initial support for operator composition, performance benchmarking, and a Navier-Stokes demo.
The new CPU backends in this release came in two families. The `/cpu/self/*/serial` backends process one element at a time and are intended for meshes with a smaller number of high-order elements. The `/cpu/self/*/blocked` backends process blocked batches of eight interlaced elements and are intended for meshes with higher numbers of elements. The `/cpu/self/avx/*` backends rely upon AVX instructions to provide vectorized CPU performance. The `/cpu/self/xsmm/*` backends rely upon the LIBXSMM package to provide vectorized CPU performance. The `/gpu/cuda/*` backends provide GPU performance strictly using CUDA. The `/gpu/cuda/ref` backend is a reference CUDA backend, providing reasonable performance for most problem configurations. The `/gpu/cuda/reg` backend uses a simple parallelization approach, where each thread handles one finite element. Using just-in-time compilation, provided by nvrtc (the NVIDIA Runtime Compiler), and runtime parameters, this backend unrolls loops and maps memory addresses to registers. The `/gpu/cuda/reg` backend achieves good peak performance for 1D, 2D, and low-order 3D problems, but performance deteriorates very quickly when threads run out of registers.
A new explicit time-stepping Navier-Stokes solver was added to the family of libCEED examples in the `examples/petsc` directory (see the Compressible Navier-Stokes mini-app). This example solves the time-dependent Navier-Stokes equations of compressible gas dynamics in a static Eulerian three-dimensional frame, using structured high-order finite/spectral element spatial discretizations and explicit high-order time-stepping (available in PETSc). Moreover, the Navier-Stokes example was developed using PETSc, so that the pointwise physics (defined at quadrature points) is separated from the parallelization and meshing concerns.
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self/ref/serial` | Serial reference implementation |
| `/cpu/self/ref/blocked` | Blocked reference implementation |
| `/cpu/self/tmpl` | Backend template, defaults to `/cpu/self/ref/blocked` |
| `/cpu/self/avx/serial` | Serial AVX implementation |
| `/cpu/self/avx/blocked` | Blocked AVX implementation |
| `/cpu/self/xsmm/serial` | Serial LIBXSMM implementation |
| `/cpu/self/xsmm/blocked` | Blocked LIBXSMM implementation |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
| `/ocl/occa` | OpenCL OCCA kernels |
| `/gpu/cuda/ref` | Reference pure CUDA kernels |
| `/gpu/cuda/reg` | Pure CUDA kernels using one thread per element |
| `/gpu/magma` | CUDA MAGMA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (volume) |
| `mfem` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
| `petsc` | BP1 (scalar mass operator), BP3 (scalar Laplace operator), Navier-Stokes |
| `nek5000` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
v0.3 (Sep 30, 2018)
Notable features in this release include an active/passive field interface, support for non-tensor bases, backend optimization, and an improved Fortran interface. This release also focused on providing improved continuous integration and many new tests, with code coverage of about 90%. This release also provided a significant change to the public interface: a CeedQFunction can take any number of named input and output arguments, while CeedOperator connects them to the actual data, which may be supplied explicitly to `CeedOperatorApply()` (active) or separately via `CeedOperatorSetField()` (passive); see the sketch below. This interface change enables reusable libraries of CeedQFunctions and composition of block solvers constructed using CeedOperator. A concept of blocked restriction was added in this release and used in an optimized CPU backend. Although this is typically not visible to the user, it enables effective use of arbitrary-length SIMD while maintaining cache locality. This CPU backend also implements an algebraic factorization of tensor product gradients to perform fewer operations than the standard application of interpolation and differentiation from nodes to quadrature points. This algebraic formulation automatically supports non-polynomial and non-interpolatory bases, and is thus more general than the more common derivation in terms of Lagrange polynomials on the quadrature points.
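The active/passive distinction is sketched below under stated assumptions: the helper and variable names are illustrative, and the `CeedOperatorSetField()` signature shown is the later one (the v0.3 interface also carried an `lmode` argument, removed in v0.6).

```c
#include <ceed.h>

// Passive field: qdata is bound now. Active fields: "u" and "v" are
// placeholders filled by the vectors passed to CeedOperatorApply().
void setup_mass_operator(CeedOperator op, CeedElemRestriction rstr_q,
                         CeedElemRestriction rstr_u, CeedBasis basis_u,
                         CeedVector qdata) {
  CeedOperatorSetField(op, "qdata", rstr_q, CEED_BASIS_COLLOCATED, qdata);
  CeedOperatorSetField(op, "u", rstr_u, basis_u, CEED_VECTOR_ACTIVE);
  CeedOperatorSetField(op, "v", rstr_u, basis_u, CEED_VECTOR_ACTIVE);
}
```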
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self/blocked` | Blocked reference implementation |
| `/cpu/self/ref` | Serial reference implementation |
| `/cpu/self/tmpl` | Backend template, defaults to `/cpu/self/blocked` |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
| `/ocl/occa` | OpenCL OCCA kernels |
| `/gpu/magma` | CUDA MAGMA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (volume) |
| `mfem` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
| `petsc` | BP1 (scalar mass operator) |
| `nek5000` | BP1 (scalar mass operator) |
v0.21 (Sep 30, 2018)
A MAGMA backend (which relies upon the MAGMA package) was integrated into libCEED for this release. This initial integration set up the framework for using MAGMA and provided the libCEED functionality through MAGMA kernels as one of libCEED's computational backends. Like any other backend, the MAGMA backend provides extended basic data structures for CeedVector, CeedElemRestriction, and CeedOperator, and implements the fundamental CEED building blocks to work with the new data structures. In general, the MAGMA-specific data structures keep the libCEED pointers to CPU data but also add corresponding device (e.g., GPU) pointers to the data. Coherency is handled internally, and thus seamlessly for the user, through the functions/methods provided to support them.
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self` | Serial reference implementation |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
| `/ocl/occa` | OpenCL OCCA kernels |
| `/gpu/magma` | CUDA MAGMA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (volume) |
| `mfem` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
| `petsc` | BP1 (scalar mass operator) |
| `nek5000` | BP1 (scalar mass operator) |
v0.2 (Mar 30, 2018)
libCEED was made publicly available in the first full CEED software distribution, release CEED 1.0. The distribution was made available using the Spack package manager to provide a common, easy-to-use build environment, where the user can build the CEED distribution with all dependencies. This release included a new Fortran interface for the library. This release also contained major improvements in the OCCA backend (including a new `/ocl/occa` backend) and new examples. The standalone libCEED example was modified to compute the volume of a given mesh (in 1D, 2D, or 3D) and placed in an `examples/ceed` subfolder. A new `mfem` example to perform BP3 (with the application of the Laplace operator) was also added to this release.
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self` | Serial reference implementation |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
| `/ocl/occa` | OpenCL OCCA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (volume) |
| `mfem` | BP1 (scalar mass operator), BP3 (scalar Laplace operator) |
| `petsc` | BP1 (scalar mass operator) |
| `nek5000` | BP1 (scalar mass operator) |
v0.1 (Jan 3, 2018)
Initial low-level API of the CEED project. The low-level API provides a set of finite element kernels and components for writing new low-level kernels. Examples include: vector and sparse linear algebra, element matrix assembly over a batch of elements, partial assembly and action for efficient high-order operators like mass, diffusion, advection, etc. The main goal of the low-level API is to establish the basis for the high-level API. Also, identifying such low-level kernels and providing a reference implementation for them serves as the basis for specialized backend implementations.
This release contained several backends: `/cpu/self`, and backends which rely upon the OCCA package, such as `/cpu/occa`, `/gpu/occa`, and `/omp/occa`. It also included several examples, in the `examples` folder: a standalone code that shows the usage of libCEED (with no external dependencies) to apply the Laplace operator, `ex1`; an `mfem` example to perform BP1 (with the application of the mass operator); and a `petsc` example to perform BP1 (with the application of the mass operator).
Backends available in this release:
| CEED resource (`-ceed`) | Backend |
|---|---|
| `/cpu/self` | Serial reference implementation |
| `/cpu/occa` | Serial OCCA kernels |
| `/gpu/occa` | CUDA OCCA kernels |
| `/omp/occa` | OpenMP OCCA kernels |
Examples available in this release:
| User code | Example |
|---|---|
| `ceed` | ex1 (scalar Laplace operator) |
| `mfem` | BP1 (scalar mass operator) |
| `petsc` | BP1 (scalar mass operator) |