# Interface Concepts

This page provides a brief description of the theoretical foundations and the practical implementation of the libCEED library.

Developers may also want to consult the automatically updated Doxygen documentation.

## Theoretical Framework

In finite element formulations, the weak form of a Partial Differential Equation (PDE) is evaluated on a subdomain \(\Omega_e\) (element) and the local results are composed into a larger system of equations that models the entire problem on the global domain \(\Omega\). In particular, when high-order finite elements or spectral elements are used, the resulting sparse matrix representation of the global operator is computationally expensive, with respect to both the memory transfer and floating point operations needed for its evaluation. libCEED provides an interface for matrix-free operator description that enables efficient evaluation on a variety of computational device types (selectable at run time). We present here the notation and the mathematical formulation adopted in libCEED.

We start by considering the discrete residual \(F(u)=0\) formulation in weak form. We first define the \(L^2\) inner product between real-valued functions

\[\langle v, u \rangle = \int_\Omega v u \, d \bm x,\]

where \(\bm{x} \in \mathbb{R}^d \supset \Omega\).

We want to find \(u\) in a suitable space \(V_D\), such that

\[\int_\Omega \bm v \cdot \bm f_0(u, \nabla u) + \nabla \bm v \!:\! \bm f_1(u, \nabla u) \, d\bm x = 0 \tag{1}\]

for all \(\bm v\) in the corresponding homogeneous space \(V_0\), where \(\bm f_0\) and \(\bm f_1\) contain all possible sources in the problem. We notice here that \(\bm f_0\) represents all terms in (1) which multiply the (possibly vector-valued) test function \(\bm v\), and \(\bm f_1\) all terms which multiply its gradient \(\nabla \bm v\). For an \(n\)-component problem in \(d\) dimensions, \(\bm f_0 \in \mathbb{R}^n\) and \(\bm f_1 \in \mathbb{R}^{nd}\).

Note

The notation \(\nabla \bm v \!:\! \bm f_1\) represents contraction over both fields and spatial dimensions while a single dot represents contraction in just one, which should be clear from context, e.g., \(\bm v \cdot \bm f_0\) contracts only over fields.

Note

In the code, the function that represents the weak form at quadrature points is called the CeedQFunction. In the Examples provided with the library (in the `examples/` directory), we store the term \(\bm f_0\) directly into v, and the term \(\bm f_1\) directly into dv (which stands for \(\nabla \bm v\)). If equation (1) only contains a term of the type \(\bm f_0\), the CeedQFunction will have only one output argument, namely v. If equation (1) also contains a term of the type \(\bm f_1\), then the CeedQFunction will have two output arguments, namely v and dv.

## Finite Element Operator Decomposition

Finite element operators are typically defined through weak formulations of
partial differential equations that involve integration over a computational
mesh. The required integrals are computed by splitting them as a sum over the
mesh elements, mapping each element to a simple *reference* element (e.g. the
unit square) and applying a quadrature rule in reference space.

This sequence of operations highlights an inherent hierarchical structure
present in all finite element operators where the evaluation starts on *global
(trial) degrees of freedom (dofs) or nodes on the whole mesh*, restricts to
*dofs on subdomains* (groups of elements), then moves to independent
*dofs on each element*, transitions to independent *quadrature points* in
reference space, performs the integration, and then goes back in reverse order
to global (test) degrees of freedom on the whole mesh.

This is illustrated below for the simple case of a symmetric linear operator on
third-order (\(Q_3\)) scalar continuous (\(H^1\)) elements, where we use
the notions **T-vector**, **L-vector**, **E-vector** and **Q-vector** to represent
the sets corresponding to the (true) degrees of freedom on the global mesh, the split
local degrees of freedom on the subdomains, the split degrees of freedom on the
mesh elements, and the values at quadrature points, respectively.

We refer to the operators that connect the different types of vectors as:

- Subdomain restriction \(\bm{P}\)
- Element restriction \(\bm{G}\)
- Basis (Dofs-to-Qpts) evaluator \(\bm{B}\)
- Operator at quadrature points \(\bm{D}\)

More generally, when the test and trial space differ, they get their own versions of \(\bm{P}\), \(\bm{G}\) and \(\bm{B}\).

Note that in the case of adaptive mesh refinement (AMR), the restrictions \(\bm{P}\) and \(\bm{G}\) will involve not just extracting sub-vectors, but evaluating values at constrained degrees of freedom through the AMR interpolation. There can also be several levels of subdomains (\(\bm P_1\), \(\bm P_2\), etc.), and it may be convenient to split \(\bm{D}\) as the product of several operators (\(\bm D_1\), \(\bm D_2\), etc.).

### Terminology and Notation

Vector representation/storage categories:

- True degrees of freedom/unknowns, **T-vector**:
  - each unknown \(i\) has exactly one copy, on exactly one processor, \(rank(i)\)
  - this is a non-overlapping vector decomposition
  - usually includes any essential (fixed) dofs.
- Local (w.r.t. processors) degrees of freedom/unknowns, **L-vector**:
  - each unknown \(i\) has exactly one copy on each processor that owns an element containing \(i\)
  - this is an overlapping vector decomposition with overlaps only across different processors—there is no duplication of unknowns on a single processor
  - the shared dofs/unknowns are the overlapping dofs, i.e. the ones that have more than one copy, on different processors.
- Per element decomposition, **E-vector**:
  - each unknown \(i\) has as many copies as the number of elements that contain \(i\)
  - usually, the copies of the unknowns are grouped by the element they belong to.

In the case of AMR with hanging nodes (giving rise to hanging dofs):

- the **L-vector** is enhanced with the hanging/dependent dofs
  - the additional hanging/dependent dofs are duplicated when they are shared by multiple processors
- this way, an **E-vector** can be derived from an **L-vector** without any communications and without additional computations to derive the dependent dofs
- in other words, an entry in an **E-vector** is obtained by copying an entry from the corresponding **L-vector**, optionally switching the sign of the entry (for \(H(\mathrm{div})\)- and \(H(\mathrm{curl})\)-conforming spaces).

In the case of variable order spaces:

- the dependent dofs (usually on the higher-order side of a face/edge) can be treated just like the hanging/dependent dofs case.

Quadrature point vector, **Q-vector**:

- this is similar to an **E-vector** where instead of dofs, the vector represents values at quadrature points, grouped by element.

In many cases it is useful to distinguish two types of vectors:

- **X-vector**, or **primal X-vector**, and **X’-vector**, or **dual X-vector**
- here X can be any of the T, L, E, or Q categories
- for example, the mass matrix operator maps a **T-vector** to a **T’-vector**
- the solution vector is a **T-vector**, and the RHS vector is a **T’-vector**
- using the parallel prolongation operator, one can map the solution **T-vector** to a solution **L-vector**, etc.

Operator representation/storage/action categories:

- Full true-dof parallel assembly, **TA**, or **A**:
  - ParCSR or similar format
  - the T in TA indicates that the data format represents an operator from a **T-vector** to a **T’-vector**.
- Full local assembly, **LA**:
  - CSR matrix on each rank
  - the parallel prolongation operator, \(\bm{P}\), (and its transpose) should use optimized matrix-free action
  - note that \(\bm{P}\) is the operator mapping T-vectors to L-vectors.
- Element matrix assembly, **EA**:
  - each element matrix is stored as a dense matrix
  - optimized element and parallel prolongation operators
  - note that the element prolongation operator is the mapping from an **L-vector** to an **E-vector**.
- Quadrature-point/partial assembly, **QA** or **PA**:
  - precompute and store \(w\det(J)\) at all quadrature points in all mesh elements
  - the stored data can be viewed as a **Q-vector**.
- Unassembled option, **UA** or **U**:
  - no assembly step
  - the action uses directly the mesh node coordinates, and assumes a specific form of the coefficient, e.g. constant, piecewise-constant, or given as a **Q-vector** (Q-coefficient).

### Partial Assembly

Since the global operator \(\bm{A}\) is just a series of variational restrictions
with \(\bm{B}\), \(\bm{G}\) and \(\bm{P}\), starting from its
point-wise kernel \(\bm{D}\), a “matvec” with \(\bm{A}\) can be
performed by evaluating and storing some of the innermost variational restriction
matrices, and applying the rest of the operators “on-the-fly”. For example, one can
compute and store a global matrix on **T-vector** level. Alternatively, one can compute
and store only the subdomain (**L-vector**) or element (**E-vector**) matrices and
perform the action of \(\bm{A}\) using matvecs with \(\bm{P}\) or
\(\bm{P}\) and \(\bm{G}\). While these options are natural for
low-order discretizations, they are not a good fit for high-order methods due to
the amount of FLOPs needed for their evaluation, as well as the memory transfer
needed for a matvec.

Our focus in libCEED, instead, is on **partial assembly**, where we compute and
store only \(\bm{D}\) (or portions of it) and evaluate the actions of
\(\bm{P}\), \(\bm{G}\) and \(\bm{B}\) on-the-fly.
Critically for performance, we take advantage of the tensor-product structure of the
degrees of freedom and quadrature points on *quad* and *hex* elements to perform the
action of \(\bm{B}\) without storing it as a matrix.

Implemented properly, the partial assembly algorithm requires an optimal amount
of memory transfer (with respect to the polynomial order) and near-optimal FLOPs
for operator evaluation. It consists of an operator *setup* phase that
evaluates and stores \(\bm{D}\), and an operator *apply* (evaluation) phase that
computes the action of \(\bm{A}\) on an input vector. When desired, the setup
phase may be done as a side-effect of evaluating a different operator, such as a
nonlinear residual. The relative costs of the setup and apply phases are
different depending on the physics being expressed and the representation of
\(\bm{D}\).

### Parallel Decomposition

After the application of each of the first three transition operators, \(\bm{P}\), \(\bm{G}\) and \(\bm{B}\), the operator evaluation is decoupled on their ranges, so \(\bm{P}\), \(\bm{G}\) and \(\bm{B}\) allow us to “zoom-in” to subdomain, element and quadrature point level, ignoring the coupling at higher levels.

Thus, a natural mapping of \(\bm{A}\) on a parallel computer is to split the
**T-vector** over MPI ranks (a non-overlapping decomposition, as is typically
used for sparse matrices), and then split the rest of the vector types over
computational devices (CPUs, GPUs, etc.) as indicated by the shaded regions in
the diagram above.

One of the advantages of the decomposition perspective in these settings is that the operators \(\bm{P}\), \(\bm{G}\), \(\bm{B}\) and \(\bm{D}\) clearly separate the MPI parallelism in the operator (\(\bm{P}\)) from the unstructured mesh topology (\(\bm{G}\)), the choice of the finite element space/basis (\(\bm{B}\)) and the geometry and point-wise physics \(\bm{D}\). These components also naturally fall in different classes of numerical algorithms – parallel (multi-device) linear algebra for \(\bm{P}\), sparse (on-device) linear algebra for \(\bm{G}\), dense/structured linear algebra (tensor contractions) for \(\bm{B}\) and parallel point-wise evaluations for \(\bm{D}\).

Currently in libCEED, it is assumed that the host application manages the global
**T-vectors** and the required communications among devices (which are generally
on different compute nodes) with **P**. Our API is thus focused on the
**L-vector** level, where the logical devices, which in the library are
represented by the `Ceed` object, are independent. Each MPI rank can use one or
more `Ceed`s, and each `Ceed`, in turn, can represent one or more physical
devices, as long as libCEED backends support such configurations. The idea is
that every MPI rank can use any logical device it is assigned at runtime. For
example, on a node with 2 CPU sockets and 4 GPUs, one may decide to use 6 MPI
ranks (each using a single `Ceed` object): 2 ranks using 1 CPU socket each, and
4 using 1 GPU each. Another choice could be to run 1 MPI rank on the whole node
and use 5 `Ceed` objects: 1 managing all CPU cores on the 2 sockets and 4
managing 1 GPU each.

The communications among the devices, e.g. required for applying the action of
\(\bm{P}\), are currently out of scope of libCEED. The interface is
non-blocking for all operations involving more than O(1) data, allowing
operations performed on a coprocessor or worker threads to overlap with
operations on the host.

## API Description

The libCEED API takes an algebraic approach, where the user essentially
describes in the *frontend* the operators **G**, **B** and **D** and the library
provides *backend* implementations and coordinates their action to the original
operator on **L-vector** level (i.e. independently on each device / MPI task).

One of the advantages of this purely algebraic description is that it already includes all the finite element information, so the backends can operate on the linear algebra level without explicit finite element code. The frontend description is general enough to support a wide variety of finite element algorithms, as well as some other types of algorithms such as spectral finite differences. The separation of the front- and backends enables applications to easily switch/try different backends. It also enables backend developers to impact many applications from a single implementation.

Our long-term vision is to include a variety of backend implementations in libCEED, ranging from reference kernels to highly optimized kernels targeting specific devices (e.g. GPUs) or specific polynomial orders. A simple reference backend implementation is provided in the file ceed-ref.c.

On the frontend, the mapping between the decomposition concepts and the code implementation is as follows:

- **L-**, **E-** and **Q-vectors** are represented as variables of type CeedVector. (A backend may choose to operate incrementally without forming explicit **E-** or **Q-vectors**.)
- \(\bm{G}\) is represented as a variable of type CeedElemRestriction.
- \(\bm{B}\) is represented as a variable of type CeedBasis.
- the action of \(\bm{D}\) is represented as a variable of type CeedQFunction.
- the overall operator \(\bm{G}^T \bm{B}^T \bm{D} \bm{B} \bm{G}\) is represented as a variable of type CeedOperator and its action is accessible through `CeedOperatorApply()`.

To clarify these concepts and illustrate how they are combined in the API, consider the implementation of the action of a simple 1D mass matrix (cf. tests/t500-operator.c).

```
/// @file
/// Test creation, action, and destruction for mass matrix operator
/// \test Test creation, action, and destruction for mass matrix operator
#include <ceed.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "t500-operator.h"

int main(int argc, char **argv) {
  Ceed ceed;
  CeedElemRestriction Erestrictx, Erestrictu, Erestrictui;
  CeedBasis bx, bu;
  CeedQFunction qf_setup, qf_mass;
  CeedOperator op_setup, op_mass;
  CeedVector qdata, X, U, V;
  const CeedScalar *hv;
  CeedInt nelem = 15, P = 5, Q = 8;
  CeedInt Nx = nelem+1, Nu = nelem*(P-1)+1;
  CeedInt indx[nelem*2], indu[nelem*P];
  CeedScalar x[Nx];

  //! [Ceed Init]
  CeedInit(argv[1], &ceed);
  //! [Ceed Init]
  for (CeedInt i=0; i<Nx; i++)
    x[i] = (CeedScalar) i / (Nx - 1);
  for (CeedInt i=0; i<nelem; i++) {
    indx[2*i+0] = i;
    indx[2*i+1] = i+1;
  }
  //! [ElemRestr Create]
  CeedElemRestrictionCreate(ceed, nelem, 2, 1, 1, Nx, CEED_MEM_HOST,
                            CEED_USE_POINTER, indx, &Erestrictx);
  //! [ElemRestr Create]
  for (CeedInt i=0; i<nelem; i++) {
    for (CeedInt j=0; j<P; j++) {
      indu[P*i+j] = i*(P-1) + j;
    }
  }
  //! [ElemRestrU Create]
  CeedElemRestrictionCreate(ceed, nelem, P, 1, 1, Nu, CEED_MEM_HOST,
                            CEED_USE_POINTER, indu, &Erestrictu);
  CeedInt stridesu[3] = {1, Q, Q};
  CeedElemRestrictionCreateStrided(ceed, nelem, Q, 1, Q*nelem, stridesu,
                                   &Erestrictui);
  //! [ElemRestrU Create]
  //! [Basis Create]
  CeedBasisCreateTensorH1Lagrange(ceed, 1, 1, 2, Q, CEED_GAUSS, &bx);
  CeedBasisCreateTensorH1Lagrange(ceed, 1, 1, P, Q, CEED_GAUSS, &bu);
  //! [Basis Create]
  //! [QFunction Create]
  CeedQFunctionCreateInterior(ceed, 1, setup, setup_loc, &qf_setup);
  CeedQFunctionAddInput(qf_setup, "_weight", 1, CEED_EVAL_WEIGHT);
  CeedQFunctionAddInput(qf_setup, "dx", 1, CEED_EVAL_GRAD);
  CeedQFunctionAddOutput(qf_setup, "rho", 1, CEED_EVAL_NONE);
  CeedQFunctionCreateInterior(ceed, 1, mass, mass_loc, &qf_mass);
  CeedQFunctionAddInput(qf_mass, "rho", 1, CEED_EVAL_NONE);
  CeedQFunctionAddInput(qf_mass, "u", 1, CEED_EVAL_INTERP);
  CeedQFunctionAddOutput(qf_mass, "v", 1, CEED_EVAL_INTERP);
  //! [QFunction Create]
  //! [Setup Create]
  CeedOperatorCreate(ceed, qf_setup, CEED_QFUNCTION_NONE, CEED_QFUNCTION_NONE,
                     &op_setup);
  //! [Setup Create]
  //! [Operator Create]
  CeedOperatorCreate(ceed, qf_mass, CEED_QFUNCTION_NONE, CEED_QFUNCTION_NONE,
                     &op_mass);
  //! [Operator Create]
  CeedVectorCreate(ceed, Nx, &X);
  CeedVectorSetArray(X, CEED_MEM_HOST, CEED_USE_POINTER, x);
  CeedVectorCreate(ceed, nelem*Q, &qdata);
  //! [Setup Set]
  CeedOperatorSetField(op_setup, "_weight", CEED_ELEMRESTRICTION_NONE, bx,
                       CEED_VECTOR_NONE);
  CeedOperatorSetField(op_setup, "dx", Erestrictx, bx, CEED_VECTOR_ACTIVE);
  CeedOperatorSetField(op_setup, "rho", Erestrictui, CEED_BASIS_COLLOCATED,
                       CEED_VECTOR_ACTIVE);
  //! [Setup Set]
  //! [Operator Set]
  CeedOperatorSetField(op_mass, "rho", Erestrictui, CEED_BASIS_COLLOCATED,
                       qdata);
  CeedOperatorSetField(op_mass, "u", Erestrictu, bu, CEED_VECTOR_ACTIVE);
  CeedOperatorSetField(op_mass, "v", Erestrictu, bu, CEED_VECTOR_ACTIVE);
  //! [Operator Set]
  //! [Setup Apply]
  CeedOperatorApply(op_setup, X, qdata, CEED_REQUEST_IMMEDIATE);
  //! [Setup Apply]
  CeedVectorCreate(ceed, Nu, &U);
  CeedVectorSetValue(U, 0.0);
  CeedVectorCreate(ceed, Nu, &V);
  //! [Operator Apply]
  CeedOperatorApply(op_mass, U, V, CEED_REQUEST_IMMEDIATE);
  //! [Operator Apply]
  CeedVectorGetArrayRead(V, CEED_MEM_HOST, &hv);
  for (CeedInt i=0; i<Nu; i++)
    if (fabs(hv[i]) > 1e-14) printf("[%d] v %g != 0.0\n", i, hv[i]);
  CeedVectorRestoreArrayRead(V, &hv);
  CeedQFunctionDestroy(&qf_setup);
  CeedQFunctionDestroy(&qf_mass);
  CeedOperatorDestroy(&op_setup);
  CeedOperatorDestroy(&op_mass);
  CeedElemRestrictionDestroy(&Erestrictu);
  CeedElemRestrictionDestroy(&Erestrictx);
  CeedElemRestrictionDestroy(&Erestrictui);
  CeedBasisDestroy(&bu);
  CeedBasisDestroy(&bx);
  CeedVectorDestroy(&X);
  CeedVectorDestroy(&U);
  CeedVectorDestroy(&V);
  CeedVectorDestroy(&qdata);
  CeedDestroy(&ceed);
  return 0;
}
```

The constructor

```
CeedInit(argv[1], &ceed);
```

creates a logical device `ceed` on the specified *resource*, which could also be
a coprocessor such as `"/nvidia/0"`. There can be any number of such devices,
including multiple logical devices driving the same resource (though performance
may suffer in case of oversubscription). The resource is used to locate a
suitable backend which will have discretion over the implementations of all
objects created with this logical device.

The `setup` routine above computes and stores \(\bm{D}\), in this case a scalar
value at each quadrature point, while `mass` uses these saved values to perform
the action of \(\bm{D}\). These functions are turned into the `CeedQFunction`
variables `qf_setup` and `qf_mass` in the `CeedQFunctionCreateInterior()`
calls:

```
CeedQFunctionCreateInterior(ceed, 1, setup, setup_loc, &qf_setup);
CeedQFunctionAddInput(qf_setup, "_weight", 1, CEED_EVAL_WEIGHT);
CeedQFunctionAddInput(qf_setup, "dx", 1, CEED_EVAL_GRAD);
CeedQFunctionAddOutput(qf_setup, "rho", 1, CEED_EVAL_NONE);
CeedQFunctionCreateInterior(ceed, 1, mass, mass_loc, &qf_mass);
CeedQFunctionAddInput(qf_mass, "rho", 1, CEED_EVAL_NONE);
CeedQFunctionAddInput(qf_mass, "u", 1, CEED_EVAL_INTERP);
CeedQFunctionAddOutput(qf_mass, "v", 1, CEED_EVAL_INTERP);
```

A CeedQFunction performs independent operations at each quadrature point and
the interface is intended to facilitate vectorization. The second argument is
an expected vector length. If greater than 1, the caller must ensure that the
number of quadrature points `Q`

is divisible by the vector length. This is
often satisfied automatically due to the element size or by batching elements
together to facilitate vectorization in other stages, and can always be ensured
by padding.

In addition to the function pointers (`setup` and `mass`), CeedQFunction
constructors take a string representation specifying where the source for the
implementation is found. This is used by backends that support Just-In-Time
(JIT) compilation (i.e., CUDA and OCCA) to compile for coprocessors.

Different input and output fields are added individually, specifying the field name, size of the field, and evaluation mode.

The size of the field is provided by a combination of the number of components and the effect of any basis evaluations.

The evaluation mode (see Typedefs and Enumerations) `CEED_EVAL_INTERP` for both input and output fields indicates that the mass operator only contains terms of the form

\[\int_\Omega v \cdot f_0(u, \nabla u) \, d\bm x,\]

where \(v\) are test functions (see the Theoretical Framework). More general operators, such as those of the form

\[\int_\Omega v \cdot f_0(u, \nabla u) + \nabla v \!:\! f_1(u, \nabla u) \, d\bm x,\]

can be expressed.

For fields with derivatives, such as with the basis evaluation mode
(see Typedefs and Enumerations) `CEED_EVAL_GRAD`, the size of the
field needs to reflect both the number of components and the geometric dimension.
A 3-dimensional gradient on four components would therefore mean the field has a
size of \(4 \cdot 3 = 12\).

The \(\bm{B}\) operators for the mesh nodes, `bx`, and the unknown field, `bu`,
are defined in the calls to the function `CeedBasisCreateTensorH1Lagrange()`.
In this example, the mesh and the unknown field use \(H^1\) Lagrange finite
elements of order 1 and 4, respectively (the `P` argument represents the number
of 1D degrees of freedom on each element). Both basis operators use the same
integration rule, which is Gauss-Legendre with 8 points (the `Q` argument).

```
CeedBasisCreateTensorH1Lagrange(ceed, 1, 1, 2, Q, CEED_GAUSS, &bx);
CeedBasisCreateTensorH1Lagrange(ceed, 1, 1, P, Q, CEED_GAUSS, &bu);
```

Other elements with this structure can be specified in terms of the `Q×P`
matrices that evaluate values and gradients at quadrature points in one
dimension using `CeedBasisCreateTensorH1()`. Elements that do not have tensor
product structure, such as symmetric elements on simplices, will be created
using different constructors.

The \(\bm{G}\) operators for the mesh nodes, `Erestrictx`, and the unknown
field, `Erestrictu`, are specified in the `CeedElemRestrictionCreate()` calls.
Both of these specify directly the dof indices for each element in the `indx`
and `indu` arrays:

```
CeedElemRestrictionCreate(ceed, nelem, 2, 1, 1, Nx, CEED_MEM_HOST,
CEED_USE_POINTER, indx, &Erestrictx);
```

```
CeedElemRestrictionCreate(ceed, nelem, P, 1, 1, Nu, CEED_MEM_HOST,
CEED_USE_POINTER, indu, &Erestrictu);
CeedInt stridesu[3] = {1, Q, Q};
CeedElemRestrictionCreateStrided(ceed, nelem, Q, 1, Q*nelem, stridesu,
&Erestrictui);
```

If the user has arrays available on a device, they can be provided using
`CEED_MEM_DEVICE`

. This technique is used to provide no-copy interfaces in all
contexts that involve problem-sized data.

For discontinuous Galerkin and for applications such as Nek5000 that only
explicitly store **E-vectors** (inter-element continuity has been subsumed by
the parallel restriction \(\bm{P}\)), the element restriction \(\bm{G}\)
is the identity and `CeedElemRestrictionCreateStrided()`

is used instead.
We plan to support other structured representations of \(\bm{G}\) which will
be added according to demand. In the case of non-conforming mesh elements,
\(\bm{G}\) needs a more general representation that expresses values at slave
nodes (which do not appear in **L-vectors**) as linear combinations of the degrees of
freedom at master nodes.

These operations, \(\bm{G}\), \(\bm{B}\), and \(\bm{D}\),
are combined with a `CeedOperator`. As with QFunctions, operator fields are added
separately with a matching field name, basis (\(\bm{B}\)), element restriction
(\(\bm{G}\)), and **L-vector**. The flag `CEED_VECTOR_ACTIVE`
indicates that the vector corresponding to that field will
be provided to the operator when `CeedOperatorApply()` is called. Otherwise the
input/output will be read from/written to the specified **L-vector**.

With partial assembly, we first perform a setup stage where \(\bm{D}\) is
evaluated and stored. This is accomplished by the operator `op_setup` and its
application to `X`, the nodes of the mesh (these are needed to compute Jacobians
at quadrature points). Note that the corresponding `CeedOperatorApply()` has no
basis evaluation on the output, as the quadrature data is not needed at the
dofs:

```
CeedOperatorCreate(ceed, qf_setup, CEED_QFUNCTION_NONE, CEED_QFUNCTION_NONE,
&op_setup);
```

```
CeedOperatorSetField(op_setup, "_weight", CEED_ELEMRESTRICTION_NONE, bx,
CEED_VECTOR_NONE);
CeedOperatorSetField(op_setup, "dx", Erestrictx, bx, CEED_VECTOR_ACTIVE);
CeedOperatorSetField(op_setup, "rho", Erestrictui, CEED_BASIS_COLLOCATED,
CEED_VECTOR_ACTIVE);
```

```
CeedOperatorApply(op_setup, X, qdata, CEED_REQUEST_IMMEDIATE);
```

The action of the operator is then represented by operator `op_mass` and its
`CeedOperatorApply()` to the input **L-vector** `U` with output in `V`:

```
CeedOperatorCreate(ceed, qf_mass, CEED_QFUNCTION_NONE, CEED_QFUNCTION_NONE,
&op_mass);
```

```
CeedOperatorSetField(op_mass, "rho", Erestrictui, CEED_BASIS_COLLOCATED,
qdata);
CeedOperatorSetField(op_mass, "u", Erestrictu, bu, CEED_VECTOR_ACTIVE);
CeedOperatorSetField(op_mass, "v", Erestrictu, bu, CEED_VECTOR_ACTIVE);
```

```
CeedOperatorApply(op_mass, U, V, CEED_REQUEST_IMMEDIATE);
```

A number of function calls in the interface, such as `CeedOperatorApply()`, are
intended to support asynchronous execution via their last argument,
`CeedRequest*`. The specific (pointer) value used in the above example,
`CEED_REQUEST_IMMEDIATE`, is used to express the request (from the user) for the
operation to complete before returning from the function call, i.e. to make sure
that the result of the operation is available in the output parameters
immediately after the call. For a true asynchronous call, one needs to provide
the address of a user-defined variable. Such a variable can be used later to
explicitly wait for the completion of the operation.

## Gallery of QFunctions

LibCEED provides a gallery of built-in QFunctions in the `gallery/` directory.
The available QFunctions are the ones associated with the mass, the Laplacian,
and the identity operators. To illustrate how the user can declare a
CeedQFunction via the gallery of available QFunctions, consider the selection of
the CeedQFunction associated with a simple 1D mass matrix
(cf. tests/t410-qfunction.c).

```
/// @file
/// Test creation, evaluation, and destruction for qfunction by name
/// \test Test creation, evaluation, and destruction for qfunction by name
#include <ceed.h>
#include <stdio.h>

int main(int argc, char **argv) {
  Ceed ceed;
  CeedVector in[16], out[16];
  CeedVector Qdata, J, W, U, V;
  CeedQFunction qf_setup, qf_mass;
  CeedInt Q = 8;
  const CeedScalar *vv;
  CeedScalar j[Q], w[Q], u[Q], v[Q];

  CeedInit(argv[1], &ceed);
  CeedQFunctionCreateInteriorByName(ceed, "Mass1DBuild", &qf_setup);
  CeedQFunctionCreateInteriorByName(ceed, "MassApply", &qf_mass);
  for (CeedInt i=0; i<Q; i++) {
    CeedScalar x = 2.*i/(Q-1) - 1;
    j[i] = 1;
    w[i] = 1 - x*x;
    u[i] = 2 + 3*x + 5*x*x;
    v[i] = w[i] * u[i];
  }
  CeedVectorCreate(ceed, Q, &J);
  CeedVectorSetArray(J, CEED_MEM_HOST, CEED_USE_POINTER, j);
  CeedVectorCreate(ceed, Q, &W);
  CeedVectorSetArray(W, CEED_MEM_HOST, CEED_USE_POINTER, w);
  CeedVectorCreate(ceed, Q, &U);
  CeedVectorSetArray(U, CEED_MEM_HOST, CEED_USE_POINTER, u);
  CeedVectorCreate(ceed, Q, &V);
  CeedVectorSetValue(V, 0);
  CeedVectorCreate(ceed, Q, &Qdata);
  CeedVectorSetValue(Qdata, 0);
  {
    in[0] = J;
    in[1] = W;
    out[0] = Qdata;
    CeedQFunctionApply(qf_setup, Q, in, out);
  }
  {
    in[0] = W;
    in[1] = U;
    out[0] = V;
    CeedQFunctionApply(qf_mass, Q, in, out);
  }
  CeedVectorGetArrayRead(V, CEED_MEM_HOST, &vv);
  for (CeedInt i=0; i<Q; i++)
    if (v[i] != vv[i])
      // LCOV_EXCL_START
      printf("[%d] v %f != vv %f\n", i, v[i], vv[i]);
      // LCOV_EXCL_STOP
  CeedVectorRestoreArrayRead(V, &vv);
  CeedVectorDestroy(&J);
  CeedVectorDestroy(&W);
  CeedVectorDestroy(&U);
  CeedVectorDestroy(&V);
  CeedVectorDestroy(&Qdata);
  CeedQFunctionDestroy(&qf_setup);
  CeedQFunctionDestroy(&qf_mass);
  CeedDestroy(&ceed);
  return 0;
}
```

## Interface Principles and Evolution

LibCEED is intended to be extensible via backends that are packaged with the library and packaged separately (possibly as a binary containing proprietary code). Backends are registered by calling

```
CeedRegister("/cpu/self/ref/serial", CeedInit_Ref, 50);
```

typically in a library initializer or “constructor” that runs automatically.
`CeedInit` uses this prefix to find an appropriate backend for the resource.

Source (API) and binary (ABI) stability are important to libCEED. LibCEED is evolving rapidly at present, but we expect it to stabilize soon, at which point we will adopt semantic versioning. User code, including libraries of CeedQFunctions, will not need to be recompiled except between major releases. The backends currently have some dependence beyond the public user interface, but we intend to remove that dependence and will prioritize doing so if anyone expresses interest in distributing a backend outside the libCEED repository.