Spline Element Method for the Monge-Ampère Equation
Abstract.
We consider numerical approximations of the Monge-Ampère equation $\det D^2 u = f$ with Dirichlet boundary conditions on a convex bounded domain in $\mathbb{R}^n$. We make a comparative study of three existing methods suitable for finite element computations, and construct conforming approximations in the framework of the spline element method, where constraints and interelement continuities are enforced using Lagrange multipliers.
1. Introduction
This paper addresses the numerical solution of the Dirichlet problem for the Monge-Ampère equation
(1.1) $\det D^2 u = f$ in $\Omega$, $\quad u = g$ on $\partial \Omega$,
where $D^2 u$ is the Hessian of $u$ and $f, g$ are given functions with $f \geq 0$. The domain $\Omega$ is a convex domain with polygonal boundary $\partial \Omega$.
The above equation is fully nonlinear in the sense that it is nonlinear in the highest order derivatives. Fully nonlinear equations have in general multiple solutions, and even if the domain is smooth, the solution may not be smooth. For the Monge-Ampère equation, the notion of generalized solution in the sense of Alexandrov-Bakelman and that of viscosity solution [39] are the best known ways to give a meaning to the second derivatives even when the solution is not smooth. To a continuous convex function one associates the so-called Monge-Ampère measure, and (1.1) is said to have a solution in the sense of Alexandrov if the density of that measure with respect to the Lebesgue measure is equal to $f$. Continuous convex viscosity solutions are defined "in terms of certain inequalities holding wherever the graph of the solution is touched on one side or the other by a smooth test function" [44]. In the case of (1.1) the two notions are equivalent for continuous $f$, [39]. We will assume throughout this paper that $f$ is continuous on $\overline{\Omega}$ and $g$ continuous on $\partial \Omega$. Equation (1.1) then has at most two solutions, [24] p. 324, and a unique generalized solution in the class of convex functions, [1, 23]. In general, the theory of viscosity solutions [25, 19, 39] provides a framework for existence and uniqueness of solutions of fully nonlinear equations.
The more general Monge-Ampère equation has the form
(1.2) $\det D^2 u = f(x, u, \nabla u)$,
where $\nabla u$ denotes the gradient of $u$ and $f$ is a given Hamiltonian, at least continuous and nondecreasing in $u$. Equations of this type appear in various geometric and variational problems, e.g. the Monge-Kantorovich problem, and in kinetic theory. They also appear in applied fields such as meteorology, fluid mechanics, nonlinear elasticity, antenna design, material sciences and mathematical finance. A huge amount of literature on theoretical questions about these equations is available; a selection in the areas cited above includes [2, 58, 12, 3, 19, 49, 38, 26, 21].
Researchers working on the Monge-Kantorovich problem (MKP), c.f. [37] for background, have noted the problematic lack of good numerical solvers for Monge-Ampère type equations. Following [29], we quote from [15]: "It follows from this theoretical result that a natural computational solution of the MKP is the numerical resolution of the Monge-Ampère equation" ... "Unfortunately, this fully nonlinear second-order elliptic equation has not received much attention from numerical analysts and, to the best of our knowledge, there is no efficient finite-difference or finite-element methods, comparable to those developed for linear second-order elliptic equations (such as fast Poisson solvers, multigrid methods, preconditioned conjugate gradient methods, ...)."
Existing numerical work on Monge-Ampère type equations includes [50, 42, 20], where the generalized solution in the sense of Alexandrov-Bakelman is approximated directly. Other works with proven convergence results are [48, 35], where finite difference schemes satisfying the convergence conditions of [14] were constructed. There has been an explosion of recent numerical results for the Monge-Ampère equations. The recent papers [45, 46, 59, 18, 52] do not adequately address the situations where the Monge-Ampère equation does not have a smooth solution, c.f. Test 2 and Test 3 in Section 4. For progress in this direction we refer to the series of papers [28, 29, 27] and the vanishing moment method in [32, 33, 34]. Finite difference methods which compute viscosity solutions of the Monge-Ampère equation and an iterative method amenable to finite element computations were reported in [16, 36]. See also [17] for an optimization approach.
In this paper, we use the spline element method to compute numerical solutions of the Monge-Ampère equation. It is a conforming finite element implementation with Lagrange multipliers. We obtain conforming approximations for the three dimensional Monge-Ampère equation. We extend the convergence analysis of Newton's method due to [45] to bounded smooth domains using Schauder estimates proved in [56]; however, [45] did not address the convergence of the discrete approximations. We give error estimates for conforming approximations of a smooth solution. The Monge-Ampère equation leads to a noncoercive variational problem, a difficulty which is only partially handled by the vanishing moment method (the parameter $\epsilon$ cannot be taken equal to 0). We show that for smooth solutions, the spline element method is robust for the associated singular perturbation problem. The numerical results mainly examine the performance of three numerical methods, Newton's method, the vanishing moment method and the Benamou-Froese-Oberman iterative method, on three test functions suggested in [27]: a smooth radial solution, a nonsmooth solution for which no exact formula is known, and a solution not in $H^2(\Omega)$. We will refer to the Benamou-Froese-Oberman iterative method as the BFO iterative method, which we extend to three dimensions.
The paper is organized as follows: in the next section, we review the spline element discretization. The following section is devoted to the variational formulations associated with Newton's method, for which we give convergence results for smooth solutions, and with the vanishing moment method; there we also introduce the three dimensional version of the BFO iterative method. The last section is devoted to numerical experiments. We will use $C$ for a generic constant but will index specific constants.
2. Spline element discretization
The spline element method has been described in [4, 7, 8, 13, 41] under different names and more recently in [6]. It can be described as a conforming finite element implementation with Lagrange multipliers. We first outline the main steps of the method and discuss its advantages and a possible disadvantage. We then give more details of this approach but refer to the above references for explicit formulas.
First, one starts with a representation of a piecewise discontinuous polynomial as a vector in $\mathbb{R}^N$, for some integer $N$. Boundary conditions and constraints, including global continuity or smoothness conditions, are then expressed as linear relations. In our work, we use the Bernstein basis representation [4, 6], which is very convenient for expressing smoothness conditions and very popular in computer aided geometric design; hence the term "spline" in the name of the method. Splines are piecewise polynomials with smoothness properties. One then writes a discrete version of the equation along with discrete versions of the spaces of trial and test functions. The boundary conditions and constraints are enforced using Lagrange multipliers. This leads to saddle point problems which are solved by an augmented Lagrangian algorithm (sequences of linear systems). The approach here should be contrasted with approaches where Lagrange multipliers are introduced before discretization, e.g. in [9], or with discontinuous Galerkin methods.
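To make the Bernstein representation concrete, the sketch below (an illustration added here; the triangle, degree and test polynomial are arbitrary choices, not taken from the paper) evaluates a bivariate polynomial from its Bernstein-Bézier coefficients with the de Casteljau algorithm. Smoothness conditions between neighboring triangles are linear equations in exactly these coefficients, which is why they can be enforced through a constraint matrix.

```python
import numpy as np

def bary_coords(p, v1, v2, v3):
    # Barycentric coordinates (b1, b2, b3) of the point p in the triangle (v1, v2, v3).
    T = np.array([[v1[0] - v3[0], v2[0] - v3[0]],
                  [v1[1] - v3[1], v2[1] - v3[1]]])
    b12 = np.linalg.solve(T, np.asarray(p, dtype=float) - v3)
    return np.array([b12[0], b12[1], 1.0 - b12[0] - b12[1]])

def de_casteljau(coeffs, d, b):
    # Evaluate the polynomial with Bernstein-Bezier coefficients
    # coeffs = {(i, j, k): c_ijk, i + j + k = d} at the barycentric point b.
    c = dict(coeffs)
    for level in range(d, 0, -1):
        c = {(i, j, level - 1 - i - j):
                 b[0] * c[(i + 1, j, level - 1 - i - j)]
               + b[1] * c[(i, j + 1, level - 1 - i - j)]
               + b[2] * c[(i, j, level - i - j)]
             for i in range(level) for j in range(level - i)}
    return c[(0, 0, 0)]

v1, v2, v3 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = 3
f = lambda x, y: 1.0 + 2.0 * x + 3.0 * y   # linear test function
# Coefficients sampled at the domain points xi_ijk = (i v1 + j v2 + k v3)/d
# reproduce linear functions exactly (linear precision of the Bernstein operator).
coeffs = {(i, j, d - i - j): f(*((i * v1 + j * v2 + (d - i - j) * v3) / d))
          for i in range(d + 1) for j in range(d + 1 - i)}
b = bary_coords([0.25, 0.3], v1, v2, v3)
val = de_casteljau(coeffs, d, b)   # equals f(0.25, 0.3) = 2.4
```

The evaluation uses only convex combinations of coefficients, one reason the Bernstein basis is numerically well behaved.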
The spline element method stands out as a robust, flexible, efficient and accurate method. It can be applied to a wide range of PDEs in science and engineering in both two and three dimensions; constraints and smoothness are enforced exactly, and there is no need to implement basis functions with the required properties; it is particularly suitable for fourth order PDEs; no inf-sup condition is needed to approximate the Lagrange multipliers which arise due to the constraints, e.g. the pressure term in the Navier-Stokes equations; and one gets, in a single implementation, approximations of variable order. Other advantages of the method include the flexibility of using polynomials of different degrees on different elements [41], the ease of implementing boundary conditions and the simplicity of a posteriori error estimates, since the method is conforming for many problems. A possible disadvantage of this approach is the high number of degrees of freedom and the need to solve saddle point problems.
Let $\mathcal{T}_h$ be a conforming partition of $\Omega$ into triangles or tetrahedra. We consider a general variational problem: find $u \in V$ such that
(2.1) $a(u; v) = \langle L, v \rangle$ for all $v \in W$,
where $V$ and $W$ are respectively the spaces of trial and test functions. We will assume that the form $\langle L, v \rangle$ is bounded and linear and that $a$ is a continuous mapping, in some sense, on $V \times W$ which is linear in the argument $v$.
Let $V_h$ and $W_h$ be conforming subspaces of $V$ and $W$ respectively. Writing the discrete solution through its coefficient vector $c$, the constraints on the solution, e.g. smoothness and boundary conditions, take the form
$R c = G$
for a suitable vector $G$ and a suitable matrix $R$. Here $h$ is a discretization parameter which controls the size of the elements in the partition.
The condition $a(u_h; v) = \langle L, v \rangle$ for all $v \in W_h$ translates to
$K(c) = F$
for a suitable mapping $K$ which depends on $a$, where $F$ is a vector of coefficients associated to the linear form $L$. If for example $\langle L, v \rangle = \int_{\Omega} f v \, dx$, then $F = M \bar{f}$, where $M$ is a mass matrix and $\bar{f}$ a vector of coefficients associated to the spline interpolant of $f$. In the linear case $K(c)$ can be written $A c$.
Introducing a Lagrange multiplier $\lambda$, the functional
$\lambda^T (R c - G)$
vanishes identically on the constraint set $\{ c : R c = G \}$. The stronger condition
$K(c) + R^T \lambda = F$
along with the side condition $R c = G$ are the discrete equations to be solved.
By a slight abuse of notation, after linearization by Newton's method, the above nonlinear equation leads to solving systems of the type
$A c + R^T \lambda = F, \quad R c = G.$
The approximation of $u$ is thus a limit of a sequence of solutions of systems of this type.
It is therefore enough to consider the linear case. If we assume for simplicity that $V_h = W_h$ and that the form $a$ is bilinear, symmetric, continuous and elliptic, existence of a discrete solution follows from the Lax-Milgram lemma. On the other hand, ellipticity assures uniqueness of the component $c$, which can be retrieved by a least squares solution of the above system [4]. The Lagrange multiplier $\lambda$ may not be unique. To avoid systems of large size, a variant of the augmented Lagrangian algorithm is used. For this, we consider the sequence of problems
(2.2) $\Big(A + \frac{1}{\epsilon} R^T R\Big) c^{(n+1)} = F + R^T \Big(\frac{1}{\epsilon} G - \lambda^{(n)}\Big), \quad \lambda^{(n+1)} = \lambda^{(n)} + \frac{1}{\epsilon} \big(R c^{(n+1)} - G\big),$
where $\lambda^{(0)}$ is a suitable initial guess, for example $\lambda^{(0)} = 0$, and $\epsilon$ is a small parameter. It is possible to solve for $c^{(n+1)}$ in terms of $\lambda^{(n)}$. A uniform convergence rate in $\epsilon$ for this algorithm was shown in [5].
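A minimal sketch of this augmented Lagrangian strategy on a toy equality-constrained quadratic problem; the random matrices stand in for the stiffness matrix, load vector and constraint data $R c = G$, and the sizes and penalty parameter are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the spline element system: an SPD "stiffness" matrix A,
# a load vector F, and constraint equations R c = G (e.g. smoothness conditions).
n, m = 12, 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite
F = rng.standard_normal(n)
R = rng.standard_normal((m, n))      # full-rank constraint matrix
G = rng.standard_normal(m)

eps = 1e-6
lam = np.zeros(m)                    # Lagrange multiplier initial guess
M = A + (R.T @ R) / eps              # augmented matrix, can be factored once
for _ in range(5):
    # Solve for c in terms of the current multiplier, then update the multiplier.
    c = np.linalg.solve(M, F + R.T @ (G / eps - lam))
    lam = lam + (R @ c - G) / eps

# Reference: solve the full KKT (saddle point) system directly.
K = np.block([[A, R.T], [R, np.zeros((m, m))]])
ref = np.linalg.solve(K, np.concatenate([F, G]))[:n]
err = np.max(np.abs(c - ref))        # distance to the saddle point solution
viol = np.max(np.abs(R @ c - G))     # constraint violation
```

The iteration avoids factoring the larger indefinite saddle point matrix; only the smaller positive definite augmented matrix is solved repeatedly.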
3. Variational formulations
The BFO iterative method for solving (1.1) has a clear variational formulation as it consists in solving a sequence of Poisson problems:
(3.1) $\Delta u_{m+1} = \sqrt{(\Delta u_m)^2 + 2 (f - \det D^2 u_m)}$
for the two dimensional case [16] and extended here to three dimensions
(3.2) 
with $u_{m+1} = g$ on $\partial \Omega$. Since
(3.3) $(\Delta u)^2 = |D^2 u|^2 + 2 \det D^2 u$ in two dimensions,
the above formula enforces partial convexity, since $\Delta u \geq 0$ is a necessary condition for the Hessian of $u$ to be positive semidefinite. We note that the constant 2 in (3.1) may be changed to 4. We next discuss Newton's method and the vanishing moment method. The use of Newton's method for proving existence of a solution of (1.1) appeared in [10], combined with a method of continuity argument, and more recently for approximation with a direct approach by finite differences in [45] for a Monge-Ampère equation on the torus. We will extend the proof of convergence of Newton's method of [45] in Hölder spaces on bounded smooth domains. We then characterize the Newton iterates as solutions of variational problems, and a solution of (1.1) is also shown to be characterized by a variational formulation for which we derive error estimates for finite element approximations.
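To make the iteration concrete, here is a finite difference sketch of the two dimensional BFO iteration on the unit square. The smooth convex test solution $u(x,y) = e^{(x^2+y^2)/2}$, with $\det D^2 u = (1 + x^2 + y^2)e^{x^2+y^2}$, is an illustrative choice, not necessarily one of the paper's tests; each step solves a discrete Poisson problem with Dirichlet data taken from the exact solution:

```python
import numpy as np

def laplacian_matrix(n, h):
    # Standard 5-point Laplacian on an n x n interior grid (zero Dirichlet, row-major).
    T = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    I = np.eye(n)
    return (np.kron(T, I) + np.kron(I, T)) / h**2

def solve_poisson(rhs, Ufull, A, h):
    # Solve Lap u = rhs on the interior, with u equal to Ufull on the boundary.
    W = Ufull.copy()
    W[1:-1, 1:-1] = 0.0
    lapW = (W[2:, 1:-1] + W[:-2, 1:-1] + W[1:-1, 2:] + W[1:-1, :-2]
            - 4.0 * W[1:-1, 1:-1]) / h**2
    n = rhs.shape[0]
    out = Ufull.copy()
    out[1:-1, 1:-1] = np.linalg.solve(A, (rhs - lapW).ravel()).reshape(n, n)
    return out

n = 24
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.exp((X**2 + Y**2) / 2.0)                 # illustrative smooth convex solution
F = (1.0 + X**2 + Y**2) * np.exp(X**2 + Y**2)       # its Monge-Ampere right hand side
A = laplacian_matrix(n, h)

U = solve_poisson(2.0 * np.sqrt(F[1:-1, 1:-1]), exact, A, h)   # Poisson initial guess
for it in range(100):
    Uxx = (U[2:, 1:-1] - 2 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / h**2
    Uyy = (U[1:-1, 2:] - 2 * U[1:-1, 1:-1] + U[1:-1, :-2]) / h**2
    Uxy = (U[2:, 2:] - U[2:, :-2] - U[:-2, 2:] + U[:-2, :-2]) / (4 * h**2)
    detD2 = Uxx * Uyy - Uxy**2
    # (3.1): Lap u_{m+1} = sqrt((Lap u_m)^2 + 2 (f - det D^2 u_m)), clipped at 0
    arg = np.maximum((Uxx + Uyy)**2 + 2.0 * (F[1:-1, 1:-1] - detD2), 0.0)
    Unew = solve_poisson(np.sqrt(arg), exact, A, h)
    change = np.max(np.abs(Unew - U))
    U = Unew
    if change < 1e-10:
        break

err = np.max(np.abs(U - exact))
```

Each pass is a single linear solve with the same Laplacian matrix, which is what makes the iteration attractive: any fast Poisson solver can be reused unchanged.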
3.1. Newton’s method
We denote by $C^k(\Omega)$ the set of all functions having all derivatives of order $\leq k$ continuous on $\Omega$, where $k$ is a nonnegative integer or infinity, and by $C^k(\overline{\Omega})$ the set of all functions in $C^k(\Omega)$ whose derivatives of order $\leq k$ have continuous extensions to $\overline{\Omega}$. The norm in $C^k(\overline{\Omega})$ is given by
$\|u\|_{C^k(\overline{\Omega})} = \max_{|\beta| \leq k} \sup_{\Omega} |D^{\beta} u|.$
A function $u$ is said to be uniformly Hölder continuous with exponent $\alpha$, $0 < \alpha \leq 1$, in $\Omega$ if the quantity
$[u]_{\alpha} = \sup_{x \neq y, \, x, y \in \Omega} \frac{|u(x) - u(y)|}{|x - y|^{\alpha}}$
is finite. The space $C^{k,\alpha}(\overline{\Omega})$ consists of functions whose $k$-th order derivatives are uniformly Hölder continuous with exponent $\alpha$ in $\Omega$. It is a Banach space with norm
$\|u\|_{C^{k,\alpha}(\overline{\Omega})} = \|u\|_{C^k(\overline{\Omega})} + \max_{|\beta| = k} [D^{\beta} u]_{\alpha}.$
Next note that for any $n \times n$ matrix $A$, $\det'(A) B = (\operatorname{cof} A) : B$, where $\operatorname{cof} A$ is the matrix of cofactors of $A$ and, for two matrices $A$ and $B$, $A : B = \sum_{i,j} A_{ij} B_{ij}$. For any sufficiently smooth symmetric matrix field $A$ and vector field $v$, $\operatorname{div}(A v) = (\operatorname{div} A) \cdot v + A : \nabla v$. Here the divergence of a matrix field is the divergence operator applied rowwise. If we put $A = \operatorname{cof} D^2 u$ and $v = \nabla w$, then $\operatorname{div}((\operatorname{cof} D^2 u) \nabla w) = (\operatorname{div} \operatorname{cof} D^2 u) \cdot \nabla w + (\operatorname{cof} D^2 u) : D^2 w$. But $\operatorname{div} \operatorname{cof} D^2 u = 0$, c.f. for example [31] p. 440. Hence, since $\operatorname{cof} D^2 u$ and $D^2 w$ are symmetric matrices,
(3.4) $(\operatorname{cof} D^2 u) : D^2 w = \operatorname{div}((\operatorname{cof} D^2 u) \nabla w).$
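The identity $\det'(A) B = (\operatorname{cof} A) : B$, together with the homogeneity relation $(\operatorname{cof} A) : A = n \det A$, can be checked numerically. The sketch below (an added illustration with arbitrary random matrices) compares a central difference of $t \mapsto \det(A + tB)$ against the cofactor formula:

```python
import numpy as np

rng = np.random.default_rng(1)

def cof(A):
    # Matrix of cofactors: for invertible A, cof(A) = det(A) * inv(A).T
    return np.linalg.det(A) * np.linalg.inv(A).T

A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # well conditioned test matrix
B = rng.standard_normal((3, 3))

t = 1e-6
fd = (np.linalg.det(A + t * B) - np.linalg.det(A - t * B)) / (2.0 * t)
exact = np.sum(cof(A) * B)                          # Frobenius product cof(A) : B
rel_err = abs(fd - exact) / (1.0 + abs(exact))

# Homogeneity of degree n of the determinant: cof(A) : A = n det(A), here n = 3.
homog = abs(np.sum(cof(A) * A) - 3.0 * np.linalg.det(A))
```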
Put $T(u) = \det D^2 u$. The operator $T$ maps $C^{2,\alpha}(\overline{\Omega})$ into $C^{0,\alpha}(\overline{\Omega})$. This can be seen from the properties of the Hölder norm of a product [22], p. 18. We have $T'(u) v = (\operatorname{cof} D^2 u) : D^2 v$ for $u$ sufficiently smooth. The Newton iterates can then equivalently be written
$(\operatorname{cof} D^2 u_k) : D^2 u_{k+1} = f + (n-1) \det D^2 u_k, \quad \text{or} \quad \operatorname{div}((\operatorname{cof} D^2 u_k) \nabla u_{k+1}) = f + (n-1) \det D^2 u_k.$
We will use the last expression as it requires less regularity on $u_k$. More precisely, we will consider the following damped version of Newton's method: given an initial guess $u_0$, we consider the sequence $(u_k)$ defined by
(3.5) $\operatorname{div}((\operatorname{cof} D^2 u_k) \nabla (u_{k+1} - u_k)) = \tau (f - \det D^2 u_k)$ in $\Omega$, $\quad u_{k+1} = g$ on $\partial \Omega$,
for a damping parameter $\tau \in (0, 1]$. Our numerical results however use only $\tau = 1$.
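For illustration, here is a finite difference sketch of the undamped Newton iteration in two dimensions. Newton's method for $\det D^2 u = f$ solves at each step the linearized equation $\operatorname{cof}(D^2 u_k) : D^2 u_{k+1} = f + \det D^2 u_k$ (in two dimensions). The smooth convex test solution and the Poisson initial guess below are illustrative choices, not data from the paper:

```python
import numpy as np

n = 24
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.exp((X**2 + Y**2) / 2.0)                  # illustrative smooth convex solution
F = (1.0 + X**2 + Y**2) * np.exp(X**2 + Y**2)        # its Monge-Ampere right hand side

I = np.eye(n)
T = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
S = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
Dxx, Dyy = np.kron(T, I) / h**2, np.kron(I, T) / h**2    # interior, zero Dirichlet part
Dxy = np.kron(S, S) / (4.0 * h**2)

def d2(U):
    # Second order finite differences of a full grid function at interior nodes.
    Uxx = (U[2:, 1:-1] - 2.0 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / h**2
    Uyy = (U[1:-1, 2:] - 2.0 * U[1:-1, 1:-1] + U[1:-1, :-2]) / h**2
    Uxy = (U[2:, 2:] - U[2:, :-2] - U[:-2, 2:] + U[:-2, :-2]) / (4.0 * h**2)
    return Uxx, Uyy, Uxy

G = exact.copy(); G[1:-1, 1:-1] = 0.0                # boundary data, zero inside
Gxx, Gyy, Gxy = d2(G)                                # boundary contributions to the stencils

# Initial guess: Poisson problem Lap u0 = 2 sqrt(f), u0 = g on the boundary.
U = exact.copy()
U[1:-1, 1:-1] = np.linalg.solve(
    Dxx + Dyy, (2.0 * np.sqrt(F[1:-1, 1:-1]) - Gxx - Gyy).ravel()).reshape(n, n)

for it in range(15):
    Uxx, Uyy, Uxy = d2(U)
    detD2 = Uxx * Uyy - Uxy**2
    # Newton step: cof(D^2 u_k) : D^2 u_{k+1} = f + det D^2 u_k  (two dimensions)
    Ak = (Uyy.ravel()[:, None] * Dxx + Uxx.ravel()[:, None] * Dyy
          - 2.0 * Uxy.ravel()[:, None] * Dxy)
    bk = (F[1:-1, 1:-1] + detD2) - (Uyy * Gxx + Uxx * Gyy - 2.0 * Uxy * Gxy)
    Unew = U.copy()
    Unew[1:-1, 1:-1] = np.linalg.solve(Ak, bk.ravel()).reshape(n, n)
    change = np.max(np.abs(Unew - U))
    U = Unew
    if change < 1e-10:
        break

err = np.max(np.abs(U - exact))
```

Unlike the BFO iteration, each step here requires assembling and solving a linear system with coefficients frozen at the current iterate, in exchange for much faster convergence near a smooth solution.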
We recall that the domain $\Omega$ is uniformly convex [40] if there exists a number $R > 0$ such that through every point $x_0 \in \partial \Omega$ there passes a supporting hyperplane $P$ of $\partial \Omega$ satisfying $\operatorname{dist}(x, P) \geq R |x - x_0|^2$ for $x \in \partial \Omega$. We will need the following global regularity result, [56].
Theorem 3.1.
Let $\Omega$ be a uniformly convex domain in $\mathbb{R}^n$ with boundary $\partial \Omega \in C^3$. Suppose $f \in C^{\alpha}(\overline{\Omega})$ with $\inf_{\Omega} f > 0$ and $g \in C^3(\partial \Omega)$, for some $0 < \alpha < 1$. Then (1.1) has a convex solution $u \in C^{2,\alpha}(\overline{\Omega})$ which satisfies the a priori estimate
$\|u\|_{C^{2,\alpha}(\overline{\Omega})} \leq C,$
where $C$ depends only on $\Omega$, $f$ and $g$.
According to [56], all assumptions in the above theorem are sharp. We have the following analogue of Theorem 2.1 in [45].
Theorem 3.2.
Proof.
The proof essentially depends on global Hölder regularity of the solution of the Monge-Ampère equation. Theorem 3.1 gives the conditions under which such a regularity result holds on a bounded domain. Part of the proof has been more or less repeated in [35]. We note that the iterates may converge to either a concave or a convex solution if both exist. ∎
3.2. Variational formulation
Using the divergence form of (3.5), the iterates can be characterized as solutions of the following problems: find $u_{k+1}$ with $u_{k+1} = g$ on $\partial \Omega$ and
(3.6) $\int_{\Omega} (\operatorname{cof} D^2 u_k) \nabla u_{k+1} \cdot \nabla v \, dx = -\int_{\Omega} \big(f + (n-1) \det D^2 u_k\big) v \, dx \quad \text{for all } v \in H^1_0(\Omega).$
With an initial guess $u_0$ which solves $\Delta u_0 = 2 \sqrt{f}$, $u_0 = g$ on $\partial \Omega$, we have a sequence of uniformly elliptic problems (see the proof of Theorem 2.1 in [45]) and the problems (3.5) and (3.6) are equivalent. We then know that the iterates converge to a solution of (1.1) which solves the formal variational limit of (3.6): find $u$ with $u = g$ on $\partial \Omega$ such that
(3.7) $\int_{\Omega} (\operatorname{cof} D^2 u) \nabla u \cdot \nabla v \, dx = -n \int_{\Omega} f v \, dx \quad \text{for all } v \in H^1_0(\Omega).$
The problem (3.7) is not well posed in general. For example if $g = 0$, then both $u$ and $-u$ are solutions.
To see that the left hand side is bounded in two dimensions, notice that for $n = 2$ the entries of $\operatorname{cof} D^2 u$ are second order derivatives of $u$. For $u, v \in H^2(\Omega)$, by the compactness of the embedding of $H^2(\Omega)$ in $W^{1,4}(\Omega)$ when $n = 2$, the left hand side is bounded by $C \|u\|^2_{H^2(\Omega)} \|v\|_{H^2(\Omega)}$. However, in three dimensions the term $\operatorname{cof} D^2 u$ involves products of two second order derivatives, and for the left hand side to be bounded more regularity than $H^2(\Omega)$ is needed. We give below error estimates only for the two dimensional case, using techniques borrowed from [32]. We note that in the spline element method, it is possible to impose the $C^1$ continuity requirements for a conforming finite element subspace of $H^2(\Omega)$. However, for a smooth solution, $C^0$ continuity was enough for numerical evidence of convergence.
3.3. Error estimate for 2D conforming approximation of a smooth solution
In this section, we will assume that $\Omega$ is a two dimensional domain. Let $V_h$ be a conforming finite element subspace of $H^2(\Omega)$ and $W_h$ a conforming finite element subspace of $H^1_0(\Omega)$, with approximation properties
(3.8)
for all sufficiently smooth $u$ and for some $l > 0$.
For example, in this paper, we take $V_h$ as the spline space of degree $d$ and smoothness $r$,
$S^r_d(\mathcal{T}) = \{ p \in C^r(\Omega) : p|_t \in P_d \ \forall t \in \mathcal{T} \},$
where $P_d$ denotes the space of polynomials of degree $d$ in two variables and $\mathcal{T}$ denotes the triangulation of the domain $\Omega$. It is known [43] that, for $d \geq 3r + 2$ and $0 \leq k \leq d$, there exists a linear quasi-interpolation operator $Q$ mapping $L_1(\Omega)$ into the spline space and a constant $C$ such that if $u$ is in the Sobolev space $H^{k+1}(\Omega)$,
(3.9) $\|u - Q u\|_{H^m(\Omega)} \leq C h^{k+1-m} |u|_{H^{k+1}(\Omega)}$
for $0 \leq m \leq k$. For our purposes, $r = 1$ and $d \geq 5$. If $\Omega$ is convex, the constant $C$ depends only on $d$ and on the smallest angle in $\mathcal{T}$. In the nonconvex case, $C$ depends only on the Lipschitz constant associated with the boundary of $\Omega$. It is also known, c.f. [30], that the full approximation property for spline spaces holds for certain combinations of $d$ and $r$ on special triangulations. We have the following theorem.
Theorem 3.3.
Assume that problem (1.1) has a sufficiently smooth solution $u$, hence $u \in W^{2,\infty}(\Omega)$ by Sobolev embedding. Then the discrete problem: find $u_h \in V_h$ with $u_h = g_h$ on $\partial \Omega$ such that
(3.10)
has a unique solution in a neighborhood of $I_h u$, where $I_h$ is an interpolation operator associated with $V_h$. Moreover, the convergence rate is at least O().
The proof of the above theorem follows the combined fixed point and linearization method used in [32]. The difference here is the assumption that the solution is smooth and the use of an inverse inequality. We start with some preliminaries.
We consider the linear problem: Find such that
(3.11) 
for .
The associated bilinear form is continuous and coercive, so the existence and uniqueness of a solution to (3.11) follows immediately from the Lax-Milgram lemma. Similarly, there exists a unique solution to the problem: find such that
(3.12) 
We note that the constant above depends on the solution, and that since $\Omega$ is assumed convex, elliptic regularity applies.
We define a bilinear form on by
(3.13) 
and for any , on , we define by
(3.14) 
The solutions of the above problems exist. A fixed point of the nonlinear operator corresponds to a solution of (3.10) and conversely, if a function is a solution of (3.10), then it is a fixed point of the operator. We will show that the operator has a unique fixed point in a neighborhood of the interpolant of $u$. Put
First, we note that
(3.15) 
Lemma 3.4.
There exists such that
Proof.
Put . We have using ,
Now, let be a mollifier of . We have
since as . We conclude that
using the coercivity of the bilinear form, with a constant depending on the data, and an inverse estimate. ∎
Lemma 3.5.
There exists and such that is a contraction mapping in the ball with a contraction factor 1/2.
Proof.
For , and , let and denote mollifiers for and respectively.
where as . We have for some ,
where as . Put . We have
As , we obtain
By coercivity and an inverse estimate,
First choose such that then . The result follows ∎
Proof.
(of Theorem 3.3) Let . We first show that maps into itself. For ,
By the contraction mapping principle, the operator has a unique fixed point in the ball, i.e. a solution of (3.10). Next,
for sufficiently small. We have the result. ∎
3.4. Vanishing moment methodology
The vanishing moment methodology approach to (1.1) consists in computing a solution of the singular perturbation problem
(3.16) $-\epsilon \Delta^2 u^{\epsilon} + \det D^2 u^{\epsilon} = f$ in $\Omega$, $\quad u^{\epsilon} = g$, $\ \Delta u^{\epsilon} = \epsilon$ on $\partial \Omega$.
It is an analogue of the singular perturbation problem
$-\epsilon \Delta^2 u^{\epsilon} + \Delta u^{\epsilon} = f,$
which was addressed numerically in [47] and also with the spline element method in [6]. The analogy holds as many properties of the Laplace operator have a counterpart for the Monge-Ampère operator.
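The effect of the regularization can already be seen on a one dimensional analogue of the Laplacian model problem above: $-\epsilon u'''' + u'' = f$ with $u$ prescribed at the endpoints and the extra boundary condition $u'' = \epsilon$ there. This is an illustrative 1D sketch, not the paper's formulation; substituting $v = u''$ splits it into two tridiagonal solves:

```python
import numpy as np

def solve_analogue(eps, n, f, a, b):
    # 1D analogue of the vanishing moment regularization:
    #   -eps u'''' + u'' = f on (0,1),  u(0) = a, u(1) = b,  u''(0) = u''(1) = eps.
    # With v = u'' this splits into two tridiagonal problems:
    #   -eps v'' + v = f, v(0) = v(1) = eps, followed by u'' = v.
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    rhs_v = f(x[1:-1]).copy()
    rhs_v[0] += eps * eps / h**2     # known boundary value v(0) = eps
    rhs_v[-1] += eps * eps / h**2    # known boundary value v(1) = eps
    v = np.linalg.solve(-eps * T + np.eye(n), rhs_v)
    rhs_u = v.copy()
    rhs_u[0] -= a / h**2             # known boundary value u(0) = a
    rhs_u[-1] -= b / h**2            # known boundary value u(1) = b
    u = np.linalg.solve(T, rhs_u)
    return x[1:-1], u

# Reduced problem u'' = f with exact solution u(x) = exp(x), so f = exp.
f = np.exp
xs, u_coarse = solve_analogue(1e-2, 200, f, 1.0, np.e)
xs, u_fine = solve_analogue(1e-7, 200, f, 1.0, np.e)
err_coarse = np.max(np.abs(u_coarse - np.exp(xs)))
err_fine = np.max(np.abs(u_fine - np.exp(xs)))
```

As the penalization parameter decreases, the computed solution approaches the solution of the reduced second order problem, mirroring the formal limit described above.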
The Newton iterates in the vanishing moment formulation consist in solving the sequence of problems: find $u_{k+1}$ with $u_{k+1} = g$ and $\Delta u_{k+1} = \epsilon$ on $\partial \Omega$ such that
(3.17) $-\epsilon \Delta^2 u_{k+1} + (\operatorname{cof} D^2 u_k) : D^2 u_{k+1} = f + (n-1) \det D^2 u_k.$
Formally, as $\epsilon$ approaches 0, the solution of the above problem degenerates to the solution of (3.6), a result which will be illustrated numerically in the next section.
4. Numerical results
The first two subsections are devoted to two dimensional and three dimensional numerical results respectively. The three methods are compared on three test functions for 2D experiments.
Test 1: A smooth solution so that and on .
Test 2: A solution not in , so that and on .
Test 3: No exact solution is known. Solutions are either convex or concave. Here and .
We give numerical evidence of the robustness of the spline element method for the singular perturbation problem associated with the vanishing moment methodology. Formally, as $\epsilon$ approaches 0, the problem (3.16) degenerates to the problem (1.1), which can be solved by Newton's method when a smooth solution exists. We show here numerically that the solution of (3.17) converges to that of (3.6) as $\epsilon$ approaches 0. Unlike [34], there are no boundary layer issues with the spline element method. These results are illustrated in Tables 3 and 9.
In general, the vanishing moment method (with Newton) gives the same results as Newton's method for $\epsilon$ small. In particular, Newton's method diverges for the nonsmooth solutions of Tests 2 and 3. However, with $\epsilon$ large, which implies that the equation solved is much further from the actual problem, divergence can be avoided in the vanishing moment methodology. We refer to the problem (1.1) as the reduced problem in the tables.
In the two dimensional case of Test 3, both concave and convex solutions can be computed by either changing the initial guess or the structure of the approximations.

Newton’s method: initial guess ,

Vanishing moment: and initial guess ,

BFO iterative method: .
Since Newton’s method diverges for Test 3, we illustrate this feature of the method on a fourth test on a nonsquare domain. This also helps contrast with finite difference methods.
Test 4: The domain is the unit disc, discretized with a Delaunay triangulation with 824 triangles, and the test functions are (convex) and (concave) with the corresponding right hand sides and boundary conditions.
Since none of the methods performs convincingly on Test 2 in the spline element framework, the methods are tested in the three dimensional case on two other test functions, analogues of Test 1 and Test 3. We only consider the performance of the BFO and vanishing moment methods.
Test 5: so that and on .
Test 6: and .
The initial guess for the Newton iterations is the solution of the Poisson equation.
We also illustrate the performance of the 3D BFO method on a degenerate Monge-Ampère equation.
Test 7: and