Risk measures are a fundamental concept in finance and in the insurance industry, used for example to adjust life insurance rates. In this article, we study dynamic risk measures by means of backward stochastic Volterra integral equations (BSVIEs) with jumps. We prove a comparison theorem for this type of equation. Since the solution of a BSVIE is not a semimartingale in general, we also discuss some related semimartingale issues.
We are interested in Pontryagin’s stochastic maximum principle for controlled McKean–Vlasov stochastic differential equations. We allow the law to be anticipating, in the sense that the coefficients (the drift and the diffusion coefficient) depend not only on the solution at the current time t, but also on the law P_{X(t+δ)} of the future value of the solution, for a given positive constant δ. We emphasise that being anticipating with respect to the law of the solution process does not mean being anticipative in the sense of anticipating the driving Brownian motion. As an adjoint equation, a new type of delayed backward stochastic differential equation (BSDE) with implicit terminal condition is obtained. Using the fact that the expectation of any random variable is a function of its law, our BSDE can be written in a simple form. We then prove existence and uniqueness of the solution of the delayed BSDE with implicit terminal value, i.e. with terminal value being a function of the law of the solution itself.
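To make the mean-field mechanism concrete, here is a minimal, hedged sketch (not from the paper) of the standard interacting-particle Euler scheme for a classical, non-anticipating McKean–Vlasov SDE, in which the law enters only through its mean and is approximated by the empirical mean of N particles. The drift and all coefficient values below are arbitrary illustrative choices; the anticipating case with P_{X(t+δ)} would additionally require a fixed-point iteration over the flow of future laws, which this sketch does not attempt.

```python
import numpy as np

def simulate_mckean_vlasov(N=10_000, T=1.0, n_steps=200,
                           x0=1.0, sigma=0.3, seed=0):
    """Particle approximation of dX(t) = b(X(t), E[X(t)]) dt + sigma dB(t),
    with the illustrative drift b(x, m) = -0.5 * (x - m)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(N, x0)
    for _ in range(n_steps):
        mean_X = X.mean()                     # empirical stand-in for E[X(t)]
        drift = -0.5 * (X - mean_X)           # mean-reverting toward the law's mean
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return X

X_T = simulate_mckean_vlasov()
```

Because the chosen drift reverts each particle toward the empirical mean, the population mean stays near x0 while the spread settles at an Ornstein–Uhlenbeck-type level.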
The purpose of this paper is to study the following topics and the relation between them: (i) Optimal singular control of mean-field stochastic differential equations with memory; (ii) reflected advanced mean-field backward stochastic differential equations; and (iii) optimal stopping of mean-field stochastic differential equations. More specifically, we do the following: (1) We prove the existence and uniqueness of the solutions of some reflected advanced memory backward stochastic differential equations; (2) we give sufficient and necessary conditions for an optimal singular control of a memory mean-field stochastic differential equation (MMSDE) with partial information; and (3) we deduce a relation between the optimal singular control of an MMSDE and the optimal stopping of such processes.
We study methods for solving stochastic control problems for systems of forward–backward mean-field equations with delay, in finite and infinite time horizon. Necessary and sufficient maximum principles under partial information are given. The results are applied to solve a mean-field recursive utility optimization problem.
We prove a maximum principle of optimal control of stochastic delay equations on infinite horizon. We establish first and second sufficient stochastic maximum principles as well as necessary conditions for that problem. We illustrate our results with an application to the optimal consumption rate from an economic quantity.
The purpose of this paper is two-fold: We extend the well-known relation between optimal stopping and randomized stopping of a given stochastic process to a situation where the available information flow is a sub-filtration of the filtration of the process. We call these problems optimal stopping and randomized stopping with partial information. Following an idea of Krylov [K] we introduce a special singular stochastic control problem with partial information and show that this is also equivalent to the partial information optimal stopping and randomized stopping problems. Then we show that the solution of this singular control problem can be expressed in terms of (partial information) variational inequalities, which in turn can be rewritten as a reflected backward stochastic differential equation (RBSDE) with partial information.
We consider the problem of optimal singular control of a stochastic partial differential equation (SPDE) with space-mean dependence. Such systems are proposed as models for population growth in a random environment. We obtain sufficient and necessary maximum principles for such control problems. The corresponding adjoint equation is a reflected backward stochastic partial differential equation (BSPDE) with space-mean dependence. We prove existence and uniqueness results for such equations. As an application we study optimal harvesting from a population modelled as an SPDE with space-mean dependence.
In this paper we study mean-field backward stochastic differential equations (mean-field BSDEs) of the form
dY(t) = −f(t, Y(t), Z(t), K(t, ·), E[ϕ(Y(t), Z(t), K(t, ·))]) dt + Z(t) dB(t) + ∫_{R₀} K(t, ζ) Ñ(dt, dζ),
where B is a Brownian motion and Ñ is the compensated Poisson random measure. Under some mild conditions, we prove the existence and uniqueness of the solution triplet (Y, Z, K). It is commonly believed that there is no comparison theorem for general mean-field BSDEs. However, we prove a comparison theorem for a subclass of these equations. When the mean-field BSDE is linear, we give an explicit formula for the first component Y(t) of the solution triplet. Our results are applied to solve a mean-field recursive utility optimization problem in finance.
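For orientation, the classical benchmark behind such explicit formulas (stated here for the linear BSDE without mean-field term or jumps, not the paper's mean-field formula) is the following: for a driver f(t, y, z) = a_t y + b_t z + c_t and terminal value ξ, the first component of the solution is

```latex
Y(t) = E\!\left[\,\Gamma_{t,T}\,\xi + \int_t^T \Gamma_{t,s}\, c_s \, ds \;\middle|\; \mathcal{F}_t \right],
\qquad
\Gamma_{t,s} = \exp\!\left(\int_t^s \Big(a_u - \tfrac{1}{2} b_u^2\Big)\, du + \int_t^s b_u \, dB(u)\right),
```

obtained by applying Itô's formula to Γ_{t,s} Y(s), which turns the drift into −Γ_{t,s} c_s ds plus a martingale term and then taking conditional expectations.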
The purpose of this paper is to study stochastic control problems for systems driven by mean-field stochastic differential equations with elephant memory, in the sense that the system (like the elephants) never forgets its history. We study both the finite horizon case and the infinite time horizon case. In the finite horizon case, results about existence and uniqueness of solutions of such a system are given. Moreover, we prove sufficient as well as necessary stochastic maximum principles for the optimal control of such systems. We apply our results to solve a mean-field linear quadratic control problem. For infinite horizon, we derive sufficient and necessary maximum principles. As an illustration, we solve an optimal consumption problem from a cash flow modelled by an elephant memory mean-field system.
We consider the problem of optimal control of a mean-field stochastic differential equation (SDE) under model uncertainty. The model uncertainty is represented by ambiguity about the law L(X(t)) of the state X(t) at time t. For example, it could be the law L_P(X(t)) of X(t) with respect to the given, underlying probability measure P. This is the classical case when there is no model uncertainty. But it could also be the law L_Q(X(t)) with respect to some other probability measure Q or, more generally, any random measure μ(t) on R with total mass 1. We represent this model uncertainty control problem as a stochastic differential game of a mean-field related type SDE with two players. The control of one of the players, representing the uncertainty of the law of the state, is a measure-valued stochastic process μ(t), and the control of the other player is a classical real-valued stochastic process u(t). This optimal control problem with respect to random probability processes μ(t) in a non-Markovian setting is a new type of stochastic control problem that has not been studied before. By constructing a new Hilbert space M of measures, we obtain sufficient and necessary maximum principles for Nash equilibria for such games in the general nonzero-sum case, and for saddle points in zero-sum games. As an application we find an explicit solution of the problem of optimal consumption under model uncertainty of a cash flow described by a mean-field related type SDE.
The classical maximum principle for optimal stochastic control states that if a control û is optimal, then the corresponding Hamiltonian has a maximum at u = û. The first proofs for this result assumed that the control did not enter the diffusion coefficient. Moreover, it was assumed that there were no jumps in the system. Subsequently, it was discovered by Shige Peng (still assuming no jumps) that one could also allow the diffusion coefficient to depend on the control, provided that the corresponding adjoint backward stochastic differential equation (BSDE) for the first-order derivative was extended to include an extra BSDE for the second-order derivatives. In this paper, we present an alternative approach based on Hida-Malliavin calculus and white noise theory. This enables us to handle the general case with jumps, allowing both the diffusion coefficient and the jump coefficient to depend on the control, and we do not need the extra BSDE with second-order derivatives. The result is illustrated by an example of a constrained linear-quadratic optimal control.
We consider a problem of optimal control of an infinite horizon system governed by forward–backward stochastic differential equations with delay. Sufficient and necessary maximum principles for optimal control under partial information in infinite horizon are derived. We illustrate our results by an application to a problem of optimal consumption with respect to recursive utility from a cash flow with delay.
Solutions of stochastic Volterra (integral) equations are not Markov processes, and therefore, classical methods, such as dynamic programming, cannot be used to study optimal control problems for such equations. However, we show that using Malliavin calculus, it is possible to formulate modified functional types of maximum principle suitable for such systems. This principle also applies to situations where the controller has only partial information available to base her decisions upon. We present both a Mangasarian sufficient condition and a Pontryagin-type maximum principle of this type, and then, we use the results to study some specific examples. In particular, we solve an optimal portfolio problem in a financial market model with memory.
By a memory mean-field process we mean the solution X(·) of a stochastic mean-field equation involving not just the current state X(t) and its law L(X(t)) at time t, but also the state values X(s) and their laws L(X(s)) at some previous times s < t. Our purpose is to study stochastic control problems for memory mean-field processes. We consider the space M of measures on R with the norm ||·||_M introduced by Agram and Øksendal (Model uncertainty stochastic mean-field control. arXiv:1611.01385v5, [2]), and prove the existence and uniqueness of solutions of memory mean-field stochastic functional differential equations. We prove two stochastic maximum principles, one sufficient (a verification theorem) and one necessary, both under partial information. The corresponding equations for the adjoint variables are a pair of time-advanced backward stochastic differential equations (ABSDEs), one of them with values in the space of bounded linear functionals on path segment spaces. As an application of our methods, we solve a memory mean–variance problem as well as a linear–quadratic problem for a memory process.
We study optimal control of stochastic Volterra integral equations (SVIEs) with jumps by using Hida-Malliavin calculus.
• We give conditions under which there exist unique solutions of such equations.
• Then we prove both a sufficient maximum principle (a verification theorem) and a necessary maximum principle via Hida-Malliavin calculus.
• As an application we solve a problem of optimal consumption from a cash flow modelled by an SVIE.
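As a hedged illustration of the kind of dynamics involved (not taken from the paper), here is a left-point Euler discretisation of a continuous SVIE of the form X(t) = x0 + ∫₀ᵗ b(t,s,X(s)) ds + ∫₀ᵗ σ(t,s,X(s)) dB(s); the exponentially fading kernel and all parameter values are arbitrary choices mimicking a cash flow with memory. Jumps are omitted for brevity.

```python
import numpy as np

def simulate_svie(x0=1.0, T=1.0, n=200, lam=2.0, mu=0.05, vol=0.2, seed=1):
    """Euler scheme for X(t) = x0 + int_0^t K(t,s) mu X(s) ds
    + int_0^t K(t,s) vol X(s) dB(s), with kernel K(t,s) = exp(-lam*(t-s))."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = np.sqrt(dt) * rng.standard_normal(n)
    t = np.linspace(0.0, T, n + 1)
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(1, n + 1):
        j = np.arange(i)                    # past grid points s_j < t_i
        K = np.exp(-lam * (t[i] - t[j]))    # memory kernel K(t_i, s_j)
        X[i] = (x0
                + np.sum(K * mu * X[j]) * dt        # drift integral
                + np.sum(K * vol * X[j] * dB[:i]))  # stochastic integral
    return t, X

t, X = simulate_svie()
```

Note that X(t) re-weights its entire past through the kernel K(t, s), so the solution is not a Markov process; this is precisely why maximum-principle methods based on Hida-Malliavin calculus, rather than dynamic programming, are used for such systems.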
In this paper, we are interested in advanced backward stochastic differential equations (ABSDEs) in a probability space equipped with a Brownian motion and a single jump process with a jump at time τ. ABSDEs are BSDEs whose driver depends on the future paths of the solution. We show that, under the immersion hypothesis between the Brownian filtration and its progressive enlargement with τ, assuming that the conditional law of τ is equivalent to the unconditional law of τ, and under a Lipschitz condition on the driver, the ABSDE has a solution.