Abstract
We consider a class of optimal advertising problems under uncertainty for the introduction of a new product into the market, along the lines of the seminal papers of Vidale and Wolfe (Oper Res 5:370–381, 1957) and Nerlove and Arrow (Economica 29:129–142, 1962). The main features of our model are that, on one side, we assume a carryover effect (i.e. the advertising spending affects the goodwill with some delay); on the other side, we introduce, in the state equation and in the objective, mean field terms that take into account the presence of other agents. We take the point of view of a planner who optimizes the average profit of all agents; hence we fall into the family of so-called “Mean Field Control” problems. The simultaneous presence of the carryover effect makes the problem infinite dimensional, so it belongs to a family of problems which are very difficult in general and whose study started only very recently; see Cosso et al. (Ann Appl Probab 33(4):2863–2918, 2023). Here we consider, as a first step, a simple version of the problem, providing the solution in a special case through a suitable auxiliary problem.
1 Introduction
Since the seminal papers of [16, 18] on dynamic models in marketing, a considerable amount of work has been devoted to problems of optimal advertising, both in monopolistic and competitive settings, and both in deterministic and stochastic environments (see [6] for a review of the work up to the 1990s).
Various extensions of the basic setting of [16, 18] have been studied. For the stochastic case, we recall, among the various papers on the subject, [12, 14, 15, 17].
Our purpose here is to start exploring a family of models that combine two important features that may arise in such problems and that have not yet been satisfactorily treated in the current theory of optimal control.
On one side, we account, as in [7, 8], for the presence of delay effects, in particular the fact that the advertising spending affects the goodwill with some delay, the so-called carryover effect (see e.g. [6, 8, 13] and the references therein).
On the other side, and more crucially, we take into account the fact that the agents maximizing their profit/utility from advertising are embedded in an environment where other agents act, and where the actions of such other agents influence their own outcome (see e.g. [15] for a specific case of such a situation). To model such interaction among maximizing agents, one typically resorts to game theory. However, cases like this, where the number of agents can be quite large (in particular if we think of web advertising), are very difficult to treat in an N-agents game setting. A way to make such a problem tractable but still meaningful is to resort to what is called mean-field theory. The idea is the following: assume that the agents are homogeneous (i.e. displaying the same state equations and the same objective functionals) and let their number go to infinity. The resulting limit problem is in general more tractable, and, under certain conditions, its equilibria are good approximations of the N-agents game (see e.g. the book [2] for an extensive survey of the topic).
For the above reason, we think it is interesting, both from the mathematical and economic side, to consider the optimal advertising investment problem with delay of [7, 8] in the case when, in the state equation and in the objective, one adds a mean field term depending on the law of the state variable (the goodwill), which takes into account the presence of other agents.
There are two main ways of looking at the problem when such mean field terms are present. One (which falls into the class of Mean Field Games (MFG), see e.g. [2, Ch. 1], and which is not our goal here) is to look at the Nash equilibria where each agent takes the distribution of the state variables of the others as given. The other one, which we follow here, is to assume a cooperative game point of view: there is a planner that optimizes the average profit of each agent: this means that we fall into the family of the so-called “Mean Field Control” (MFC) problems (or “control of McKean–Vlasov dynamics”). We believe that both viewpoints are interesting from the economic side and challenging from the mathematical side. In particular, the one we adopt here (the Mean Field Control) can be seen as a benchmark (a first best) to compare, subsequently, with the non-cooperative Mean Field Game case, as is typically done in game theory (see e.g. [1]). It can also be seen as the case of a big selling company (who acts as the central planner), which has many shops in the territory whose local advertising policies interact.
The simultaneous presence of the carryover effect and of the “Mean Field Control” terms makes the problem belong to the family of infinite dimensional control of McKean–Vlasov dynamics: a family of problems that are very difficult in general and whose study started only very recently (see [3]).
Here we consider, as a first step, a simple version of the problem that displays a linear state equation, mean field terms depending only on the first moments, and an objective functional whose integrand (the running objective) is separated in the state and the control. We develop the infinite dimensional setting in this case. Moreover, we show that, in the special subcase when the running objective is linear in the state and quadratic in the control, we can solve the problem. This is done through the study of a suitable auxiliary problem whose HJB equation can be explicitly solved (see Sect. 4 below) and whose optimal feedback control can be found through an infinite dimensional Verification Theorem (see Sect. 4.3 below).
The paper is organized as follows.
-
In Sect. 2, we formulate the optimal advertising problem as an optimal control problem for stochastic delay differential equations with mean field terms and delay in the control. Moreover, using the fact that the mean field terms depend only on the first moments, we introduce an auxiliary problem without mean field terms but with a “mean” constraint on the control (see (2.13)).
-
In Sect. 3, the above “not mean field” auxiliary non-Markovian optimization problem is “lifted” to an infinite dimensional Markovian control problem, still with a “mean” constraint on the control (see (3.7)).
-
In Sect. 4, we show how to solve the original problem in the special case when the optimal controls of the original and auxiliary problems are deterministic. We explain the strategy in Sect. 4.1, proving Proposition 4.1. Then we consider a suitable Linear Quadratic (LQ) case. In Sect. 4.2, we solve the appropriate HJB equation, while, in Sect. 4.3, we find, through a verification theorem, the solution of the auxiliary LQ problem. Finally, in Sect. 4.4, we show that we can use Proposition 4.1 to also obtain the solution of the original LQ problem.
2 Formulation of the problem
We call X(t) the stock of advertising goodwill (at time \(t \in [0,T]\)) of a given product. We assume that the dynamics of \(X(\cdot )\) is given by the following controlled stochastic delay differential equation (SDDE), where u models the intensity of advertising spending:
where the Brownian motion W is defined on a filtered probability space \((\Omega ,\mathcal {F},\mathbb {F}=(\mathcal {F}_t)_{t\ge 0},\mathbb {P})\), with \((\Omega ,\mathcal {F},\mathbb {P})\) being complete, \(\mathbb {F}\) being the augmentation of the filtration generated by W, and where, for a given closed interval \(U\subset \mathbb {R}\), the control strategy u belongs to \(\mathcal {U}:=L^2_\mathcal {P}(\Omega \times [0,T];U)\), the space of U-valued square integrable progressively measurable processes. The last line in (2.1) is to be read as an extension of u to \([-d,T]\) by means of \(\delta \).
Here the control space and the state space are both equal to the set \(\mathbb {R}\) of real numbers (see footnote 1). Regarding the coefficients and the initial data, we assume that the following conditions hold:
Assumption 2.1
-
(i)
\(a_0,a_1\in \mathbb {R}\);
-
(ii)
\(b_0 \ge 0\);
-
(iii)
\(b_1(\cdot ) \in L^2([-d,0];\mathbb {R}^+)\);
-
(iv)
\(\delta (\cdot )\in L^2([-d,0];U)\).
Here \(a_0\) and \(a_1\) are constant factors reflecting the goodwill changes in the absence of advertising, \(b_0\) is a constant advertising effectiveness factor, and \(b_1(\cdot )\) is the density function of the time lag between the advertising expenditure u and the corresponding effect on the goodwill level. Moreover, x is the level of goodwill at the beginning of the advertising campaign, and \(\delta (\cdot )\) is the history of the advertising expenditure before time zero (one can assume \(\delta (\cdot )=0\), for instance).
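The displayed state equation (2.1) is not reproduced in this text. As a hedged illustration only, the following sketch assumes the linear mean-field dynamics with carryover suggested by the description above, namely \(dX(t) = \big(a_0 X(t) + a_1\mathbb{E}[X(t)] + b_0 u(t) + \int_{-d}^0 b_1(\xi)u(t+\xi)\,d\xi\big)dt + \sigma\,dW(t)\), with \(\sigma\) a hypothetical diffusion coefficient; the true display may differ.

```python
import numpy as np

# Hedged sketch only: the displayed equation (2.1) is not reproduced in the
# text. We assume linear mean-field dynamics with carryover,
#   dX(t) = (a0*X(t) + a1*E[X(t)] + b0*u(t)
#            + int_{-d}^0 b1(s) u(t+s) ds) dt + sigma dW(t),
# where sigma is a hypothetical diffusion coefficient and u is extended to
# [-d, 0) by the past spending delta(.).
def simulate_goodwill(x0, a0, a1, b0, b1, delta, u, sigma, d, T, n, rng=None):
    """Euler-Maruyama for one path. E[X(t)] is replaced by the path itself,
    which is exact along deterministic dynamics (sigma = 0); a Monte Carlo
    average over paths would be needed when sigma > 0 and a1 != 0."""
    dt = T / n
    lag = max(int(round(d / dt)), 1)              # number of delay steps
    grid_past = -d + dt * np.arange(lag)          # grid on [-d, 0)
    weights = np.array([b1(s) for s in grid_past]) * dt
    ctrl = np.concatenate([[delta(s) for s in grid_past],
                           [u(k * dt) for k in range(n + 1)]])
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        carry = float(weights @ ctrl[k:k + lag])  # carryover integral
        drift = (a0 + a1) * X[k] + b0 * ctrl[lag + k] + carry
        dW = sigma * np.sqrt(dt) * (rng.standard_normal() if rng else 0.0)
        X[k + 1] = X[k] + drift * dt + dW
    return X
```

With \(b_0=0\), \(b_1\equiv 0\) and \(\sigma=0\), the scheme reduces to explicit Euler for \(x'=(a_0+a_1)x\), so \(X(T)\approx x_0 e^{(a_0+a_1)T}\), which serves as a sanity check.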
Notice that under Assumption 2.1 there exists a unique strong solution to the following SDDE starting at time \(t\in [0,T)\):
We denote such a solution by \(X^{t,x,u}\). It belongs to \( L^2_\mathcal {P}(\Omega \times [0,T];\mathbb {R})\). In what follows, without loss of generality, we always work with a continuous version of \(X^{t,x,u}\).
The objective functional to be maximized is defined as
where for the functions \( f:[0,T]\times \mathbb {R}\times \mathbb {R}\times \mathbb {R}\times \mathbb {R} \rightarrow \mathbb {R}\) and \( g:\mathbb {R}\times \mathbb {R} \rightarrow \mathbb {R}\) we assume that the following Assumption 2.2 holds.
Assumption 2.2
-
(i)
The functions f, g are measurable.
-
(ii)
There exist \(N>0,{\ell }>0, \theta >1\) such that
$$\begin{aligned} f(t,x,m ,u,z ) + g(x,m ) \le N(1+|x|+|m |+|u|+|z |)-{\ell }(|u|+|z |)^\theta , \end{aligned}$$for all \(t\in [0,T], x\in \mathbb {R}, m \in \mathbb {R}, u\in U, z \in \mathbb {R}\).
-
(iii)
f, g are locally uniformly continuous in x, m, uniformly with respect to (t, u, z), meaning that for every \(R>0\) there exists a modulus of continuity \(\texttt{w}_R:\mathbb {R}^+\rightarrow \mathbb {R}^+\) such that
$$\begin{aligned}&\sup _{\begin{array}{c} t\in [0,T]\\ u\in \mathbb {R},z \in \mathbb {R} \end{array}}|f(t,x,m ,u,z )-f(t,x',m ',u,z )|+|g(x,m )-g(x',m ')|\\&\quad \le \texttt{w}_R (|x-x'|+|m -m '| ) \end{aligned}$$for all real numbers \(x,m,x',m '\) such that \(|x|\vee |m |\vee |x'|\vee |m '|\le R\).
Under Assumptions 2.1 and 2.2, the reward functional J in (2.3) is well-defined for any \((t,x;u(\cdot ))\in [0,T]\times \mathbb {R}\times \mathcal {U}\).
We also define the value function \( \overline{V}\) for this problem as follows:
for \((t,x)\in [0,T]\times \mathbb {R}\). We shall say that \(u^* \in \mathcal {U}\) is an optimal control strategy if it is such that
Our main aim here is to find such optimal control strategies.
We now consider the controlled ordinary delay differential equation (ODDE)
where \(m\in \mathbb {R}\) and \(z\in L^2([0,T],\mathbb {R})\) is extended to \([-d,0]\) by \(\delta \) as expressed by the last line in (2.5). We denote by \(M^{t,m,z}\) the unique strong solution to (2.5). It is straightforward to notice the relationship
Property (2.6) suggests that we can couple the two systems (2.2) and (2.5) as follows. We set
and introduce, for \(\tilde{x}\in \mathbb {R}^2\) and with
the process \(\tilde{X}^{t,\tilde{x},\tilde{u}}\) as the unique strong solution of the controlled SDDE
then, by (2.2), (2.5), and (2.9), we immediately have
Property (2.10) states that the process \(X^{t,x,u}\) can be seen as the first component of a two-dimensional process driven by an SDDE whose coefficients do not depend on the law.
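The displays (2.7)–(2.9) are not reproduced in this text. Under the linear mean-field dynamics assumed in Sect. 2, a plausible block form of the coupled system for \(\tilde X=(X,M)\) is the following (the matrices are a hedged reconstruction, with \(\sigma\) a hypothetical diffusion coefficient):

```latex
% Hedged reconstruction: the original displays (2.7)-(2.9) are not visible
% in this text, so the block structure below is a plausible guess.
\[
d\tilde X(t) = \Big( A_0\,\tilde X(t) + B_0\,\tilde u(t)
   + \int_{-d}^{0} b_1(\xi)\,\tilde u(t+\xi)\,d\xi \Big)dt + G\,dW(t),
\]
\[
A_0 = \begin{pmatrix} a_0 & a_1 \\ 0 & a_0+a_1 \end{pmatrix},
\qquad
B_0 = b_0\, I_2,
\qquad
G = \begin{pmatrix} \sigma \\ 0 \end{pmatrix},
\]
% so that the second component M satisfies the deterministic equation (2.5),
% and the first component recovers X once z = E[u] and M = E[X].
```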
Thanks to (2.10), we can rephrase the original control problem as follows. We define, for \(t\in [0,T],\tilde{x}\in \mathbb {R}^2\), and for
the functional
where, with a slight abuse of notation, we identify
Then, by (2.3), (2.4), (2.10), and (2.11), it follows that
3 Carryover effect of advertising: reformulation of the problem in infinite dimension
To recast the SDDE (2.9) as an abstract stochastic differential equation on a suitable Hilbert space, we use the approach introduced first by [19] in the deterministic case and then extended to the stochastic case in [8] (see also [5, Paragraph 2.6.8.2], [6], [10, 11], and [12], where the case of an unbounded control operator is considered). We reformulate Eq. (2.9) as an abstract stochastic differential equation in the following Hilbert space H
If \(y\in H\), we denote by \(y_0\) the projection of y onto \(\mathbb {R}^2\) and by \(y_1\) the projection of y onto \(L^2([-d,0],\mathbb {R}^2)\). Hence \(y=(y_0,y_1)\). The inner product in H is induced by its factors, meaning
In particular, the induced norm is
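Numerically, an element of H and its inner product can be represented by discretizing the \(L^2\) component on a grid. A minimal sketch (the grid representation and trapezoidal quadrature are our own choices, not the paper's):

```python
import numpy as np

# Minimal numerical sketch (our own discretization, not the paper's): an
# element y = (y0, y1) of H = R^2 x L^2([-d,0]; R^2) is stored as a pair
# (array of shape (2,), array of shape (m, 2)), and the inner product is the
# Euclidean one plus a trapezoidal approximation of the L^2 one.
def h_inner(y, yp, d):
    (y0, y1), (yp0, yp1) = y, yp
    m = y1.shape[0]                               # grid points on [-d, 0]
    dt = d / (m - 1)
    f = np.sum(y1 * yp1, axis=1)                  # pointwise <y1(s), y1'(s)>
    l2 = dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return float(np.dot(y0, yp0) + l2)

def h_norm(y, d):
    return float(np.sqrt(h_inner(y, y, d)))
```

For instance, an element with \(y_0=(3,4)\) and vanishing \(L^2\) component has norm 5, exactly as in the Euclidean factor alone.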
Recalling (2.7), we define \(A:\mathcal {D}(A)\subset H\rightarrow H\) by
where the domain \(\mathcal {D}(A)\) is
The adjoint \(A^*:\mathcal {D}(A^*)\subset H\rightarrow H\) of A is given by
with
The operator A generates a \(C_0\)-semigroup \(\{e^{tA}\}_{t\in \mathbb {R}^+}\) on H, where
whereas the \(C_0\)-semigroup \(\{e^{tA^*}\}_{t\in \mathbb {R}^+}\) generated by \(A^*\) is given by
where \(A^*_0\) is the adjoint of \(A_0\).
We then introduce the noise operator \(G:\mathbb {R}\rightarrow H\) defined by
and the control operator \(B:\mathbb {R}^2\rightarrow H\) defined by
The adjoint \(B^*:H\rightarrow \mathbb {R}^2\) of B is given by
We now introduce the abstract stochastic differential equation on H
with \(t\in [0,T), y\in H, \tilde{u}\in \mathcal {U}\times \mathcal {U}\). Denote by \(Y^{t,y,\tilde{u}}\) the mild solution to (3.1), i.e., the pathwise continuous process in \(L^2_\mathcal {P}(\Omega \times [0,T];H)\) given by the variation of constants formula:
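The display (3.2) referred to above is not reproduced in this text; in the standard notation for mild solutions it reads (a reconstruction of a textbook formula, not a verbatim quotation):

```latex
% Standard variation-of-constants formula for the mild solution of (3.1);
% reconstructed here since the display (3.2) is not visible in the text.
\[
Y^{t,y,\tilde u}(s) \;=\; e^{(s-t)A}\, y
   \;+\; \int_{t}^{s} e^{(s-\tau)A}\, B\,\tilde u(\tau)\, d\tau
   \;+\; \int_{t}^{s} e^{(s-\tau)A}\, G\, dW(\tau),
\qquad s\in[t,T].
\]
```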
Similarly to what is done in [7], if the space of admissible controls is restricted to \( \tilde{\mathcal {U}}\), one can show that (3.1) is equivalent to (2.9), in the sense that
for every \(t\in [0,T),\tilde{u}\in \tilde{\mathcal {U}}\), and for every \(y=(y_0,y_1)\in H\) with
A further equivalence is obtained by combining (2.10) and (3.4), which provide
Thanks to equivalence (3.5), we can rephrase the original control problem as follows. For \(t\in [0,T],y\in H, \tilde{u}\in \mathcal {U}\times \mathcal {U}\), define the functional (recall (2.12))
Then, by (2.11), (2.13), (3.3), and (3.4), it follows that
4 Solution of the original problem in a special Linear Quadratic (LQ) case
4.1 The strategy of solution through a suitable HJB equation
Following (3.7) above we introduce the function
defined by
Notice that, by (3.7), we have
The problem with the above constraint \(z(s)=\mathbb {E}[u(s)]\), for \(s\in [t,T]\), is that it does not allow us to apply the Dynamic Programming approach directly to derive the HJB equation. For this reason, instead of optimizing over the set \(\mathcal {U}\) with the constraint \(z(s)=\mathbb {E}[u(s)]\), \(s\in [t,T]\), we consider a different problem, in which the optimization is performed over the set \(\mathcal {U}\times \mathcal {U}\) with the constraint \(z(s)=u(s)\), \(s\in [t,T]\), hence considering the following value function
In general we do not know if and how this function is related to \(\mathcal {V}\) (and consequently to our goal \(\overline{V}\)). However, it is clear from the constraints involved that, if for both problems V and \(\mathcal {V}\) the supremum is attained on the set of deterministic controls, meaning
then finding the deterministic optimal controls for \(\mathcal {V}\) is equivalent to finding them for V. For future reference, we restate this observation in the following proposition.
Proposition 4.1
Let \(t\in [0,T]\) and \(y\in H\). If (4.3a) and (4.3b) hold true, then a deterministic control \(\tilde{u}^*=(u^*,u^*)\in \mathcal {U}\times \mathcal {U}\) is optimal for \(\mathcal {V}\) if and only if it is optimal for V.
The HJB equation associated to the optimal control problem related to V is the following.
where \(Q=G^*G\), and the Hamiltonian function is defined as
with \(H_{CV}\) denoting the current value Hamiltonian function, and \(\textbf{D}\) being the diagonal of \(U\times U\), i.e. \(\textbf{D}= \left\{ (u,u):u\in U \right\} \). Notice that \(H_0(t,y,p)\) depends on p only through \(B^*p\). Indeed, if we define
we get \(H_0(t,y,p)=H(t,y,B^*p)\). Then (4.4) can be rewritten as
Notice that, in the above Eqs. (4.4) and (4.6), the gradient inside the Hamiltonian H is in fact a pair of directional derivatives, since it acts only through the operator \(B^*\), whose image lies in \(\mathbb {R}^2\).
In the next subsections we specify f and g, and we show that, with this choice, (4.3a) and (4.3b) are verified.
4.2 Explicit solution of the HJB equation in the auxiliary LQ case
In this section we specify the general model with
for \((x,m,u,z)\in \mathbb {R}^4\), where
-
(i)
\(\alpha _0,\alpha _1,\beta _0,\beta _1,\lambda _0, \lambda _1\in \mathbb {R}\);
-
(ii)
\(\gamma _0>0, \gamma _1>0\).
We also set \(U=\mathbb {R}\). Notice that Assumption 2.2 is satisfied. Moreover, denoting \(\tilde{\alpha }=(\alpha _0,-\alpha _1)\), \(\tilde{\beta }=(\beta _0,\beta _1)\), and recalling (2.12), we have, for \(q\in \mathbb {R}^2\),
which entails, by considering the definition of H given in (4.5),
and then the HJB equation (4.4) reads as
where \(\tilde{\lambda }=(\lambda _0,-\lambda _1)\).
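The displays (4.7) and (4.8) are not reproduced above. The feedback formula (4.8) follows from the elementary maximization of a concave quadratic; in generic form (with \(c\) collecting the linear coefficients \(\tilde\beta\) and the costate \(q=B^*p\), and \(\gamma>0\) collecting \(\gamma_0,\gamma_1\); this identification is an assumption, since the original display is not visible):

```latex
% Generic computation behind the feedback (4.8): for \gamma > 0,
\[
\sup_{u\in\mathbb{R}} \Big\{ c\,u \;-\; \frac{\gamma}{2}\,u^{2} \Big\}
   \;=\; \frac{c^{2}}{2\gamma},
\qquad \text{attained at } u^{*}(c) \;=\; \frac{c}{\gamma}.
\]
% Along the diagonal \mathbf{D} = \{(u,u) : u \in U\}, a running reward with
% quadratic part -\gamma_0 u^2/2 - \gamma_1 z^2/2 collapses to
% -(\gamma_0+\gamma_1)\,u^2/2, one plausible reading of the stripped (4.7).
```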
We look for solutions of (4.9) of the following form
with \(a:[0,T]\rightarrow H\) and \(b:[0,T]\rightarrow \mathbb {R}\) to be determined. The final condition in (4.9) holds true for (4.10) only if
Moreover, if v is of the form (4.10), (4.9) reads as
Equation (4.12) is to be understood in a mild sense, which we specify below, since we cannot guarantee that, for all t, \(a(t)\in \mathcal{D}(A^*)\). Indeed, by (4.11), \(a(T)\notin \mathcal{D}(A^*)\).
Equation (4.12) can be split into two equations by separating the terms containing y from all the other terms, namely
and
Taking into account that (4.13) must hold for all \(y\in H\), and combining (4.13) and (4.14) with the final conditions (4.11), we obtain two separated equations, one for a and one for b, namely
and
We solve (4.15), which turns out to be an abstract evolution equation in H, in the mild sense, obtaining
Consequently, we can write the solution to (4.16) as
where a is given by (4.17).
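The displays (4.15)–(4.18) are not reproduced above. For an abstract terminal-value equation of the generic form \(a'(t)=r\,a(t)-A^{*}a(t)-(\tilde\alpha,0)\), \(a(T)=(\tilde\lambda,0)\) (a hedged guess consistent with (4.11) and the LQ data; signs and the placement of the discount r may differ in the original), the mild solution and the corresponding b would read:

```latex
% Hedged reconstruction of the shape of (4.17)-(4.18); the exact signs and
% discounting depend on the stripped displays.
\[
a(t) \;=\; e^{-r(T-t)}\, e^{(T-t)A^{*}}\,(\tilde\lambda,0)
      \;+\; \int_{t}^{T} e^{-r(s-t)}\, e^{(s-t)A^{*}}\,(\tilde\alpha,0)\,ds .
\]
% Since v(t,y) = <a(t), y> + b(t) is affine in y, the second-order term
% (1/2) Tr(Q D^2 v) in the HJB equation vanishes, and b is obtained by
% direct integration of the y-independent part of the Hamiltonian:
\[
b(t) \;=\; \int_{t}^{T} e^{-r(s-t)}\, H^{\max}\!\big(B^{*}a(s)\big)\, ds,
\qquad b(T) = 0,
\]
% where H^{max}(q) denotes the value of the supremum in the Hamiltonian.
```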
So far we have found a solution v to the HJB equation (4.9) whose candidate optimal feedback is deterministic. In the next section we prove that it indeed provides the optimal control and that \(v=V\). We also prove that the optimal feedback control associated with the optimal control problem for \(\mathcal {V}\) is deterministic. This will allow us to apply Proposition 4.1, thus finding the optimal strategies for the original problem in the linear quadratic case.
4.3 Fundamental identity and verification theorem in the auxiliary LQ case
The aim of this subsection is to provide a verification theorem and the existence of optimal feedback controls for the linear quadratic problem for V introduced in the previous section. This, in particular, will imply that the solution in (4.10), with a and b given respectively by (4.17) and (4.18), coincides with the value function of our optimal control problem V defined in (4.2).
The main tool needed to obtain the desired results is an identity [often called the “fundamental identity”, see Eq. (4.19)] satisfied by the solutions of the HJB equation. Since the solution (4.10) is not smooth enough (it is not differentiable with respect to t, due to the presence of \(A^*\) in a, given by (4.17)), we need to perform an approximation procedure, after which Itô’s formula can be applied. Finally, we pass to the limit and obtain the needed “fundamental identity”.
Proposition 4.2
Let Assumption 2.1 hold. Let v be as in (4.10), with a and b given by (4.17) and (4.18) respectively, the solution of the HJB equation (4.9). Then, for every \(t\in [ 0,T],\, y\in H\), and \(\tilde{u}=(u,z)\in \mathcal {U}\times \mathcal {U}\) with \(u=z\), we have the fundamental identity
Proof
Let \(t\in [0,T), y\in H\), and \(\tilde{u}=(u,z)\in \mathcal {U}\times \mathcal {U}\) with \(u=z\). We would like to apply Itô’s formula to the process \( \left\{ e^{-rs}v(s,Y^{t,y,\tilde{u}}(s)) \right\} _{s\in [t,T]}\), but we cannot, because \(Y^{t,y,\tilde{u}}\) is a mild solution (the integrals in (3.2) are convolutions with a \(C_0\)-semigroup) and not a strong solution of (3.1); moreover, v is not differentiable in t, since \((\tilde{\lambda },0)\not \in D(A^*)\). We therefore approximate \(Y^{t,y,\tilde{u}}\) by means of the Yosida approximation (see also [10, Proposition 5.1]). For \(k_0\in \mathbb {N}\) large enough, the operator \(k-A\), \(k\ge k_0\), is full-range and invertible, with continuous inverse, and \(k(k-A)^{-1}A\) can be extended to a continuous operator on H. Define, for \(k\ge k_0\), the operator on H
It is well known that, as \(k\rightarrow \infty \), \(e^{tA_k}y'\rightarrow e^{tA}y'\) in H, uniformly for \(t\in [0,T]\) and for \(y'\) on compact sets of H. Since \(A_k\) is continuous, there exists a unique strong solution \(Y^{t,y,\tilde{u}}_k\) to the SDE on H
By taking into account (3.2) together with the same formula with \(A_k\) in place of A, and by recalling the convergence \(e^{\cdot A_k}\rightarrow e^{\cdot A}\) mentioned above, one can easily show that
We now consider the approximating HJB equation
As argued for (4.9), a solution for (4.22) is given by
where
and
Since \(A_k^*\in L(H)\), we have \(a_k\in C^{1}([0,T];H)\) and \(b_k\in C^{1}([0,T];\mathbb {R})\). So we can apply Itô’s formula to \( \left\{ e^{-r(s-t)}v^{(k)}(s, Y_k^{t,y,\tilde{u}}(s)) \right\} _{s\in [t,T]}\), getting:
Since \(v^{(k)}\) is a solution to Eq. (4.22), we get
We then let \(k\rightarrow \infty \) in (4.26). Recalling the convergence \(e^{\cdot A_k}\rightarrow e^{\cdot A}\) mentioned above, we first notice that
Then (4.26), (4.27), and (4.21) entail
or
Finally, adding and subtracting
we get
\(\square \)
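The Yosida approximation used in the proof can be illustrated in finite dimension, where every matrix generates a (uniformly continuous) semigroup and the domain issue disappears:

```python
import numpy as np

# Finite-dimensional illustration of the Yosida approximation used in the
# proof: for a matrix A (where D(A) is all of R^n), the bounded operators
#   A_k = k A (k I - A)^{-1}
# converge to A as k -> infinity, here at rate O(1/k) since
# A_k - A = A^2 (k I - A)^{-1}.
def yosida(A, k):
    n = A.shape[0]
    return k * A @ np.linalg.inv(k * np.eye(n) - A)
```

In the infinite dimensional setting of the proof, the same formula defines \(A_k\), and \(e^{tA_k}\) converges to \(e^{tA}\) strongly, uniformly on compact time intervals, which is exactly the convergence invoked after (4.20).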
We can now prove a verification theorem, i.e. a sufficient condition for optimality given in terms of the solution v of the HJB equation.
Theorem 4.3
Let Assumption 2.1 hold true. Let v be as in (4.10), with a and b given by (4.17) and (4.18) respectively, the solution to the HJB equation (4.9). Then the following hold.
-
(i)
For all \((t,y)\in [0,T]\times H\) we have \(v(t,y) \ge V(t,y)\), where V is the value function defined in (4.2).
-
(ii)
Let \(t\in [0,T],y\in H\). If \(u^*\) is as in (4.8), and if \(\tilde{u}^*(s):=(u^*(B^*a(s)),u^*(B^*a(s)))\), \(s\in [t,T]\), then the pair \((\tilde{u}^*,Y^{t,y,\tilde{u}^*})\) is optimal for the control problem (4.2), and \(V(t,y)=v(t,y)=\mathcal {J}(t,y;\tilde{u}^*)\).
Proof
The first statement follows directly from (4.19), due to the positivity of the integrand. Concerning the second statement, we immediately see that, when \(\tilde{u}=\tilde{u}^*\), (4.19) becomes \(v(t,y)=\mathcal {J}(t,y;\tilde{u}^*)\). Since we know that, for any admissible control \(\tilde{u}=(u,z)\in \mathcal {U}\times \mathcal {U}\) with \(u=z\),
the claim immediately follows. \(\square \)
4.4 Equivalence with the original problem in the LQ case
To find the solution of the original problem in the LQ case, we need to apply Proposition 4.1, i.e. to prove that the optimal control in the original LQ case is deterministic. This is the subject of the next proposition.
Proposition 4.4
Condition (4.3a) is verified.
Proof
Let \(t\in [0,T],y\in H\). Let \(\tilde{u}=(u,z)\in \mathcal {U}\times \mathcal {U}\), with \(z(s)=\mathbb {E}[u(s)]\) for \(s\in [t,T]\). Let \(\tilde{u}_\mathbb {E}= (\mathbb {E}[u],z)\). Then
Notice, by (3.2), that
Then
which implies (4.3a). \(\square \)
Corollary 4.5
Let f, g be as in (4.7). Let \(t\in [0,T],x\in \mathbb {R}\). If \(u^*\) is as in (4.8), with (x, x) in place of \(y_0\), then \(u^*(B^*a(s))\) is optimal for \(\overline{V}(t,x)\).
Proof
The statement is a straightforward consequence of (4.1), Proposition 4.1, and Theorem 4.3. \(\square \)
Data availability
Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.
Notes
This means that, due to the difficulty of the problem, we do not consider ex ante state or control constraints. They could be checked ex post, or could be the subject of subsequent research.
References
Boucekkine, R., Fabbri, G., Federico, S., Gozzi, F.: A dynamic theory of spatial externalities. In: Games and Economic Behavior, vol. 132(C), pp. 133–165. Elsevier, Amsterdam (2022)
Carmona, R., Delarue, F.: Probabilistic Theory of Mean Field Games with Applications I. Probability Theory and Stochastic Modelling, vol. 83, xxv+713 pp. Springer, Cham (2018)
Cosso, A., Gozzi, F., Kharroubi, I., Pham, H., Rosestolato, M.: Optimal control of path-dependent McKean–Vlasov SDEs in infinite-dimension. Ann. Appl. Probab. 33(4), 2863–2918 (2023)
de Feo, F.: Stochastic optimal control problems with delays in the state and in the control via viscosity solutions and an economical application. arXiv:2308.14506
Fabbri, G., Gozzi, F., Swiech, A.: Stochastic Optimal Control in Infinite Dimensions: Dynamic Programming and HJB Equations. Springer, Berlin (2017)
Feichtinger, G., Hartl, R., Sethi, S.: Dynamical optimal control models in advertising: recent developments. Manag. Sci. 40, 195–226 (1994)
Gozzi, F., Marinelli, C.: Stochastic optimal control of delay equations arising in advertising models. In: Stochastic Partial Differential Equations and Applications— VII. Lect. Notes Pure Appl. Math., vol. 245, pp. 133–148. Chapman & Hall/CRC, Boca Raton (2006)
Gozzi, F., Marinelli, C., Savin, S.: On controlled linear diffusions with delay in a model of optimal advertising under uncertainty with memory effects. J. Optim. Theory Appl. 142(2), 291–321 (2009)
Gozzi, F., Masiero, F.: Stochastic optimal control with delay in the control, I: solving the HJB equation through partial smoothing. SIAM J. Control Optim. 55(5), 2981–3012 (2017)
Gozzi, F., Masiero, F.: Stochastic optimal control with delay in the control, II: Verification theorem and optimal feedback controls. SIAM J. Control Optim. 55(5), 3013–3038 (2017)
Gozzi, F., Masiero, F.: Stochastic control problems with unbounded control operators: solutions through generalized derivatives. SIAM J. Control Optim. 61(2), 586–619 (2023)
Grosset, L., Viscolani, B.: Advertising for a new product introduction: a stochastic approach. Top 12(1), 149–167 (2004)
Hartl, R.F.: Optimal dynamic advertising policies for hereditary processes. J. Optim. Theory Appl. 43(1), 51–72 (1984)
Marinelli, C.: The stochastic goodwill problem. Eur. J. Oper. Res. 176(1), 389–404 (2007)
Motte, M., Pham, H.: Optimal bidding strategies for digital advertising. arXiv:2111.08311
Nerlove, M., Arrow, J.K.: Optimal advertising policy under dynamic conditions. Economica 29, 129–142 (1962)
Prasad, A., Sethi, S.P.: Competitive advertising under uncertainty: a stochastic differential game approach. J. Optim. Theory Appl. 123(1), 163–185 (2004)
Vidale, M.L., Wolfe, H.B.: An operations-research study of sales response to advertising. Oper. Res. 5, 370–381 (1957)
Vinter, R.B., Kwong, R.H.: The infinite time quadratic control problem for linear systems with state and control delays: an evolution equation approach. SIAM J. Control Optim. 19(1), 139–153 (1981)
Acknowledgements
Federica Masiero is a member of INDAM-GNAMPA.
Funding
Open access funding provided by Università degli Studi di Milano - Bicocca within the CRUI-CARE Agreement.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
We do not work with any empirical data. For this reason, we are not aware of any ethical issues that could arise within this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gozzi, F., Masiero, F. & Rosestolato, M. An optimal advertising model with carryover effect and mean field terms. Math Finan Econ (2024). https://doi.org/10.1007/s11579-024-00361-3
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s11579-024-00361-3
Keywords
- Mean field control problems
- Optimal advertising models
- Delay in the control
- Infinite dimensional reformulation