1 Introduction

Advances in nanofabrication techniques have enabled an unprecedented degree of precision and control in producing a wide variety of solid-state materials and devices in the form of atomically thin films and multilayers [63]. For ferromagnetic materials, this control offers opportunities to develop novel principles of information processing and storage based on spintronics—an emergent discipline of electronics in which both the electric charge and the quantum mechanical spin of an electron are harnessed [5]. In addition to the present-day use of spin valves as magnetic field sensors in hard-disk drive read heads [69], some more recent applications of spintronic technology include domain wall logic and computing [2, 49, 61], magnetoresistive random access memory [3, 22, 53, 60, 70] and racetrack memory [58].

In a typical domain wall device, a bit of information is encoded using the position and polarity of a head-to-head wall along a thin, long ferromagnetic nanostrip. By “head-to-head”, one understands a magnetization configuration in which the magnetization points along the strip axis, but in opposite directions at the two ends of the strip [15]. The structure of such a domain wall in soft ferromagnets depends rather sensitively on the ratio of the strip thickness and width to the characteristic length scale of the ferromagnetic material (the exchange length \(\ell _\textrm{ex} = \sqrt{2 A / (\mu _0 M_s^2)}\), where A is the exchange stiffness, \(M_s\) is the saturation magnetization and \(\mu _0\) is the vacuum permeability [30]). Depending on the film thickness, one observes two basic types of walls—the transverse and the vortex wall—for thinner and thicker films, respectively. This picture was first established numerically by McMichael and Donahue via micromagnetic simulations [50], and later corroborated by Kläui et al. through experimental studies in ferromagnetic nanorings [35, 44] (for reviews, see [34, 65]). Furthermore, as was shown numerically by Nakatani, Thiaville and Miltat [55], there exist at least two types of transverse domain walls: symmetric and asymmetric walls. Finally, winding domain walls in which the magnetization rotates by 360\(^{\circ }\) in the film plane are also known to exist in ferromagnetic nanostrips [33, 41, 68]. These types of domain wall profiles, obtained numerically using the method from [52], are illustrated in Fig. 1.

Fig. 1

Domain wall profiles in the numerical simulations of amorphous cobalt nanostrips: (a) vortex head-to-head wall in a 100 nm wide and 5 nm thick strip; (b) symmetric transverse head-to-head wall in a 50 nm wide and 2 nm thick strip; (c) asymmetric head-to-head wall in a 400 nm wide and 5 nm thick strip; (d) a winding transverse domain wall in a 400 nm wide and 5 nm thick strip. The material parameters are: exchange constant \(A = 1.4 \times 10^{-11}\) J/m, saturation magnetization \(M_s = 1.4 \times 10^6\) A/m, and zero magnetocrystalline anisotropy or applied magnetic field [45]. For this material, the exchange length is \(\ell _\textrm{ex} = 3.37\) nm
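For reference, the value of the exchange length quoted above follows directly from its definition and the material parameters listed in the caption:

$$\begin{aligned} \ell _\textrm{ex} = \sqrt{\frac{2 A}{\mu _0 M_s^2}} = \sqrt{\frac{2 \times 1.4 \times 10^{-11} \ \text {J/m}}{4 \pi \times 10^{-7} \ \text {H/m} \times \left( 1.4 \times 10^6 \ \text {A/m} \right) ^2}} \approx 3.37 \ \text {nm}. \end{aligned}$$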

1.1 Micromagnetic framework

The mathematical understanding of domain wall profiles in ferromagnets rests on the micromagnetic modeling framework, whereby the magnetization configurations representing these profiles are viewed as local or global minimizers of the micromagnetic energy functional [30, 43]. This framework has been successfully used to characterize a great variety of domain walls and other magnetization configurations (for an overview, see [16]; for some more recent developments, see [13, 20, 31, 32, 36, 47, 48, 54]). However, head-to-head domain walls pose a fundamental challenge to micromagnetic modeling and analysis, since these magnetization configurations carry a non-zero magnetic charge, which may lead to divergence of the wall energy in infinite samples due to the singular behavior of the stray field [48]. To date, there have been only a handful of micromagnetic studies of such charged domain walls [27, 28, 37, 39, 40, 47, 48].

In [39, 40], Kühn studied head-to-head domain walls in cylindrical nanowires of radius \(R > 0\). These walls are viewed as global minimizers of the energy

$$\begin{aligned} {\mathcal {E}}(m) := \frac{1}{2} \int _\Sigma |\nabla m|^2 \hbox {d}^3r + \frac{1}{2} \int _{{\mathbb {R}}^3} |\nabla u|^2 \hbox {d}^3r, \end{aligned}$$
(1.1)

where \(m \in H^1_{loc}(\Sigma ; {\mathbb {S}}^2)\), \(\Sigma = \Sigma _R:= {\mathbb {R}} \times B_R(0) \subset {\mathbb {R}}^3\), and \(u \in \mathring{H}^1({\mathbb {R}}^3)\) is the magnetostatic potential solving

$$\begin{aligned} \Delta u = \nabla \cdot m \end{aligned}$$
(1.2)

distributionally in \({\mathbb {R}}^3\), with m extended by zero to \({\mathbb {R}}^3 \backslash \Sigma \). The magnetization m is subject to the condition at infinity

$$\begin{aligned} m(x, y, z) \rightarrow (\pm 1, 0, 0) \qquad \text {as} \qquad x \rightarrow \pm \infty , \end{aligned}$$
(1.3)

in some average sense (for a recent discussion of variational principles of micromagnetics, see [17]). Kühn considered existence of minimizers of \({\mathcal {E}}\) in a suitable class of magnetizations m for which (1.3) holds, as well as a number of their characteristics depending on R. In particular, she showed that as \(R \rightarrow 0\) the domain wall profile is expected to converge, in an appropriate sense, to that of a one-dimensional transverse wall, which is given explicitly, up to translations along the x-axis and rotations in the yz-plane, by

$$\begin{aligned} m(x, y, z) = \left( \tanh (x / \sqrt{2}), \text {sech} (x / \sqrt{2}), 0 \right) . \end{aligned}$$
(1.4)

Existence and convergence of minimizers were later established by Harutyunyan for general cylindrical domains \(\Sigma = {\mathbb {R}}\times \Omega \), where \(\Omega \subset {\mathbb {R}}^2\) is a bounded domain with a \(C^1\) boundary [28] (see also [62]). In [27], Harutyunyan also studied the behavior of the limit energy when \(\Omega \) is a rectangle with a large aspect ratio and obtained an additional logarithmic factor in the scaling of the optimal energy (for sharp asymptotics, see [23]).

In the case of \(\Sigma = \Sigma _R\) with \(R > 0\) sufficiently small, the analysis mentioned above is enabled by the fact that as \(R \rightarrow 0\) the magnetization becomes essentially constant in the yz-plane, allowing one to asymptotically reduce the energy to \({\mathcal {E}}(m) \simeq {\mathcal {E}}_0^\textrm{1d}({\bar{m}})\), where \({\bar{m}}(x, y, z):= \displaystyle \lim _{R \rightarrow 0} \left( {1 \over \pi R^2} \int _{B_R(0)} m(x, y', z') \, \hbox {d}y' \hbox {d}z' \right) \) and

$$\begin{aligned} {\mathcal {E}}_0^\textrm{1d}({\bar{m}}) := \int _{\Sigma _R} \left( \frac{1}{2} |\nabla {\bar{m}}|^2 + \frac{1}{4} \left( 1 - {\bar{m}}_1^2 \right) \right) \hbox {d}^3r, \end{aligned}$$
(1.5)

whose minimizers among all \({\bar{m}} \in \mathring{H}^1(\Sigma _R; {\mathbb {S}}^2)\) with \({\bar{m}} = {\bar{m}}(x)\) satisfying (1.3) are given by (1.4), up to translations and rotations in the yz-plane. The latter follows from the fact that the limit energy \({\mathcal {E}}_0^\textrm{1d}\) in (1.5) is fully local, and its minimizers satisfy a simple ordinary differential equation that can be solved explicitly. The situation becomes much more complicated for general values of \(R \gtrsim 1\) or for general cross-sections \(\Omega \), since in that case the Euler–Lagrange equation for the minimizers of \({\mathcal {E}}\) is a system of nonlinear partial differential equations whose explicit solution is no longer available. In particular, it is not known whether or not the minimizers could exhibit winding, whereby the magnetization rotates by an integer multiple of \(360^\circ \) along the axis of the wire, as, e.g., in Fig. 1d.
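For the reader's convenience, we recall the explicit one-dimensional computation mentioned above. Writing \({\bar{m}} = (\cos \theta , \sin \theta , 0)\) with \(\theta = \theta (x)\), the energy in (1.5) reduces, up to the factor \(\pi R^2\), to

$$\begin{aligned} \int _{{\mathbb {R}}} \left( \frac{1}{2} |\theta '|^2 + \frac{1}{4} \sin ^2 \theta \right) \hbox {d}x, \end{aligned}$$

whose Euler–Lagrange equation is \(\theta '' = \frac{1}{4} \sin 2 \theta \). Up to translations, the monotone solution connecting \(\theta = \pi \) at \(x = -\infty \) to \(\theta = 0\) at \(x = +\infty \) is \(\theta (x) = 2 \arctan \left( \hbox {e}^{-x / \sqrt{2}} \right) \), for which \(\cos \theta = \tanh (x / \sqrt{2})\) and \(\sin \theta = \text {sech} (x / \sqrt{2})\), i.e., precisely the profile in (1.4).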

1.2 Thin films

In the absence of exact solutions and in view of the interest from applications, one can alternatively focus on the case of asymptotically thin films, i.e., for \(\delta \ll 1\) to consider the energy \({\mathcal {E}}_\delta (m)\) given by \({\mathcal {E}}(m)\) in (1.1), in which \(\Sigma = \Sigma _\delta := {\mathbb {R}} \times (0, w_\delta ) \times (0, \delta )\). Here \(\delta > 0\) is the film thickness and \(w_\delta > 0\) is the film width, both in units of the exchange length, with the dependence of \(w_\delta \) on \(\delta \) as \(\delta \rightarrow 0\) to be specified. Notice that if \(\Sigma _\delta \) were a bounded domain with the lateral extent of order \(w_\delta \), then from the results of Kohn and Slastikov [38] one could conclude that the full micromagnetic energy \({\mathcal {E}}\) asymptotically reduces to \({\mathcal {E}}(m) \simeq {\mathcal {E}}_0^\textrm{2d}({\bar{m}})\), where \({\bar{m}}(x, y, z):= \displaystyle \lim _{\delta \rightarrow 0} \left( {1 \over \delta } \int _0^\delta m(x, y, z') \, \hbox {d}z' \right) \) such that \({\bar{m}}_3 = 0\) and

$$\begin{aligned} {\mathcal {E}}_0^\textrm{2d}({\bar{m}}) := \frac{1}{2} \int _{\Sigma _\delta } |\nabla {\bar{m}}|^2 \hbox {d}^3r + {\gamma \over w_\delta } \int _{\Gamma _\delta } ({\bar{m}} \cdot \nu )^2 \hbox {d} \mathcal H^2, \end{aligned}$$
(1.6)

where \(\Gamma _\delta \) is the portion of the boundary of \(\Sigma _\delta \) associated with the film edge and \(\nu \) is the outward unit normal to \(\Gamma _\delta \), provided that

$$\begin{aligned} w_\delta = {4 \pi \gamma \over \delta \ln \delta ^{-1}} \end{aligned}$$
(1.7)

for some \(\gamma > 0\) fixed, as \(\delta \rightarrow 0\).

Rescaling all lengths in the film plane with \(w_\delta \) and writing \({\bar{m}} = (\cos \theta , \sin \theta )\), we then formally have \({\mathcal {E}}(m) \simeq {\mathcal {F}}(\theta ) \delta \), where

$$\begin{aligned} {\mathcal {F}}(\theta ) := \frac{1}{2} \int _{\Sigma _0} |\nabla \theta |^2 \hbox {d}^2r + \gamma \int _{\partial \Sigma _0} \sin ^2 \theta \, \hbox {d}{\mathcal {H}}^1, \end{aligned}$$
(1.8)

and \(\Sigma _0:= {\mathbb {R}} \times (0,1) \) denotes an infinite strip of unit width, with \(\theta \in C^1({\overline{\Sigma }}_0)\), for example. As expected, in this scaling regime the contribution of the stray field to the energy localizes to become a nonlinear boundary penalty term, greatly simplifying the otherwise highly nonlocal problem for the domain wall profiles.
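This formal reduction may be seen directly: for \({\bar{m}} = (\cos \theta , \sin \theta , 0)\) independent of the thickness variable, the energy in (1.6) becomes

$$\begin{aligned} {\mathcal {E}}_0^\textrm{2d}({\bar{m}}) = \frac{\delta }{2} \int _{{\mathbb {R}} \times (0, w_\delta )} |\nabla \theta |^2 \, \hbox {d}^2 r + \frac{\gamma \delta }{w_\delta } \int _{{\mathbb {R}} \times \{0, w_\delta \}} \sin ^2 \theta \, \hbox {d}{\mathcal {H}}^1, \end{aligned}$$

and upon rescaling the in-plane variables by \(w_\delta \) the first term is left unchanged, while the second acquires a factor of \(w_\delta \), yielding \({\mathcal {E}}(m) \simeq {\mathcal {E}}_0^\textrm{2d}({\bar{m}}) \simeq \delta {\mathcal {F}}(\theta )\) with \({\mathcal {F}}\) as in (1.8).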

1.3 Domain walls in strips

We would like to emphasize that finding the domain wall profile in a strip does not reduce to solving an ordinary differential equation for the magnetization angle, as in the case of thin ferromagnetic wires discussed earlier. Instead, the problem may be reduced to a fractional differential equation. To see this, let us formally reduce the minimization problem for \({\mathcal {F}}\) to the problem for the trace of \(\theta \) on \(\partial \Sigma _0\) (for details, see Appendix A). It is easy to see that any minimizer of \({\mathcal {F}}\) in the form of a domain wall must be reflection-symmetric with respect to the midline of \(\Sigma _0\). Hence for a given trace \({\bar{\theta }} \in C^\infty ({\mathbb {R}})\) of \(\theta \) on \(\partial \Sigma _0\) such that

$$\begin{aligned} {\bar{\theta }}(x) = k_1 \pi \quad \forall x < -R, \qquad \qquad {\bar{\theta }}(x) = k_2 \pi \quad \forall x > R \end{aligned}$$
(1.9)

for some \(R > 0\) and \(k_1, k_2 \in {\mathbb {Z}}\), we can minimize the Dirichlet integral by choosing \(\theta \) to be the harmonic extension of \({\bar{\theta }}\). A direct computation then shows that \(\mathcal F(\theta ) = 2 \bar{{\mathcal {F}}}({\bar{\theta }})\), where

$$\begin{aligned} \bar{{\mathcal {F}}}({\bar{\theta }}) := \frac{1}{4} \int _{\mathbb R} \int _{{\mathbb {R}}} K(x - x') ({\bar{\theta }}(x) - {\bar{\theta }}(x'))^2 \hbox {d}x \, \hbox {d}x' + \gamma \int _{{\mathbb {R}}} \sin ^2 {\bar{\theta }}(x) \, \hbox {d}x, \end{aligned}$$
(1.10)

in which the symmetric, positive definite kernel

$$\begin{aligned} K(x) := {\pi \cosh ( \pi x) \over \sinh ^2 (\pi x)} \end{aligned}$$
(1.11)

has the same singularity at the origin as the kernel \(K_0(x):= {1 \over \pi x^2}\) generating \((-d^2/\hbox {d}x^2)^{1/2}\) [19] and decays exponentially at infinity.
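Both claims about K follow from elementary expansions of the hyperbolic functions:

$$\begin{aligned} K(x) = \frac{1}{\pi x^2} + \frac{\pi }{6} + O(x^2) \quad \text {as} \quad x \rightarrow 0, \qquad \qquad K(x) = 2 \pi \hbox {e}^{-\pi |x|} \left( 1 + o(1) \right) \quad \text {as} \quad |x| \rightarrow \infty . \end{aligned}$$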

The Euler–Lagrange equation corresponding to \(\bar{{\mathcal {F}}}\) reads as

$$\begin{aligned} \frac{1}{2} \int _{{\mathbb {R}}} \big ( 2 {\bar{\theta }}(x) - {\bar{\theta }}(x - \xi ) - {\bar{\theta }} (x + \xi ) \big ) K(\xi ) \hbox {d} \xi + \gamma \sin 2 {\bar{\theta }}(x) = 0 \qquad \forall x \in {\mathbb {R}}. \end{aligned}$$
(1.12)

A solution of this equation is of domain wall type if it satisfies the conditions at infinity that are obtained from (1.9) by sending \(R \rightarrow \infty \):

$$\begin{aligned} \lim _{x \rightarrow -\infty } {\bar{\theta }}(x) = k_1 \pi , \qquad \lim _{x \rightarrow +\infty } {\bar{\theta }}(x) = k_2 \pi . \end{aligned}$$
(1.13)

We note that it is by no means certain that minimizing the energy in (1.10) among functions satisfying (1.9) with a prescribed pair of distinct \(k_1, k_2 \in {\mathbb {Z}}\) would result in a minimizer satisfying (1.13). The main challenge here is that due to lack of compactness the minimizing sequences satisfying (1.9) may consist of multiple transition layers running off to infinity while separating from each other.

The problem in (1.12) and (1.13) is reminiscent of the fractional Ginzburg–Landau equation, which is the Euler–Lagrange equation for (1.10) with \(K = K_0\) and a double-well potential in the last term. This problem is known to exhibit transition layer solutions connecting the two minima of the potential [10, 12, 57]. Note that when \(\gamma \gg 1\), minimizers of \(\bar{{\mathcal {F}}}\) are expected to concentrate on the \(O(\gamma ^{-1})\) length scale, so after a rescaling we have \(K = K_0\) to the leading order in \(\gamma ^{-1}\). In this problem, all domain wall type solutions of (1.12) were obtained by Toland [66] as

$$\begin{aligned} {\bar{\theta }}(x) = \pm \arctan (2 \gamma x) + {\pi \over 2}, \end{aligned}$$
(1.14)

up to translations and additions of integer multiples of \(\pi \). In particular, for \(K = K_0\) the only domain wall solutions have the form of head-to-head walls which do not exhibit winding.
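This may be verified directly with the help of the classical identity \(\left( -d^2/\hbox {d}x^2 \right) ^{1/2} \arctan x = x / (1 + x^2)\), obtained by computing the normal derivative of the bounded harmonic extension \(\arctan (x/(1+y))\) of \(\arctan x\) to the upper half-plane. Indeed, for \({\bar{\theta }}(x) = {\pi \over 2} - \arctan (2 \gamma x)\) one finds

$$\begin{aligned} \left( -\frac{d^2}{\hbox {d}x^2} \right) ^{1/2} {\bar{\theta }}(x) = -\frac{4 \gamma ^2 x}{1 + 4 \gamma ^2 x^2} = -\gamma \sin 2 {\bar{\theta }}(x), \end{aligned}$$

so that \({\bar{\theta }}\) solves (1.12) with \(K = K_0\); the solution with the opposite sign in (1.14) is obtained by replacing x with \(-x\).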

It is natural to expect that the head-to-head domain wall profiles minimizing \({\mathcal {E}}\) with \(\Sigma = {\mathbb {R}} \times (0, w_\delta ) \times (0, \delta ) \) in the regime of \(\delta \ll 1\) and \(w_\delta \) given by (1.7) with \(\gamma \gg 1\) consist of magnetizations rotating in the film plane in the form of two symmetric boundary vortices on the opposite sides of the strip, consistent with the heuristics presented in [64]. Alternatively, when \(\gamma \ll 1\), one would expect the minimizers of \(\bar{{\mathcal {F}}}\) to vary on an \(O(\gamma ^{-1/2})\) scale, for which one can approximate \(x^2 K(x) \simeq \delta (x)\), where \(\delta (x)\) is the Dirac delta function (cf. also [8]). In this case (1.12) would reduce to the ordinary differential equation

$$\begin{aligned} {\hbox {d}^2 {\bar{\theta }}(x) \over \hbox {d}x^2} = 2 \gamma \sin 2 {\bar{\theta }}(x) \qquad \forall x \in {\mathbb {R}}, \end{aligned}$$
(1.15)

all of whose domain wall type solutions are \({\bar{\theta }}(x) = \pm 2 \, \arctan \left( \hbox {e}^{2 \sqrt{\gamma } \, x} \right) \), up to translations and additions of integer multiples of \(\pi \). After a suitable rescaling and a possible reflection, these correspond to the profile in (1.4). Once again, the obtained limiting solution does not exhibit winding.
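More precisely, the reduction to (1.15) uses the fact that \(\int _{{\mathbb {R}}} \xi ^2 K(\xi ) \, \hbox {d}\xi = 1\): for \({\bar{\theta }}\) varying on length scales much greater than one, the nonlocal term in (1.10) is formally replaced by \(\frac{1}{4} \int _{{\mathbb {R}}} |{\bar{\theta }}'|^2 \, \hbox {d}x\), and (1.15) is the Euler–Lagrange equation of the resulting local energy. The stated solutions may also be checked by hand: for \({\bar{\theta }}(x) = 2 \, \arctan \left( \hbox {e}^{2 \sqrt{\gamma } \, x} \right) \) one has

$$\begin{aligned} \sin {\bar{\theta }} = \text {sech} (2 \sqrt{\gamma } \, x), \qquad \cos {\bar{\theta }} = -\tanh (2 \sqrt{\gamma } \, x), \qquad {\bar{\theta }}'' = -4 \gamma \, \text {sech}(2 \sqrt{\gamma } \, x) \tanh (2 \sqrt{\gamma } \, x) = 2 \gamma \sin 2 {\bar{\theta }}. \end{aligned}$$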

The minimization of the energy (1.10) could in principle be carried out directly, yielding existence and properties of minimizers for (1.8). The situation becomes more complicated, however, in the presence of an applied external field \(h > 0\) along the strip, which amounts to an extra Zeeman term [30] added to the energy in (1.1),

$$\begin{aligned} {\mathcal {E}}(m) := \frac{1}{2} \int _\Sigma |\nabla m|^2 \hbox {d}^3r + h \int _\Sigma \left( 1 - m_1 \right) \hbox {d}^3 r + \frac{1}{2} \int _{\mathbb R^3} |\nabla u|^2 \hbox {d}^3r, \end{aligned}$$
(1.16)

after subtracting a suitable additive constant. At the level of the limit thin film energy in (1.8), this translates into

$$\begin{aligned} {\mathcal {F}}(\theta ) := \frac{1}{2} \int _{\Sigma _0} |\nabla \theta |^2 \hbox {d}^2r + h \int _{\Sigma _0} (1 - \cos \theta ) \, \hbox {d}^2 r + \gamma \int _{\partial \Sigma _0} \sin ^2 \theta \, \hbox {d}{\mathcal {H}}^1, \end{aligned}$$
(1.17)

and clearly one could no longer explicitly minimize the first two terms in the energy for a given trace \({\bar{\theta }}\), as this would involve solving a nonlinear partial differential equation for \(\theta \) in \(\Sigma _0\). Instead, we will work directly with the energy in (1.17) and study its minimizers for \(h \ge 0\) that connect distinct equilibrium solutions \(\theta = \textrm{const}\) as \(x \rightarrow \pm \infty \).

1.4 Informal discussion of results

We first focus on (1.17) and establish existence of energy minimizers that connect distinct equilibria at \(x = \pm \infty \), using the direct method of calculus of variations. As was already mentioned, the difficulty here is the fact that the problem is posed on an unbounded domain and, therefore, a priori minimizing sequences may fail to converge to a function that has the right limiting behavior at infinity. We overcome this difficulty by proving monotonicity of the minimizers on larger and larger truncated domains with prescribed Dirichlet data at the left and the right ends of the truncated strip. Taking the limit of the sequence of truncated minimizers, after suitable translations, we obtain a limiting monotone function. Combining this fact with the knowledge of the behavior at infinity for functions with bounded energy (1.17) (see Lemma 3.1), we show that this limiting function is non-trivial and has the appropriate limiting behavior at infinity. By lower semicontinuity of the energy, we subsequently conclude that the obtained limit is the desired minimizer.

Notice that the Euler–Lagrange equation for the energy in (1.17) is reminiscent of problems arising in the studies of front solutions in infinite cylinders, on which there exists an extensive literature. For example, when \(\gamma = 0\) and \(h > 0\) the existence and qualitative properties of such solutions were established in [7]. An additional challenging aspect of the considered problem is the fact that the bistable nonlinearity enabling existence of the front solutions is concentrated on the domain boundary (for several studies of problems of this kind, see e.g. [4, 10, 14, 29]; this list is certainly not exhaustive). To deal with this challenge, we develop a set of tools to address problems with boundary nonlinearities based on maximum and comparison principles and the sliding method. Using these tools, we completely classify the critical points corresponding to domain wall solutions and establish regularity, symmetry, uniqueness, monotonicity and decay properties of the domain wall profiles. In particular, we show that after reflections, translations and shifts in \(\theta \), all domain wall solutions associated with (1.17) are the energy minimizers that connect two distinct equilibria at infinity with no winding for \(h = 0\) (symmetric \(180\,^\circ \) walls) or the same equilibrium at infinity with exactly one rotation for \(h > 0\) (symmetric \(360\,^\circ \) walls). We also establish the explicit limiting behavior of the minimizers in the regimes \(\gamma \rightarrow 0\) and \(\gamma \rightarrow \infty \) when \(h = 0\).

We finally relate the minimization problem associated with (1.17) to that of the original micromagnetic problem associated with (1.16). We note that due to the unbounded domain the standard approach of [11, 38] cannot be applied, and, therefore, new asymptotic tools to control the stray field interaction of distant points need to be developed. To address this issue, we introduce a reduced thin film micromagnetic energy functional that is appropriate for modeling ultrathin ferromagnetic films in which the ferromagnetic layer is only a few atomic layers thick and, strictly speaking, the macroscopic energy functional in (1.16) is no longer applicable. This two-dimensional reduced thin film energy functional retains the nonlocal character of the micromagnetic energy in (1.16) in the ultrathin film regime and was introduced by us earlier in the study of exchange-biased films [47]. It represents an intermediate level of modeling between the full three-dimensional micromagnetic energy in (1.16) and the two-dimensional thin film limit energy in (1.17). Notice that the latter formally coincides with the one identified by Kohn and Slastikov [38]. We prove the \(\Gamma \)-convergence of the reduced thin film energy [to be introduced in the following section, see (2.10)] to the limit thin film energy in (1.17), together with compactness and convergence of the respective energy minimizers as the film thickness goes to zero. Importantly, we also prove that at small but finite film thickness the non-trivial energy minimizers of the reduced thin film energy (2.10) remain close in a certain sense to the unique minimizers of the limit problem associated with (1.17). In particular, they exhibit the same head-to-head (for \(h=0\)) or winding (for \(h>0\)) behavior.

1.5 Organization of the paper

In Section 2, we state precisely the variational problems to be analyzed and the main results of the paper. In particular, the basic existence and qualitative properties of the domain wall profiles for the limit thin film problem are presented in Theorem 2.3, a complete characterization of all domain wall profiles of the limit problem is given in Theorem 2.6, and convergence of the minimizers in the regimes of large and small values of \(\gamma \) for \(h = 0\) is presented in Theorem 2.7. Finally, a characterization and the asymptotic behavior of minimizers of the reduced thin film energy as the film thickness vanishes are presented in Theorem 2.9. In Section 3, we present the treatment of the limit thin film energy, in which the existence result for the minimizers is given by Theorem 3.2 and the rest of the section is devoted to the proofs of Theorems 2.3, 2.6 and 2.7. We also characterize the infimum of the limit thin film energy in the classes of configurations with prescribed winding in Corollary 3.3. Finally, in Section 4 we prove a \(\Gamma \)-convergence result for the reduced micromagnetic thin film energy to the limit energy analyzed in Section 3 in Theorem 4.9, and then establish Theorem 2.9 via a sequence of corollaries.

2 Statement of results

We now turn to the precise statements of the main results of our paper. We begin by simplifying some of the notation. For the limit thin film energy, we drop the subscript “0” from the definition of the two-dimensional strip domain and simply write \(\Sigma := {\mathbb {R}}\times (0,1)\subset {\mathbb {R}}^2\). By \({\textbf{r}} = (x, y) \in \Sigma \) we denote a generic point in the strip, with \(x \in {\mathbb {R}}\) and \(y \in (0,1)\). On the strip \(\Sigma \) we introduce a local space \(H^1_{l}(\Sigma )\) consisting of functions whose restrictions to truncated strips \(Q_R:= (-R, R) \times (0,1)\) belong to \(H^1(Q_R)\) for any \(R > 0\). We equip \(H^1_{l}(\Sigma )\) with the notion of convergence corresponding to the \(H^1(Q_R)\) convergence of the restrictions to \(Q_R\). This space plays the role of the space \(H^1_{loc}(\Sigma )\) of locally Sobolev functions that allows one to make sense of the traces of functions on \(\partial \Sigma \) in the \(L^2_{loc}(\partial \Sigma )\) sense.

For \(h \ge 0\), \(\gamma > 0\) and \(\theta \in H^1_{l}(\Sigma )\) the thin film limit energy

$$\begin{aligned} F(\theta ):=\int _\Sigma \left( \frac{1}{2}|\nabla \theta |^2+h(1-\cos \theta )\right) \, \hbox {d}^2 r +\gamma \int _{\partial \Sigma }\sin ^2\theta \, \hbox {d}{\mathcal {H}}^1 \end{aligned}$$
(2.1)

defines a map \(F: H^1_{l}(\Sigma ) \rightarrow [0, +\infty ]\), provided the last term in (2.1) is understood in the sense of trace. Notice that the Euler–Lagrange equation associated with (2.1) is

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta =h\sin \theta &{} \text {in }\Sigma ,\\ \partial _\nu \theta =-\gamma \sin (2\theta )&{} \text {on }\partial \Sigma , \end{array}\right. } \end{aligned}$$
(2.2)

where \(\partial _\nu \theta \) denotes the derivative of \(\theta \) in the direction of the outward normal \(\nu \) to \(\partial \Sigma \). The weak form of (2.2) is

$$\begin{aligned} \int _{\Sigma }(\nabla \theta \cdot \nabla \varphi + h\sin (\theta )\, \varphi )\, \hbox {d}^2 r+\gamma \int _{\partial \Sigma }\sin (2\theta ) \, \varphi \, \hbox {d}{\mathcal {H}}^1=0 \qquad \forall \varphi \in H^1_l({\Sigma }) \text { with bounded support}. \end{aligned}$$
(2.3)
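Indeed, (2.3) is obtained by computing \(\frac{d}{dt} F(\theta + t \varphi ) \big |_{t = 0} = 0\); conversely, for smooth \(\theta \), integrating by parts in (2.3) gives

$$\begin{aligned} \int _{\Sigma }(-\Delta \theta + h \sin \theta )\, \varphi \, \hbox {d}^2 r + \int _{\partial \Sigma }(\partial _\nu \theta + \gamma \sin (2\theta ))\, \varphi \, \hbox {d}{\mathcal {H}}^1 = 0, \end{aligned}$$

and (2.2) follows by first choosing \(\varphi \) compactly supported in \(\Sigma \) and then \(\varphi \) arbitrary.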

Remark 2.1

By Lemma 3.4 below, any bounded weak solution to (2.2), i.e., any \(\theta \in H^1_l(\Sigma ) {\cap L^\infty (\Sigma )}\) satisfying (2.3) belongs to \(C^\infty ({\overline{\Sigma }})\) and thus is a classical solution of (2.2). Therefore, throughout the paper we will not distinguish between weak and strong formulations of the problem.

Next, for \(k \in {\mathbb {Z}}\) we introduce a class of functions

$$\begin{aligned} {\mathcal {A}}_k:=\Bigl \{\theta \in H^1_{l}(\Sigma ) :\, \lim _{x\rightarrow +\infty }\Vert \theta (x, \cdot )\Vert _{L^2(0,1)}=0, \ \lim _{x\rightarrow -\infty }\Vert \theta (x, \cdot )-k\pi \Vert _{L^2(0,1)}=0 \Bigr \}, \end{aligned}$$
(2.4)

where \(\theta (x, \cdot )\) is understood as a trace. These functions correspond to the in-plane magnetization profiles \(m = (\cos \theta , \sin \theta )\) connecting \(\theta (x, y) = 0\) at \(x = +\infty \) with \(\theta (x, y)= k \pi \) at \(x = -\infty \) in an average sense. For the limit energy F, we are then interested in the following variational problem:

$$\begin{aligned} \text {minimize} \ F(\theta )\ \text {over} \ \theta \in {\mathcal {A}}_k \ \text {with} \ k \not =0 \ \text {fixed}. \end{aligned}$$
(2.5)

Remark 2.2

Note that if \(\theta \in {\mathcal {A}}_k\), then \(-\theta \in {\mathcal {A}}_{-k}\) with \(F(\theta )=F(-\theta )\). In particular, for every \(k\in {\mathbb {N}}\) we have

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_k}F(\theta )= \inf _{\theta \in {\mathcal {A}}_{-k}}F(\theta ). \end{aligned}$$

In view of the previous remark, we may restrict ourselves to the case \(k\in {\mathbb {N}}\) in (2.5).

Our first result concerns existence, uniqueness and qualitative properties of the minimizers of F in \({\mathcal {A}}_k\).

Theorem 2.3

Let \(\gamma > 0\), \(h \ge 0\) and \(k \in {\mathbb {N}}\). Then a minimizer \(\theta _{min}\) of F over \({\mathcal {A}}_k\) exists if and only if \(k = 1\) for \(h = 0\), or if and only if \(k = 2\) for \(h > 0\). The minimizer is unique up to translations along the x direction, belongs to \(C^\infty ({\overline{\Sigma }})\) with derivatives of all orders bounded and satisfies (2.2) classically. In addition, for all \((x, y) \in {\overline{\Sigma }}\), the minimizer \(\theta _{min}\) satisfies

  1. (a)

    (strict monotone decrease) \(\partial _x \theta _{min}(x, y) < 0\);

  2. (b)

    (symmetry) \(\theta _{min}(x,y)=\theta _{min}(x,1-y)\) and \(\theta _{min}(x,y)=k \pi -\theta _{min}(a-x,y)\) for some \(a \in {\mathbb {R}}\);

  3. (c)

    (exponential decay at infinity) for every \(m\in {\mathbb {N}}\) there exist positive constants \(\alpha _m\), \(\beta _m\) such that

    $$\begin{aligned} \Vert \theta _{min}-k \pi \Vert _{C^m((-\infty , -t] \times [0,1])}\le \alpha _m \textrm{e}^{-\beta _m t}\quad \text {and}\quad \Vert \theta _{min}\Vert _{C^m([t, +\infty ) \times [0,1])}\le \alpha _m \textrm{e}^{-\beta _m t} \end{aligned}$$

    for all \(t>0\) sufficiently large.

Our next result characterizes all domain wall type solutions for the limit thin film model, i.e., all bounded solutions of (2.2) that attain distinct pointwise limits as \(x \rightarrow \pm \infty \). More precisely, we introduce the following definition:

Definition 2.4

Let \(\theta \in C^2(\Sigma )\cap C^1({\overline{\Sigma }}) \cap L^\infty (\Sigma )\) be a solution of (2.2). We say that \(\theta \) is a domain wall solution if there exist \(\ell ^-, \ell ^+\in {\mathbb {R}}\), \(\ell ^->\ell ^+\), such that

$$\begin{aligned} \lim _{x\rightarrow -\infty }\theta (x, y)=\ell ^-\quad \text {and}\quad \lim _{x\rightarrow +\infty }\theta (x, y)=\ell ^+ \qquad \text { for all } y\in (0,1). \end{aligned}$$
(2.6)

Remark 2.5

We make several observations regarding the above definition.

  1. (a)

    The condition \(\ell ^->\ell ^+\) is assumed without loss of generality, as otherwise we can replace \(\theta (x, y)\) with \(\theta (-x, y)\) in all the statements.

  2. (b)

    If \(\theta \) is a domain wall solution in the sense of Definition 2.4 and k is any integer, then \(\theta +2k\pi \) is a domain wall solution as well. If additionally \(h=0\), then so is \(\theta +k\pi \).

  3. (c)

    By Lemma 3.4, any bounded weak solution of (2.3) is smooth up to the boundary with derivatives of all orders bounded, and, therefore, it solves (2.2) classically. In particular, domain wall solutions in the sense of Definition 2.4 belong to \(C^\infty ({\overline{\Sigma }})\), and their derivatives of all orders are bounded. Moreover, by the same lemma, the convergence to \(\ell ^\pm \) in (2.6) holds in fact in a much stronger sense, namely uniformly with respect to the \(C^m\)-norm, for every \(m\in {\mathbb {N}}\), see (3.27).

  4. (d)

    If \(h=0\), or if \(h>0\) and \(F(\theta )<+\infty \), then condition (2.6) can be replaced (see Lemma 3.5) by the following one:

    $$\begin{aligned} \lim _{x\rightarrow -\infty }\theta (x, 0)= \lim _{x\rightarrow -\infty }\theta (x, 1)=\ell ^-\quad \text {and}\quad \lim _{x\rightarrow +\infty }\theta (x, 0)= \lim _{x\rightarrow +\infty }\theta (x, 1) =\ell ^+. \end{aligned}$$
    (2.7)

We also note that in view of Remark 2.5(c) the constant functions \(\theta (x, y) = \ell ^\pm \) must themselves solve (2.2), which forces \(\sin (2 \ell ^\pm ) = 0\) and \(h \sin \ell ^\pm = 0\). Hence, a priori we should have \(\ell ^\pm \in {\pi \over 2} {\mathbb {Z}}\) when \(h = 0\) and \(\ell ^\pm \in \pi {\mathbb {Z}}\) when \(h > 0\).

We now state the theorem about domain wall type solutions. In essence, our next result shows that the only domain wall type critical points of F are the minimizers obtained in Theorem 2.3, up to a reflection and an addition of a multiple of \(\pi \).

Theorem 2.6

Let \(\gamma > 0\) and \(h \ge 0\), let \(\theta \) be a domain wall solution in the sense of Definition 2.4, and let \(\theta _{min}\) be as in Theorem 2.3. Then the following uniqueness properties hold true:

  1. (a)

    If \(h=0\), then there exist \(k\in {\mathbb {Z}}\) and \(\lambda \in {\mathbb {R}}\) such that \(\ell ^+=k\pi \), \(\ell ^-=(k+1)\pi \), and for every \((x, y) \in {\overline{\Sigma }}\)

    $$\begin{aligned} \theta {(x, y)}=\theta _{min}({x + \lambda , y})+k\pi ; \end{aligned}$$
  2. (b)

    If \(h>0\), then there exist \(k\in {\mathbb {Z}}\) and \(\lambda \in {\mathbb {R}}\) such that \(\ell ^+=2k\pi \), \(\ell ^-=(2k+2)\pi \), and for every \((x, y) \in {\overline{\Sigma }}\)

    $$\begin{aligned} \theta {(x, y)} =\theta _{min}({x + \lambda , y})+2k\pi . \end{aligned}$$

Before turning to the relation between the thin film limit model in (2.1) and the micromagnetic energy, we also consider the asymptotic behavior of the domain wall solutions for both \(\gamma \ll 1\) and \(\gamma \gg 1\). In view of Theorem 2.6, it is sufficient to consider the minimizers of F in the appropriate function classes. For simplicity of presentation, we will only consider the most interesting case \(h = 0\), as the case \(h > 0\) may be treated analogously, albeit without an explicit limiting solution when \(\gamma \rightarrow \infty \).

Theorem 2.7

For \(\gamma > 0\) and \(h = 0\), let \(\theta _{min,\gamma }\) be the unique minimizer of F over \({\mathcal {A}}_1\) satisfying \({\theta _{min,\gamma }}(0,\cdot ) = {\pi \over 2}\). Then

  1. (a)

    \({\theta _{min,\gamma }}(x / \sqrt{\gamma },y) \rightarrow \pi - 2 \arctan (\hbox {e}^{2x})\) as \(\gamma \rightarrow 0\);

  2. (b)

    \({\theta _{min,\gamma }}(x, y) \rightarrow {\pi \over 2} - \arctan \left( {\sinh (\pi x) \over \sin (\pi y)} \right) \) as \(\gamma \rightarrow \infty \),

locally uniformly in \(\Sigma \).

Remark 2.8

As may be seen from the proof, the result in part a) of Theorem 2.7 also holds with respect to the \(H^1_{l}(\Sigma )\) convergence. However, the latter does not hold for part b), as the limit function fails to be in \(H^1_{l}(\Sigma )\). Finally, in the case \(h > 0\) and \(\gamma \rightarrow 0\) the limit solution is easily seen to be that of (2.2) with \(\gamma = 0\) and is, once again, one-dimensional, while as \(\gamma \rightarrow \infty \) the solution is expected to converge to a solution of the first equation in (2.2) with Dirichlet boundary condition in the form of a piecewise-constant function taking values 0 and \(2 \pi \).

Notice that the result in part b) of Theorem 2.7 provides a rigorous basis for the physical picture presented in [64] (for a closely related problem, see also [42]). In addition, Theorem 2.7 provides a rigorous counterpart for the discussion in the introduction regarding the limiting behavior of the magnetization in the strip in the limits of large and small values of \(\gamma \).
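In fact, the limit profile in part (b) may be checked by hand: setting \(D(x, y) := \sinh ^2 (\pi x) + \sin ^2 (\pi y)\), a direct computation gives

$$\begin{aligned} \partial _x \left( {\pi \over 2} - \arctan \left( {\sinh (\pi x) \over \sin (\pi y)} \right) \right) = -\frac{\pi \cosh (\pi x) \sin (\pi y)}{D(x, y)}, \qquad \partial _y \left( {\pi \over 2} - \arctan \left( {\sinh (\pi x) \over \sin (\pi y)} \right) \right) = \frac{\pi \sinh (\pi x) \cos (\pi y)}{D(x, y)}, \end{aligned}$$

and one further differentiation shows that this function is harmonic in \(\Sigma \). Its trace on \(\partial \Sigma \) equals \(\pi \) for \(x < 0\) and 0 for \(x > 0\), so it describes a pair of boundary vortices located at \((0,0)\) and \((0,1)\), near which the Dirichlet energy diverges logarithmically; this is precisely why the convergence in part (b) cannot be upgraded to \(H^1_{l}(\Sigma )\).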

We finally turn to the relationship of the results obtained by us for the limit thin film energy in (2.1) with those for the micromagnetic energy. Notice that in the regime of interest the film thickness reaches an order of only a few atomic layers, making the use of the full three-dimensional micromagnetic energy problematic. As was argued previously, a model that is more appropriate for such ultrathin films is the reduced micromagnetic thin film energy (for a detailed discussion, see [18, 47]).

Let \(d_{\Sigma }({\textbf{r}}):= \textrm{dist}({\textbf{r}}, {\mathbb {R}}^2 \backslash \Sigma )\). For \(\varepsilon > 0\) sufficiently small, we consider the family of cutoff functions

$$\begin{aligned} \eta _{\varepsilon } ({\textbf{r}}) = \eta \left( d_{\Sigma } ({\textbf{r}}) / \varepsilon \right) , \end{aligned}$$
(2.8)

where \(\eta \in C^1([0, +\infty ))\) is such that \(\eta (0) = 0\), \(\eta '(t) \ge 0\) for all \(t \ge 0\) and \(\eta (t) = 1\) for \(t \ge 1\). Then for

$$\begin{aligned} m: \Sigma \rightarrow {\mathbb {S}}^1, \quad m = (m_1 (x,y), m_2(x,y)), \end{aligned}$$
(2.9)

such that \(m \in C^\infty ({\overline{\Sigma }}; {\mathbb {R}}^2)\) and \(m_2\) vanishes outside a compact set, we define the reduced micromagnetic energy

$$\begin{aligned} E_\varepsilon (m)= \frac{1}{2} \int _{\Sigma } |\nabla m|^2\, \hbox {d}^2r + \frac{\gamma }{2 |\ln \varepsilon |} \int _{\Sigma } \int _{\Sigma } \frac{\textrm{div} (\eta _\varepsilon m) ({\textbf{r}}) \, \textrm{div} (\eta _\varepsilon m) ({\textbf{r}}')}{| {\textbf{r}} - {\textbf{r}}'|} \, \hbox {d}^2r \, \hbox {d}^2 r' + h \int _{\Sigma } (1- m_1)\, \hbox {d}^2r, \end{aligned}$$
(2.10)

where \(\gamma >0\) is a fixed parameter. This energy may be obtained from the full three-dimensional micromagnetic energy via a formal asymptotic reduction and a suitable rescaling of the strip width [18, 47]. The conditions on m, which we are going to relax shortly, are needed to ensure convergence of all the integrals in (2.10). In particular, they ensure that \(m_1(x, y) = \pm 1\) for all |x| large enough, corresponding to the head-to-head or winding domain wall configurations.

In (2.10), the parameter \(\varepsilon \) represents the effective dimensionless film thickness measured relative to the strip width, and \(\gamma \) is an effective stray field strength normalized by \(|\ln \varepsilon |\) [compare with (1.6)]. As was already mentioned, this energy is somewhat intermediate in the hierarchy of multiscale micromagnetic energies between the full three-dimensional micromagnetic energy in (1.1) (with the Zeeman term added) and the limit thin film energy in (2.1).

The assumptions about m above are clearly too restrictive for the existence of unconstrained minimizers of \(E_\varepsilon \). To find a more appropriate functional setting to seek the energy minimizers in the form of head-to-head or winding domain walls, we pass to the Fourier space in the nonlocal term and introduce the transform \(\mathscr {F}(\textrm{div}(\eta _\varepsilon m))\) of \(\textrm{div}(\eta _\varepsilon m) \in C^\infty _c({\mathbb {R}}^2)\),

$$\begin{aligned} {\mathscr {F}}(\textrm{div}(\eta _\varepsilon m)) (k_1, k_2) = \int _0^1 \int _{\mathbb {R}}\hbox {e}^{-i k_1 x - i k_2 y} \textrm{div} (\eta _\varepsilon (y) m(x, y)) \, \hbox {d}x \, \hbox {d}y, \end{aligned}$$
(2.11)

where \(\textrm{div}(\eta _\varepsilon m)\) was extended by zero outside \(\Sigma \). Clearly, under our assumption we have [46, Theorem 5.9]

$$\begin{aligned} \int _\Sigma \int _\Sigma \frac{\textrm{div} (\eta _\varepsilon m) ({\textbf{r}}) \, \textrm{div} (\eta _\varepsilon m) ({\textbf{r}}')}{2 \pi | {\textbf{r}} - {\textbf{r}}'|} \, \hbox {d}^2r \, \hbox {d}^2 r' = \int _{{\mathbb {R}}^2} {| {\mathscr {F}}(\textrm{div}(\eta _\varepsilon m)) |^2 \over |{\textbf{k}}|} {\hbox {d}^2 k \over (2 \pi )^2}, \end{aligned}$$
(2.12)

which is nothing but the \(\mathring{H}^{-1/2}({\mathbb {R}}^2)\) norm squared of \(\text {div}(\eta _\varepsilon m)\). Thus, under the above assumptions about m the energy \(E_\varepsilon (m)\) may be alternatively written in the form

$$\begin{aligned} E_\varepsilon (m)= \frac{1}{2} \int _{\Sigma } |\nabla m|^2\, \hbox {d}^2r + \frac{\gamma }{2 |\ln \varepsilon |} \int _{{\mathbb {R}}^2} {|\mathscr {F}(\textrm{div} (\eta _\varepsilon m)) |^2 \over 2 \pi |{\textbf{k}}|} \, \hbox {d}^2 k + h \int _{\Sigma } (1- m_1)\, \hbox {d}^2r. \end{aligned}$$
(2.13)

We now wish to relax the assumptions of smoothness of m and of \(m_2\) having compact support and introduce a more natural class of magnetizations for which the energy in (2.13) remains valid, taking advantage of positivity of the nonlocal energy term written in the Fourier space. Clearly, for \(m \in H^1_{l}(\Sigma )\) all the local terms in the energy are well defined (possibly taking the value \(+\infty \)). It remains to make sense of the nonlocal term. For that purpose, observe that for \(m \in H^1_{l}(\Sigma )\) we have (with a slight abuse of notation)

$$\begin{aligned} \textrm{div} (\eta _\varepsilon m)(x, y) = \eta _\varepsilon (y) \partial _x m_1(x, y) + \eta _\varepsilon (y) \partial _y m_2(x, y) + \eta _\varepsilon '(y) m_2(x, y) \end{aligned}$$
(2.14)

distributionally. Therefore, under a natural condition that \(\nabla m \in L^2(\Sigma ; {\mathbb {R}}^2)\) the first two terms in the right-hand side of (2.14), extended by zero outside \(\Sigma \), belong to \(L^2({\mathbb {R}}^2)\) and thus have a well-defined Fourier transform in the \(L^2\)-sense. To make sense of the third term, we additionally assume that \(m_2 \in L^2(\Sigma )\). Thus, we introduce the class

$$\begin{aligned} {\mathfrak {M}}:= \left\{ m \in H^1_{l}(\Sigma ; {\mathbb {S}}^1) \, : \, \nabla m \in L^2(\Sigma ; {\mathbb {R}}^2), \, m_2 \in L^2(\Sigma ) \right\} , \end{aligned}$$
(2.15)

on which \(E_\varepsilon : {\mathfrak {M}}\rightarrow [0, +\infty ]\) is now well defined for all \(\varepsilon \in (0, \frac{1}{2})\). Note that the assumption \(m_2 \in L^2(\Sigma )\) for all \(m \in {\mathfrak {M}}\) forces \(m_1(x, \cdot )\) to approach \(\pm 1\) in some average sense as \(x \rightarrow \pm \infty \), thus selecting the magnetization profiles in the form of head-to-head or winding walls.

We will show the \(\Gamma \)-convergence as \(\varepsilon \rightarrow 0\) of the energy \(E_\varepsilon \) defined on \({\mathfrak {M}}\) to the following reduced energy (see Section 4):

$$\begin{aligned} E_0(m) = \frac{1}{2} \int _{\Sigma } |\nabla m|^2\, \hbox {d}^2 r + h \int _{\Sigma } (1- m_1)\, \hbox {d}^2 r + \gamma \int _{\partial \Sigma } m^2_2\, \hbox {d}{\mathcal {H}}^1. \end{aligned}$$
(2.16)

With a slight abuse of notation, when talking about the limit \(\varepsilon \rightarrow 0\) we will always imply taking a sequence of \(\varepsilon _k \rightarrow 0\) as \(k \rightarrow \infty \).

Associated with the energy in (2.16), we have the following minimization problem:

$$\begin{aligned} \text {minimize} \ E_0(m) \ \text {among} \ m=(\cos \theta , \sin \theta )\in H^1_{l}(\Sigma ; {\mathbb {S}}^1) \ \text {with } \theta \text { satisfying } (3.2) \text { for some } k_1,\, k_2\in {\mathbb {Z}},\,k_1\ne k_2. \end{aligned}$$
(2.17)

Notice that for \(m = (\cos \theta , \sin \theta )\), this energy coincides precisely with that in (2.1), and such a lifting is always possible for any \(m \in H^1_{l}(\Sigma ; {\mathbb {S}}^1)\) (see, for instance, [9]), making the energies \(E_0(m)\) and \(F(\theta )\) equivalent. The \(\Gamma \)-convergence result, stated in Theorem 4.9, is with respect to the strong \(L^2_{loc}(\Sigma )\) convergence of maps \(m_\varepsilon : \Sigma \rightarrow \mathbb S^1\). Using this \(\Gamma \)-convergence result, we can then establish existence and a characterization of the minimizers of \(E_\varepsilon \) in the form of domain walls in terms of those of \(E_0\) for all small enough \(\varepsilon \). Note that the existence and properties of the latter are established by Theorem 2.3. Also note that by Theorem 2.3 the minimizers of \(E_0\) over \(H^1_{l}(\Sigma ; \mathbb S^1)\) with suitable behaviors at infinity belong to \({\mathfrak {M}}\).

Theorem 2.9

Let \(\gamma > 0\), \(h \ge 0\) and \(k \in \mathbb N\). Then there exists \(\varepsilon _0 > 0\) such that for all \(\varepsilon \in (0, \varepsilon _0)\) there exists a minimizer \(m = (\cos \theta , \sin \theta )\) of \(E_\varepsilon \) over all \(m \in {\mathfrak {M}}\) with \(\theta \in {\mathcal {A}}_k\) if and only if \(k = 1\) when \(h = 0\), or if and only if \(k = 2\) when \(h > 0\). As \(\varepsilon \rightarrow 0\), every minimizer of \(E_\varepsilon \) above converges in \(H^1_{l}(\Sigma ; {\mathbb {R}}^2)\), after a suitable translation, to the corresponding minimizer of \(E_0\).

The above result shows that, in the considered regime of ultrathin ferromagnetic films, the domain wall-like ground states of the micromagnetic energy are head-to-head walls with no winding (\(180\,^\circ \) walls) in the absence of the applied field (\(h=0\)). When an applied field is present (\(h>0\)), the only domain wall-like ground states are winding domain walls with a single rotation (\(360\,^\circ \) walls). Furthermore, as the film thickness tends to zero these ground state profiles converge to the uniquely defined energy minimizing profiles for the limit energy \(E_0\) (up to translations). Thus, in particular our results provide a mathematical understanding for the symmetric head-to-head domain wall profiles in the absence of the applied field observed in experiments and numerical simulations of sufficiently thin nanostrips (see Fig. 1b and the discussion in Section 1). At the same time, our analysis does not capture the asymmetric head-to-head walls observed in wider nanostrips (see Fig. 1c). The analysis of the latter would require considering a regime in which the stray field effect does not reduce to a purely local penalty term at the sample boundary, and is outside the regime studied in this paper. Similarly, our regime excludes the appearance of the vortex walls shown in Fig. 1a.

3 Analysis of the thin film limit model

We start by recalling that for every \(m\in H^1_{l}(\Sigma ; \mathbb S^1)\) there exists \(\theta \in H^1_{l}(\Sigma )\) such that \(m=(\cos \theta , \sin \theta )\) (see, for instance, [9]), and the energy (2.16) may be rewritten as

$$\begin{aligned} E_0(m)=F(\theta ). \end{aligned}$$
(3.1)
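Indeed, writing \(m = (\cos \theta , \sin \theta )\) we have \(\nabla m_1 = -\sin \theta \, \nabla \theta \) and \(\nabla m_2 = \cos \theta \, \nabla \theta \), so that

$$\begin{aligned} |\nabla m|^2 = |\nabla \theta |^2, \qquad 1 - m_1 = 1 - \cos \theta , \qquad m_2^2 = \sin ^2 \theta , \end{aligned}$$

and the three terms of (2.16) coincide with the corresponding terms of (2.1).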

In what follows, we identify any \(\theta \in H^1_{l}(\Sigma )\) with the precise representative such that for every \(x_0\in {\mathbb {R}}\), \(\theta (x_0, \cdot )\) coincides a.e. with the trace of \(\theta \) on the vertical line \( x=x_0\).

Lemma 3.1

Let \(\theta \in H^1_{l}(\Sigma )\) be such that \(F(\theta )<+\infty \). Then there exist \(k_1, k_2\in {\mathbb {Z}}\) such that

$$\begin{aligned} \lim _{x\rightarrow -\infty }\Vert \theta (x,\cdot )-k_1\pi \Vert _{L^{2}(0,1)}=0 \text { and } \lim _{x\rightarrow +\infty }\Vert \theta (x,\cdot )-k_2\pi \Vert _{L^{2}(0,1)}=0. \end{aligned}$$
(3.2)

Furthermore, if \(h>0\) we have \(k_1, k_2\in 2{\mathbb {Z}}\).

Proof

Set

$$\begin{aligned} {\bar{\theta }}(x):=\int _0^1\theta (x, y)\, \hbox {d}y \end{aligned}$$

and note that \({\bar{\theta }}\in H^1_{loc}({\mathbb {R}})\) and thus, in particular, it is continuous. We claim that

$$\begin{aligned} \frac{1}{2}\int _{{\mathbb {R}}}|{\bar{\theta }}'(x)|^2\, \hbox {d}x+2\gamma \int _{{\mathbb {R}}}\sin ^2{\bar{\theta }}(x)\, \hbox {d}x\le (5+8\gamma ) F(\theta ). \end{aligned}$$
(3.3)

We start by observing that

$$\begin{aligned} 2\gamma \int _{{\mathbb {R}}}\sin ^2{\bar{\theta }}(x)\, \hbox {d}x&= 2\gamma \int _{{\mathbb {R}}}\sin ^2\theta (x,0)\, \hbox {d}x+2\gamma \int _{{\mathbb {R}}}(\sin ^2{\bar{\theta }}(x)- \sin ^2\theta (x,0))\, \hbox {d}x\\&\le 4\gamma \int _{{\mathbb {R}}}\sin ^2\theta (x,0)\, \hbox {d}x+ 4\gamma \int _{{\mathbb {R}}}|\sin {\bar{\theta }}(x)- \sin \theta (x,0)|^2\, \hbox {d}x\\&\le 4\gamma \int _{{\mathbb {R}}}\sin ^2\theta (x,0)\, \hbox {d}x+ 4\gamma \int _{{\mathbb {R}}}|{\bar{\theta }}(x)- \theta (x,0)|^2\, \hbox {d}x\\&\le 4\gamma \int _{{\mathbb {R}}}\sin ^2\theta (x,0)\, \hbox {d}x+ 4\gamma \int _{{\mathbb {R}}}\int _0^1|\partial _y \theta (x, y)|^2\, \hbox {d}y\, \hbox {d}x\\&\le (4+8\gamma ) F(\theta )\,. \end{aligned}$$

Since, by Jensen's inequality, \(\frac{1}{2}\int _{{\mathbb {R}}}|{\bar{\theta }}'(x)|^2\, \hbox {d}x\le \frac{1}{2}\int _{\Sigma }|\partial _x\theta |^2\, \hbox {d}^2 r\le F(\theta )\), equation (3.3) then follows.

Note that for every \(\alpha <\beta \) we have

$$\begin{aligned} \frac{1}{2}\int _{\alpha }^\beta |{\bar{\theta }}'|^2\, \hbox {d}x+2\gamma \int _{\alpha }^\beta \sin ^2{\bar{\theta }}\, \hbox {d}x\ge 2\sqrt{\gamma }\int _{\alpha }^\beta |\sin {\bar{\theta }}||{\bar{\theta }}'|\, \hbox {d}x \ge 2\sqrt{\gamma }\big |\cos ({\bar{\theta }}(\beta ))-\cos ({\bar{\theta }}(\alpha ))\big |. \end{aligned}$$
(3.4)

In particular, recalling (3.3), \(\cos {\bar{\theta }}\) satisfies the Cauchy condition for \(x\rightarrow +\infty \); that is,

$$\begin{aligned} \lim _{\alpha , \beta \rightarrow +\infty }\big |\cos ({\bar{\theta }}(\beta ))-\cos ({\bar{\theta }}(\alpha ))\big |=0, \end{aligned}$$

and thus \(\cos {\bar{\theta }}\), and in turn \(\sin ^2{\bar{\theta }}\), admit a limit as \(x\rightarrow +\infty \). Clearly the same is true for \(x\rightarrow -\infty \).

Recalling (3.3), we conclude that

$$\begin{aligned} \sin {\bar{\theta }}(x)\rightarrow 0 \text { and } \cos ^2{\bar{\theta }}(x)\rightarrow 1 \text { as } |x|\rightarrow +\infty . \end{aligned}$$
(3.5)

We now claim that there exist \(k_1, k_2\in {\mathbb {Z}}\) such that

$$\begin{aligned} \lim _{x\rightarrow -\infty }{\bar{\theta }}(x)=k_1\pi \text { and } \lim _{x\rightarrow +\infty }{\bar{\theta }}(x)=k_2\pi . \end{aligned}$$
(3.6)

Let us show only the second limit. We argue by contradiction, assuming that there exist two sequences \(x_n<x'_n\), both diverging to \(+\infty \), such that \(\liminf _{n \rightarrow \infty } |{\bar{\theta }}(x_n)- {\bar{\theta }}(x'_n)|\ge \pi \). But then, by the continuity of \({\bar{\theta }}\), it is clear that we may also find \(x''_n\in (x_n, x'_n)\) such that \( \cos ^2{\bar{\theta }}(x''_n)\rightarrow 0\), which contradicts (3.5). Thus, (3.6) holds.

Denote \(Q^t:=(t-\frac{1}{2},t+\frac{1}{2})\times (0,1)\) and note that \(\lim _{t\rightarrow \pm \infty }\Vert \nabla \theta \Vert _{L^2(Q^t)}=0\). In turn, by a Poincaré-type inequality we have

$$\begin{aligned} \Vert \theta - {\bar{\theta }}(t)\Vert ^2_{H^1(Q^t)}\le C \Vert \nabla \theta \Vert ^2_{L^2(Q^t)} \end{aligned}$$

and thus, taking into account (3.6) we conclude that

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert \theta -k_2\pi \Vert ^2_{H^1(Q^t)}=0 \text { and } \lim _{t\rightarrow -\infty }\Vert \theta -k_1\pi \Vert ^2_{H^1(Q^t)}=0. \end{aligned}$$

By an application of the Trace Theorem we obtain (3.2). If \(h>0\), then the fact that

$$\begin{aligned} \int _{\Sigma }(1-\cos \theta )\, \hbox {d}^2 r<+\infty \end{aligned}$$

implies that \(k_1\), \(k_2\in 2{\mathbb {Z}}\). \(\square \)

Note that given \(m\in H^1_{l}(\Sigma ; {\mathbb {S}}^1)\), the corresponding phase function \(\theta \) is determined up to an additive constant of the form \(k\pi \), where \(k\in {\mathbb {Z}}\) if \(h=0\) or \(k\in 2{\mathbb {Z}}\) if \(h>0\). In view of Lemma 3.1 we may additionally require that

$$\begin{aligned} \lim _{x\rightarrow +\infty }\Vert \theta (x,\cdot )\Vert _{L^2(0,1)}=0. \end{aligned}$$
(3.7)

Clearly by enforcing such a condition the phase function \(\theta \) is uniquely determined.

In the next two subsections we address the existence of minimizers and the classification of domain wall solutions in the sense of Definition 2.4, respectively.

3.1 Existence of minimizers

We prove the following existence result:

Theorem 3.2

If \(h=0\) then the minimization problem (2.5) admits a solution for \(k=1\). If \(h>0\) then the minimization problem (2.5) admits a solution for \(k=2\). In both cases, a solution \(\theta _{min}\) can be found satisfying \(\displaystyle \int _0^1\theta _{min}(0,y)\,\textrm{d}y=\frac{k\pi }{2}\). Moreover, \(\theta _{min}\in C^{\infty }({\overline{\Sigma }})\), with derivatives of all orders bounded, and \(\partial _x \theta _{min} <0\) in \(\overline{\Sigma }\).

Proof

We provide the proof only in the case \(h>0\), as the case \(h = 0\) can be treated analogously and is simpler. To this end, for \(M>0\), let

$$\begin{aligned} {\mathcal {A}}_{2,M}:=\big \{\theta \in {\mathcal {A}}_2:\, \theta =0 \text { in } \{(x,y)\in \Sigma :\, x\ge M\} \text { and } \theta =2\pi \text { in } \{(x,y)\in \Sigma :\, x\le -M\}\big \}, \end{aligned}$$

and note that by standard arguments there exists a minimizer \(\theta _M\) of F over \({\mathcal {A}}_{2,M}\). Throughout the proof for every \(M>0\) we set \({\mathcal {R}}_M:=(-M, M)\times (0,1)\).

We claim that

$$\begin{aligned} \theta _M(x, y) \in (0,2\pi )\quad \text { for all } (x, y) \in {\mathcal {R}}_M. \end{aligned}$$
(3.8)

This follows by first observing that, by an easy truncation procedure, we may conclude that \(\theta _M\) satisfies

$$\begin{aligned} 0\le \theta _M\le 2\pi . \end{aligned}$$
(3.9)

Moreover, by a standard first variation argument \(\theta _M\) is a weak solution to the following Euler–Lagrange problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta _M=h\sin \theta _M &{} \text {in } {\mathcal {R}}_M,\\ \partial _{\nu }\theta _M=-\gamma \sin (2\theta _M) &{} \text {on } \partial \mathcal R_M\cap \partial \Sigma ,\\ \theta _M=0 &{}\text {on } \{M\}\times (0,1),\\ \theta _M=2\pi &{}\text {on } \{-M\}\times (0,1), \end{array}\right. } \end{aligned}$$
(3.10)

that is,

$$\begin{aligned} \int _{{\mathcal {R}}_M}(\nabla \theta _M\cdot \nabla \varphi + h\sin (\theta _M)\, \varphi )\, \hbox {d}^2 r+\gamma \int _{\partial {\mathcal {R}}_M\cap \partial \Sigma }\sin (2\theta _M)\varphi \, \hbox {d}{\mathcal {H}}^1=0 \end{aligned}$$
(3.11)

for all \(\varphi \in H^1({\mathcal {R}}_M)\) s.t. \(\varphi =0\) on \(\{-M, M\}\times (0,1)\).

Consider now the reflected function \({\widetilde{\theta }}_M\) defined on \({\mathcal {R}}_{3M}\) by

$$\begin{aligned} {\widetilde{\theta }}_M(x,y):= {\left\{ \begin{array}{ll} -\theta _M(-x-2M,y)+4\pi &{} \text {if }x\in (-3M, -M),\\ \theta _M(x,y) &{} \text {if } x\in (-M,M),\\ -\theta _M(-x+2M,y) &{} \text {if }x\in (M, 3M). \end{array}\right. } \end{aligned}$$

Using the weak formulation (3.11), one can immediately check that \({\widetilde{\theta }}_M\) is in turn a weak solution; that is,

$$\begin{aligned} \int _{ {\mathcal {R}}_{3M}}(\nabla {\widetilde{\theta }}_M\cdot \nabla \varphi + h\sin ({\widetilde{\theta }}_M)\, \varphi )\, \hbox {d}^2 r+\gamma \int _{ \partial {\mathcal {R}}_{3M}\cap \partial \Sigma }\sin (2{\widetilde{\theta }}_M)\varphi \, \hbox {d}{\mathcal {H}}^1=0 \end{aligned}$$

for all \(\varphi \in H^1({\mathcal {R}}_{3M})\) s.t. \(\varphi =0\) on \(\{-3M, 3M\}\times (0,1)\). We may then apply the very same arguments of Lemma 3.4(a) below (this is possible since the regularity argument is local) to conclude that for every \(0<M'<3\,M\), \({\widetilde{\theta }}_M\in C^{\infty }(\overline{\mathcal R_{M'}})\). In particular, \(\theta _M\in C^{\infty }(\overline{\mathcal R_{M}})\), and (3.10) holds classically.

Note that we can write

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta _M= c(x,y)\theta _M &{} \text {in }(-M,M)\times (0,1),\\ \partial _\nu \theta _M=-\gamma \sin (2\theta _M) &{}\text {on }(-M, M)\times \{0,1\}, \end{array}\right. } \end{aligned}$$

where we set

$$\begin{aligned} c(x,y):= {\left\{ \begin{array}{ll} h\frac{\sin \theta _M(x,y)}{\theta _M(x,y)} &{} \text {if } \theta _M(x,y)>0, \\ h &{} \text {if }\theta _M(x,y)=0. \end{array}\right. } \end{aligned}$$

In order to prove (3.8), recall (3.9) and assume by contradiction that \(\theta _M=0\) at some point in \((-M,M)\times (0,1)\). But then the Strong Maximum Principle [59, Theorem 2.2] applies and yields that \(\theta _M\equiv 0\) in \((-M,M)\times (0,1)\), a contradiction to the fact that \(\theta _M\in {\mathcal {A}}_{2,M}\). If instead \(\theta _M=0\) at some point of the horizontal boundary \( (-M, M)\times \{0,1\}\), then thanks to the Neumann condition in (3.10) also \(\partial _\nu \theta _M\) vanishes at the same point and thus the contradiction follows from Hopf’s Lemma [24, Lemma 3.4]. Hence, we have shown that \( \theta _M>0\) in \( (-M, M)\times [0,1]\). Replacing \(\theta _M\) by \(2\pi -\theta _M\) and arguing as before, we complete the proof of (3.8).

We now show that \(\theta _M\) is monotone non-increasing in the x-direction. To this aim, we adapt the classical sliding method of Berestycki and Nirenberg [6] (see also [7]) to the problem on the strip with nonlinear boundary conditions. Set

$$\begin{aligned} {\bar{\lambda }}:=\inf \{\lambda >0:\, \theta _M(\cdot +\mu , \cdot )\le \theta _M\text { in } \Sigma \text { for all }\mu \ge \lambda \}, \end{aligned}$$

and observe that necessarily \({\bar{\lambda }}\in [0, 2M)\). Indeed, clearly \(\theta _M(\cdot +\mu , \cdot )\le \theta _M\) for all \(\mu \ge 2\,M\). Moreover, since \( \theta _M(\cdot +2\,M, \cdot )=0<2\pi =\theta _M\) on \(\{-M\}\times [0,1]\), by continuity we may find \(\varepsilon >0\) so small that \( \theta _M(\cdot +2M-s, \cdot )<\theta _M\) on \([-M, -M+s]\times [0,1]\) for all \(s\in (0, \varepsilon ]\), which in turn easily implies \( \theta _M(\cdot +2\,M-s, \cdot )\le \theta _M\) for the same s. Thus \({\bar{\lambda }}\le 2M-\varepsilon \).

Note that \({\bar{\lambda }}=0\) if and only if \(\theta _M\) is monotone non-increasing in the x-direction. Assume by contradiction that \({\bar{\lambda }}>0\). This means that \(\theta _M(\cdot +{\bar{\lambda }}, \cdot )\le \theta _M\) and we claim that there exists \(({\bar{x}}, {\bar{y}})\in [-M, M-{\bar{\lambda }}]\times [0,1]\) such that \(\theta _M({\bar{x}}+\bar{\lambda }, {\bar{y}})= \theta _M({\bar{x}}, {\bar{y}})\). Indeed, if not then we would have \(\theta _M(\cdot +{\bar{\lambda }}, \cdot )< \theta _M\) in \([-M, M-{\bar{\lambda }}]\times [0,1]\) and in turn, arguing as above, \(\theta _M(\cdot +{\bar{\lambda }}-\varepsilon , \cdot )\le \theta _M\) in \([-M, M-{\bar{\lambda }}+\varepsilon ]\times [0,1]\) for all \(\varepsilon \) small enough, contradicting the minimality of \({\bar{\lambda }}\). We claim now that \({\bar{x}}\in ({-M}, M-{\bar{\lambda }})\). Indeed, if \({\bar{x}}=-M\), then \(\theta _M({\bar{x}}+{\bar{\lambda }}, {\bar{y}})= \theta _M({\bar{x}}, {\bar{y}})=2\pi \) which is impossible thanks to (3.8) since \({\bar{x}}+{\bar{\lambda }}\in (-M, M)\). If instead \({\bar{x}}=M-{\bar{\lambda }}\), then \( \theta _M({\bar{x}}, {\bar{y}})=\theta _M(M, {\bar{y}})=0\), which is again impossible by (3.8) since \({\bar{x}}<M\).

We now set \(u:=\theta _M-\theta _M(\cdot +{\bar{\lambda }}, \cdot )\). Note that u satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta u={\tilde{c}} u &{} \text {in }(-M, M-{\bar{\lambda }})\times (0,1),\\ \partial _\nu u=-\gamma (\sin (2\theta _M)-\sin (2\theta _M(\cdot +{\bar{\lambda }}, \cdot ))) &{} \text {on } (-M, M-{\bar{\lambda }})\times \{0,1\},\\ u\ge 0,\\ u({\bar{x}}, {\bar{y}})=0, \end{array}\right. } \end{aligned}$$
(3.12)

where

$$\begin{aligned} {\tilde{c}}:= {\left\{ \begin{array}{ll} h\frac{\sin (\theta _M)-\sin (\theta _M(\cdot +{\bar{\lambda }}, \cdot ))}{\theta _M-\theta _M(\cdot +{\bar{\lambda }}, \cdot )} &{} \text {in }\{\theta _M>\theta _M(\cdot +{\bar{\lambda }}, \cdot )\},\\ h &{} \text {in }\{\theta _M=\theta _M(\cdot +{\bar{\lambda }}, \cdot )\}. \end{array}\right. } \end{aligned}$$

Now if \({\bar{y}}\in (0,1)\), then we can invoke again the Strong Maximum Principle [59, Theorem 2.2] to conclude that \(u\equiv 0\) in \([-M, M-{\bar{\lambda }}]\times [0,1]\), and in particular that \(\theta _M(M-{\bar{\lambda }}, y)=\theta _M(M, y)=0\), which is a contradiction to (3.8). If instead \({\bar{y}}\in \{0,1\}\), then by Hopf’s Lemma [24, Lemma 3.4] we have \(\partial _\nu u({\bar{x}}, {\bar{y}})\ne 0\), which contradicts the boundary condition in (3.12). This concludes the proof of the fact that \({\bar{\lambda }}=0\) and thus that \(\theta _M\) is monotone non-increasing in the x-direction.

We now set \({\bar{\theta }}_M(x):=\int _0^1\theta _M(x,y)\, \hbox {d}y\). Note that \({\bar{\theta }}_M\) is continuous on \({\mathbb {R}}\) and that \({\bar{\theta }}_M(x)=0\) for \(x\ge M\) and \({\bar{\theta }}_M(x)=2\pi \) for \(x\le -M\). Thus, we may find \(x_M\) such that \({\bar{\theta }}_M(x_M)= \pi \). We set \(\tilde{\theta }_M:=\theta _M(\cdot +x_M, \cdot )\). Observing that \(F(\theta _M)\) is non-increasing in M, we easily see that \(\{{\tilde{\theta }}_M\}_{M\ge 1}\) is equibounded in \(H^1_{l}(\Sigma )\). Thus, we may find a sequence \(M_n\rightarrow +\infty \) and a function \( \theta _\infty \in H^1_{l}(\Sigma )\) such that \(\tilde{\theta }_{M_n}\rightharpoonup \theta _\infty \) weakly in \(H^1_{l}(\Sigma )\), and

$$\begin{aligned} F(\theta _\infty )\le \liminf _{n}F({\tilde{\theta }}_{M_n})<+\infty . \end{aligned}$$
(3.13)

Moreover, \(0\le \theta _\infty \le 2\pi \), the function \(\theta _\infty \) is monotone non-increasing in the x-direction, and it satisfies

$$\begin{aligned} \int _0^1\theta _\infty (0,y)\, \hbox {d}y=\pi \end{aligned}$$
(3.14)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta _\infty =h\sin \theta _\infty &{} \text {in }\Sigma ,\\ \partial _\nu \theta _\infty =-\gamma \sin (2\theta _\infty )&{} \text {on }\partial \Sigma \,, \end{array}\right. } \end{aligned}$$
(3.15)

in the weak sense. Again by Lemma 3.4, \(\theta _\infty \in C^\infty ({\overline{\Sigma }})\), with derivatives of all orders bounded, and thus it satisfies (3.15) classically.

We claim that \(\theta _\infty \in {\mathcal {A}}_2\). To this aim, in view of (3.13) and Lemma 3.1, and recalling that \(0\le \theta _\infty \le 2\pi \), we have

$$\begin{aligned} \lim _{x\rightarrow -\infty }\Vert \theta _\infty (x,\cdot )-k_1\pi \Vert _{L^{2}(0,1)}=0 \text { and } \lim _{x\rightarrow +\infty }\Vert \theta _\infty (x,\cdot )-k_2\pi \Vert _{L^{2}(0,1)}=0, \end{aligned}$$

with \(k_1, k_2\in \{0,2\}\). Now, by monotonicity and (3.14) we infer that necessarily \(k_1=2\) and \(k_2=0\). This shows that \(\theta _\infty \in {\mathcal {A}}_2\).

In order to conclude that \(\theta _\infty \) is a minimizer, in view of (3.13) it remains to show that

$$\begin{aligned} \liminf _{n \rightarrow \infty }F({\tilde{\theta }}_{M_n})=\inf _{{\mathcal {A}}_2}F. \end{aligned}$$
(3.16)

To this aim, it is clearly enough to show that

$$\begin{aligned}{} & {} \text {for every } \theta \in {\mathcal {A}}_2 \text { with } F(\theta )<+\infty \text { and every } \varepsilon>0 \text { there exist } M>0 \text { and } {\tilde{\theta }}\in {\mathcal {A}}_{2,M} \nonumber \\{} & {} \quad \text { such that } F({\tilde{\theta }})\le F(\theta )+\varepsilon . \end{aligned}$$
(3.17)

In order to show this, we select two sequences, \(x_n^+\rightarrow +\infty \) and \(x_n^-\rightarrow -\infty \), such that

$$\begin{aligned} \theta (x_n^+, \cdot )\rightarrow 0 \quad \text {and}\quad \theta (x_n^-, \cdot )\rightarrow 2\pi \qquad \text {uniformly in }[0,1], \end{aligned}$$
(3.18)

and

$$\begin{aligned} \limsup _{n \rightarrow \infty } \Vert \theta (x_n^\pm , \cdot )\Vert _{H^1(0,1)}<+\infty . \end{aligned}$$
(3.19)

This is possible by a simple slicing argument thanks to the fact that \(|\nabla \theta |\in L^2(\Sigma )\). At this point, for every \(n\in {\mathbb {N}}\) we define

$$\begin{aligned} \theta _n(x,y):= {\left\{ \begin{array}{ll} \theta (x,y) &{} \text {if }x\in (x_n^-, x_n^+),\\ \theta (x^+_n,y)\left[ \left( 1-\frac{x-x_n^+}{\Vert \theta (x_n^+, \cdot )\Vert _\infty }\right) \vee 0\right] &{} \text {if }x\ge x^+_n,\\ 2\pi -(2\pi -\theta (x_n^-, y))\left[ \left( 1-\frac{x_n^--x}{\Vert 2\pi -\theta (x_n^-, \cdot )\Vert _\infty }\right) \vee 0\right] &{} \text {if }x\le x^-_n, \end{array}\right. } \end{aligned}$$

with the understanding that \(\theta _n\equiv 0\) for \(x\ge x_n^+\) if \(\Vert \theta (x_n^+, \cdot )\Vert _\infty =0\), and \(\theta _n\equiv 2\pi \) for \(x\le x_n^-\) if \(\Vert 2\pi -\theta (x_n^-, \cdot )\Vert _\infty =0\). Clearly, each \(\theta _n\) belongs to \({\mathcal {A}}_{2,M_n}\) for some \(M_n>0\) sufficiently large. Moreover, using (3.18) and (3.19), it is easy to check that \(F(\theta _n)-F(\theta )\rightarrow 0\) as \(n\rightarrow \infty \), thus establishing (3.17) and finishing the proof of existence.
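For the reader's convenience, here is a minimal sketch of this check on the right cutoff piece (the left piece is handled analogously). Setting \(\delta _n^+:=\Vert \theta (x_n^+, \cdot )\Vert _\infty \), on the slab \((x_n^+, x_n^++\delta _n^+)\times (0,1)\) one has \(|\partial _x\theta _n|\le 1\), \(|\partial _y\theta _n(x,y)|\le |\partial _y\theta (x_n^+, y)|\) and \(|\theta _n|\le \delta _n^+\), so that

$$\begin{aligned} \int _{x_n^+}^{x_n^++\delta _n^+}\!\int _0^1\Big (\frac{1}{2}|\nabla \theta _n|^2+h(1-\cos \theta _n)\Big )\, \hbox {d}y\, \hbox {d}x+\gamma \int _{x_n^+}^{x_n^++\delta _n^+}\big (\sin ^2\theta _n(x,0)+\sin ^2\theta _n(x,1)\big )\, \hbox {d}x\le \frac{\delta _n^+}{2}\Big (1+\Vert \partial _y\theta (x_n^+,\cdot )\Vert ^2_{L^2(0,1)}\Big )+\frac{h}{2}(\delta _n^+)^3+2\gamma (\delta _n^+)^3\rightarrow 0 \end{aligned}$$

by (3.18) and (3.19), while the energy of \(\theta \) itself on \(\{x>x_n^+\}\) vanishes as \(n\rightarrow \infty \) since \(F(\theta )<+\infty \).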

We are left with showing that \(\partial _x \theta _\infty <0\) in \(\overline{\Sigma }\). We already know that \(\theta _\infty \) is a smooth function, with \(\partial _x \theta _\infty \le 0\) everywhere. Differentiating (3.15) with respect to x we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta (\partial _x \theta _\infty )=h\cos \theta _\infty \partial _x \theta _\infty &{} \text {in }\Sigma ,\\ \partial _\nu (\partial _x \theta _\infty )=-2\gamma \cos (2\theta _\infty ) \partial _x \theta _\infty &{} \text {on }\partial \Sigma . \end{array}\right. } \end{aligned}$$
(3.20)

Assume \(\partial _x \theta _\infty ({\bar{x}}, {\bar{y}}) =0\) at some point \(({\bar{x}}, {\bar{y}}) \in {\overline{\Sigma }}\). If \(({\bar{x}}, {\bar{y}}) \in \Sigma \), then the Strong Maximum Principle [59, Theorem 2.2] yields \(\partial _x \theta _\infty \equiv 0\), which is impossible since \(\theta _\infty \in {\mathcal {A}}_2\) is non-constant in the x-direction; thus we obtain a contradiction. If instead \(({\bar{x}}, {\bar{y}}) \in \partial \Sigma \), then by the boundary condition in (3.20) also \(\partial _\nu (\partial _x\theta _\infty )\) vanishes at the same point and thus the contradiction follows from Hopf’s Lemma [24, Lemma 3.4]. \(\square \)
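Although no numerics are used anywhere in the proofs, the truncated minimization over \({\mathcal {A}}_{2,M}\) employed above lends itself to a simple illustration. The following sketch (in Python, assuming NumPy is available) relaxes a crude discretization of F on \((-M,M)\times (0,1)\) by an explicit gradient flow with the nonlinear boundary condition \(\partial _\nu \theta =-\gamma \sin (2\theta )\); all numerical parameters (M, h, \(\gamma \), grid sizes, time step, iteration count) are ad hoc choices, and this is not the numerical method used elsewhere in the paper.

```python
# Illustrative only: a crude finite-difference gradient flow for the truncated
# minimization over A_{2,M} (theta = 2*pi at x = -M, theta = 0 at x = M,
# nonlinear Neumann condition d_nu theta = -gamma*sin(2*theta) on y = 0, 1).
# All numerical choices below (M, h, gamma, grid, step, iterations) are ad hoc.
import numpy as np

M, h, gamma = 5.0, 1.0, 1.0
nx, ny = 201, 21
x = np.linspace(-M, M, nx)
dx = x[1] - x[0]
dy = 1.0 / (ny - 1)

# initial guess: a linear ramp from 2*pi down to 0, independent of y
theta = np.tile(np.pi * (1.0 - x / M), (ny, 1))

def energy(t):
    """Riemann-sum approximation of F restricted to (-M, M) x (0, 1)."""
    tx = np.diff(t, axis=1) / dx
    ty = np.diff(t, axis=0) / dy
    e = 0.5 * (np.sum(tx ** 2) + np.sum(ty ** 2)) * dx * dy
    e += h * np.sum(1.0 - np.cos(t)) * dx * dy
    e += gamma * (np.sum(np.sin(t[0, :]) ** 2) + np.sum(np.sin(t[-1, :]) ** 2)) * dx
    return e

tau = 0.2 * min(dx, dy) ** 2          # explicit time step (heuristic stability choice)
for _ in range(20000):
    # gradient flow theta_t = Laplacian(theta) - h*sin(theta) in the interior
    lap = ((theta[1:-1, 2:] - 2.0 * theta[1:-1, 1:-1] + theta[1:-1, :-2]) / dx ** 2
           + (theta[2:, 1:-1] - 2.0 * theta[1:-1, 1:-1] + theta[:-2, 1:-1]) / dy ** 2)
    theta[1:-1, 1:-1] += tau * (lap - h * np.sin(theta[1:-1, 1:-1]))
    # discrete version of d_nu theta = -gamma*sin(2*theta) on y = 0 and y = 1
    theta[0, :] = theta[1, :] - dy * gamma * np.sin(2.0 * theta[1, :])
    theta[-1, :] = theta[-2, :] - dy * gamma * np.sin(2.0 * theta[-2, :])
    # Dirichlet data defining A_{2,M}
    theta[:, 0] = 2.0 * np.pi
    theta[:, -1] = 0.0

print("approx. energy:", energy(theta))
# expected to be <= 0 up to discretization error (monotone profile in x)
print("max forward difference in x:", np.diff(theta, axis=1).max())
```

One expects the relaxed profile to be non-increasing in x, in line with Theorem 3.2, up to discretization error.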

Corollary 3.3

If \(h>0\) then for every \(k \in 2{\mathbb {N}}\) we have

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_k} F(\theta ) = \frac{k}{2} \min _{\theta \in {\mathcal {A}}_2} F(\theta ). \end{aligned}$$

If \(h=0\) then for every \(k \in {\mathbb {N}}\) we have

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_k} F(\theta ) = k \min _{\theta \in {\mathcal {A}}_1} F(\theta ). \end{aligned}$$

Proof

We provide the proof only for the case \(h>0\), the other one being analogous. As in the previous proof, we fix \(M>0\) and let

$$\begin{aligned} {\mathcal {A}}_{k,M}:=\{\theta \in {\mathcal {A}}_k:\, \theta (x,y)=0 \text { if }x\ge M\text { and } \theta (x,y)=k\pi \text { if }x\le -M\}. \end{aligned}$$

It is clear that there exists a minimizer

$$\begin{aligned} \theta _M = \text {argmin}_{\theta \in {\mathcal {A}}_{k,M}}F(\theta ). \end{aligned}$$
(3.21)

By the same arguments and with the same notation used in the proof of Theorem 3.2 we obtain

  (i) \( \theta _M\in (0,k\pi )\quad \text { in } {\mathcal {R}}_M\,\);

  (ii) \(\theta _M\in C^\infty (\overline{{\mathcal {R}}_M})\);

  (iii) \(\theta _M\) has a negative derivative in the x-direction everywhere in \((-M,M)\times [0,1]\).

Arguing as in the proof of Theorem 3.2, we can show that

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_{k}}F(\theta ) = \lim _{j \rightarrow \infty } F(\theta _{M_j}), \end{aligned}$$
(3.22)

where \(\theta _{M_j} \in {\mathcal {A}}_{k,M_j}\) is a minimizer of the corresponding problem (3.21) and \(\{M_j\}\) is any sequence of positive numbers such that \(M_j \rightarrow \infty \).

Now observe that by the properties stated above, for every \(j\in {\mathbb {N}}\) we may find smooth functions \(g^j_i\in C^{\infty }([0,1])\), \(i=1,\dots , k/2-1\), such that

$$\begin{aligned}{} & {} M_j>g^j_1>g^j_2>\cdots>g^j_{k/2-1}>-M_j \qquad \text {and}\\{} & {} \theta _{M_j}(g^j_i(y), y)=2 \pi i \quad \text {for all }y\in [0,1]. \end{aligned}$$

Setting also \(g^j_0:=M_j\), \(g^{j}_{k/2}:=-M_j\) and \(\Sigma ^j_i:=\{(x,y):\, g^j_{i-1}(y)>x>g^j_{i}(y)\}\), we clearly have

$$\begin{aligned} \begin{aligned} F(\theta _{M_j})&=\sum _{i=1}^{k/2} \left( \int _{\Sigma ^j_i}\left( \frac{1}{2}|\nabla \theta _{M_j}|^2+h(1-\cos \theta _{M_j})\right) \, \hbox {d}^2 r\right. \\&\quad \left. +\gamma \int _{\partial \Sigma \cap \partial \Sigma ^j_i}\sin ^2\theta _{M_j}\, \hbox {d}{\mathcal {H}}^1\right) = \sum _{i=1}^{k/2}F(\zeta ^j_{i}), \end{aligned} \end{aligned}$$
(3.23)

where we set

$$\begin{aligned} \zeta ^j_i(x,y):= {\left\{ \begin{array}{ll} 2(i-1)\pi &{} \text {if }x\ge g^j_{i-1}(y),\\ \theta _{M_j}(x,y) &{} \text {if } g^j_{i-1}(y)>x>g^j_{i}(y),\\ 2\pi i &{} \text {if } x\le g^j_i(y). \end{array}\right. } \end{aligned}$$

Note that \(\zeta ^j_i- 2(i-1)\pi \in {\mathcal {A}}_2\) and \(F(\zeta ^j_i- 2(i-1)\pi )=F(\zeta ^j_i)\), and thus \(F(\zeta ^j_i) \ge \min _{\theta \in {\mathcal {A}}_2} F(\theta )\). In turn, by combining (3.22) and (3.23), we deduce that

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_{k}}F(\theta ) = \lim _{j \rightarrow \infty } F(\theta _{M_j}) \ge \frac{k}{2} \min _{\theta \in {\mathcal {A}}_2} F(\theta ). \end{aligned}$$

In order to obtain the reverse inequality, we start from the minimizer \(\theta _{2,M_j}\) of the problem (3.21), with \(k=2\) and \(M=M_j\), and define the function \(\xi _j \in {\mathcal {A}}_{k}\) as

$$\begin{aligned} \xi _j(x,y): =\sum _{i=0}^{k/2-1} \theta _{2,M_j}(x+2iM_j,y), \end{aligned}$$

so that \(F(\xi _j) = \frac{k}{2} F(\theta _{2,M_j})\). Then, we have

$$\begin{aligned} \inf _{\theta \in {\mathcal {A}}_{k}}F(\theta ) \le \lim _{j \rightarrow \infty } F(\xi _j) = \frac{k}{2} \lim _{j \rightarrow \infty }F(\theta _{2,M_j}) = \frac{k}{2} \min _{\theta \in {\mathcal {A}}_2} F(\theta ), \end{aligned}$$

where the last equality follows from the proof of Theorem 3.2. \(\square \)

3.2 Uniqueness of minimizers and classification of critical points

Next we address uniqueness of minimizers for the problem (2.5). In fact, we will classify all the critical points subject to constant boundary conditions at infinity; i.e., domain wall solutions to the boundary reaction-diffusion type problem (2.2) satisfying (2.6).

We start by showing that such critical points are smooth up to the boundary, with uniform estimates at infinity. To this aim, given \(t\in {\mathbb {R}}\) we denote

$$\begin{aligned} \Sigma _t^\pm :=\{(x,y)\in \Sigma :\, x \gtrless t\}, \end{aligned}$$
(3.24)

and we recall that given an open set \(\Omega \subset {\mathbb {R}}^2\) with Lipschitz boundary the trace space \(H^{1/2}(\partial \Omega )\) of \(H^1(\Omega )\) may be equipped with the norm \(\Vert w\Vert ^2_{H^{1/2}(\partial \Omega )}:=\Vert w\Vert ^2_{L^2(\partial \Omega )}+ [w]^2_{\mathring{H}^{1/2}(\partial \Omega )}\), where \([w]^2_{\mathring{H}^{1/2}(\partial \Omega )}\) stands for the squared Gagliardo seminorm

$$\begin{aligned} {[}w]^2_{\mathring{H}^{1/2}(\partial \Omega )}:=\int _{\partial \Omega } \int _{\partial \Omega }\frac{|w({\textbf{r}})-w(\mathbf {r'})|^2}{|{\textbf{r}}- \mathbf {r'}|^2}\, \hbox {d}{\mathcal {H}}^1({{\textbf{r}}})\hbox {d}{\mathcal {H}}^1({{\textbf{r}}'}). \end{aligned}$$
(3.25)

Moreover, with a slight abuse of notation, for any subset \(\Gamma \subset \partial \Omega \) (and for \(w\in H^{1/2}(\partial \Omega )\)) we will denote \(\Vert w\Vert ^2_{H^{1/2}(\Gamma )}:= \Vert w\Vert ^2_{L^2(\Gamma )}+[w]^2_{\mathring{H}^{1/2}(\Gamma )}\), where \([w]^2_{\mathring{H}^{1/2}(\Gamma )}\) is defined as in (3.25), with \(\partial \Omega \) replaced by \(\Gamma \).

Lemma 3.4

Let \(\theta \in H^1_{l}(\Sigma ) \cap L^\infty (\Sigma )\) be a solution of (2.3). Then, up to choosing a representative, the following statements hold true:

  (a) \(\theta \in C^\infty ({\overline{\Sigma }})\), and for every \(k\in {\mathbb {N}}\) there exists a constant \(C_k = C_k(\gamma , h, \Vert \theta \Vert _\infty ) >0\) such that

    $$\begin{aligned} \Vert \theta \Vert _{C^k({\overline{\Sigma }})}\le C_k; \end{aligned}$$
    (3.26)
  (b) if in addition \(\theta \) satisfies (2.6), then the convergence at infinity is uniform with respect to the \(C^k\)-norm for any \(k\in {\mathbb {N}}\), i.e.,

    $$\begin{aligned} \lim _{t\rightarrow -\infty }\Vert \theta -\ell ^-\Vert _{C^k({\overline{\Sigma }}^-_t)}=0 \quad \text {and}\quad \lim _{t\rightarrow +\infty }\Vert \theta -\ell ^+\Vert _{C^k({\overline{\Sigma }}^+_t)}=0. \end{aligned}$$
    (3.27)

    Moreover, if \(h=0\), then \(\ell ^-, \ell ^+\in \frac{\pi }{2}{\mathbb {Z}}\), while if \(h>0\), then \(\ell ^-, \ell ^+\in \pi {\mathbb {Z}}\).

Proof

In what follows, for all \(t\in {\mathbb {R}}\) and \(R>0\) we set \(Q^t_R:=(t-R, t+R)\times (0,1)\); moreover, C will denote a positive constant depending only on R that may change from line to line.

We first observe that by a standard Caccioppoli Inequality type argument, that is, testing (2.3) with \(\varphi =\eta ^2 \theta \), where \(\eta \in C^\infty \) has compact support in \({\overline{\Sigma }}\), we may infer from the boundedness of \(\theta \) that \(\nabla \theta \) is uniformly locally bounded with respect to the \(L^2\)-norm. More precisely, for every \(R>0\) there exists \(C_1=C_1(\gamma , h, \Vert \theta \Vert _\infty , R)>0\) such that \(\sup _{t\in {\mathbb {R}}}\Vert \theta \Vert _{H^1(Q^t_R)}\le C_1\). In turn, by the Trace Theorem, see for instance [56, Theorem 5.5], we have \(\Vert \theta \Vert _{H^{1/2}(\partial Q^t_R\cap \partial \Sigma )}\le \Vert \theta \Vert _{H^{1/2}(\partial Q^t_R)}\le C\Vert \theta \Vert _{H^1(Q^t_R)}\le CC_1\) and then, using the definition (3.25) of the Gagliardo seminorm, we may check that \(\Vert \sin (2\theta )\Vert _{H^{1/2}(\partial Q^t_R\cap \partial \Sigma )}\le C\Vert \theta \Vert _{H^{1/2}(\partial Q^t_R\cap \partial \Sigma )}\le CC_1\). Thus,

$$\begin{aligned} \sup _{t\in {\mathbb {R}}}\Vert \gamma \sin (2\theta )\Vert _{H^{1/2}(\partial Q^t_R\cap \partial \Sigma )}\le \gamma C C_1. \end{aligned}$$
(3.28)

Fix \(t\in {\mathbb {R}}\) and a cut-off function \(\zeta \in C^\infty _c(-3R, 3R)\) with \(0\le \zeta \le 1\) and \(\zeta \equiv 1\) in \([-2R, 2R]\). Let \(\Omega ^0\subset {\mathbb {R}}^2\) be a bounded domain with boundary of class \(C^\infty \) such that \(Q^0_{3R}\subset \Omega ^{0} \subset \Sigma \), and let \(\Omega ^t:=\{(x,y):\, (x-t, y)\in \Omega ^0\}\). Finally, denote by g the function defined for \({\mathcal {H}}^1\)-a.e. \((x,y)\in \partial \Omega ^t\) by

$$\begin{aligned} g(x,y):= {\left\{ \begin{array}{ll} -\gamma \, \zeta (x-t)\sin (2\theta (x,y)) &{} \text {if }(x,y)\in \partial \Omega ^t\cap \partial \Sigma ,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Using again (3.25), one can check that \(g\in H^{1/2}(\partial \Omega ^t)\), with

$$\begin{aligned} \Vert g\Vert _{H^{1/2}(\partial \Omega ^t)} \le \gamma C \Vert \sin (2\theta )\Vert _{H^{1/2}(\partial Q^t_{2R}\cap \partial \Sigma )}, \end{aligned}$$
(3.29)

where \(C>0\) depends only on \(\zeta \) and \(\Omega ^0\) and thus, ultimately, only on R. In turn, by [26, Theorem 1.5.1.2] there exists a lifting function \({\tilde{g}}\in H^2(\Omega ^t)\) such that \(\partial _\nu {\tilde{g}}=g\) on \(\partial \Omega ^t \) and

$$\begin{aligned} \Vert {\tilde{g}}\Vert _{H^2 (\Omega ^t)}\le C\Vert g\Vert _{H^{1/2}(\partial \Omega ^t)} \le \gamma C' \Vert \sin (2\theta )\Vert _{H^{1/2}(\partial Q^t_{2R}\cap \partial \Sigma )}, \end{aligned}$$
(3.30)

where we used (3.29) (and, again, the constants \(C, C'\) depend only on R). Since \(\partial _\nu \tilde{g}=g=-\gamma \sin (2\theta )\) on \( \partial Q^t_{2R}\cap \partial \Sigma \), integration by parts yields

$$\begin{aligned}{} & {} \int _{\Sigma }(\nabla {\tilde{g}}\cdot \nabla \varphi + \Delta \tilde{g}\, \varphi )\, \hbox {d}^2 r+\gamma \int _{\partial \Sigma }\sin (2\theta )\varphi \, d{\mathcal {H}}^1=0 \\{} & {} \quad \forall \varphi \in H^1_l({\Sigma }) \text { with supp }\,\varphi \subset \overline{Q^t_{2R}}. \end{aligned}$$

Subtracting the above identity from (2.3) and setting \(w:=\theta -{\tilde{g}}\), we get

$$\begin{aligned} \int _{\Sigma }\big (\nabla w\cdot \nabla \varphi + (h\sin \theta -\Delta {\tilde{g}})\, \varphi \big )\, \hbox {d}^2 r=0 \qquad \forall \varphi \in H^1_l({\Sigma }) \text { with supp }\,\varphi \subset \overline{Q^t_{2R}}, \end{aligned}$$

that is, w is a weak solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta w=h\sin \theta -\Delta {\tilde{g}} &{} \text {in } Q^t_{2R},\\ \partial _\nu w= 0&{} \text {on } \partial Q^t_{2R}\cap \partial \Sigma . \end{array}\right. } \end{aligned}$$

Thus, by standard \(H^2\)-estimates (see for instance [24]) and taking into account (3.28) and (3.30), we get

$$\begin{aligned} \Vert \theta \Vert _{H^2(Q^t_R)}\le & {} \Vert w\Vert _{H^2(Q^t_R)}+ \Vert {\tilde{g}}\Vert _{H^2(Q^t_{2R})}\nonumber \\\le & {} C\big (\Vert h\sin \theta -\Delta {\tilde{g}}\Vert _{L^2 (Q^t_{2R})}+\Vert w\Vert _{H^1 (Q^t_{2R})}+ \gamma C C_1\big )\le C_2\,\quad \quad \quad \quad \end{aligned}$$
(3.31)

for a suitable positive constant \(C_2\) depending only on R, \(\Vert \theta \Vert _\infty \), \(\gamma \), and h.

We can now start a bootstrap argument in order to obtain uniform estimates also with respect to higher norms. Owing to (3.31) and to the fact that \(\Vert \sin (2\theta )\Vert _{H^2(Q^t_R)} \le M \), with \(M=M \big (\Vert \theta \Vert _{H^2(Q^t_R)}\big ) \) (and thus ultimately depending only on R, \(\Vert \theta \Vert _\infty \), \(\gamma \), and h), by applying the Trace Theorem again we can improve (3.28) to obtain

$$\begin{aligned} \sup _{t\in {\mathbb {R}}}\Vert \sin (2\theta )\Vert _{H^{3/2}(\partial Q^t_R\cap \partial \Sigma )}\le \gamma C M. \end{aligned}$$

Now, arguing as above and relying again on [26, Theorem 1.5.1.2], we may find a “lifting” function \({\tilde{g}}\in H^3(\Omega ^t)\) such that \(\partial _\nu \tilde{g}=-\gamma \sin (2\theta )\) on \(\partial Q^t_{2R}\cap \partial \Sigma \) and

$$\begin{aligned} \Vert {\tilde{g}}\Vert _{H^3 (\Omega ^t)}\le \gamma C \Vert \sin (2\theta )\Vert _{H^{3/2}(\partial Q^t_{2R}\cap \partial \Sigma )}. \end{aligned}$$

Thus, defining w as before and arguing similarly, we clearly may improve estimate (3.31) to obtain, for every \(R>0\),

$$\begin{aligned} \sup _{t\in {\mathbb {R}}} \Vert \theta \Vert _{H^3(Q^t_R)}\le C_3, \end{aligned}$$

for a suitable positive constant \(C_3\) depending only on R, \(\Vert \theta \Vert _\infty \), \(\gamma \), and h. We can now iterate this argument to show that for every \(k\in {\mathbb {N}}\) there exists a positive constant \(C_k\) depending only on R, \(\Vert \theta \Vert _\infty \), \(\gamma \), and h such that

$$\begin{aligned} \sup _{t\in {\mathbb {R}}} \Vert \theta \Vert _{H^k(Q^t_R)}\le C_k \end{aligned}$$
(3.32)

for all \(R>0\). In turn, (3.32) combined with the Sobolev Embedding Theorem yields (3.26).

The uniform bounds (3.26), together with the convergence condition in Definition 2.4 give (3.27). The latter in particular implies that both \(\Delta \theta \) and \(\partial _\nu \theta \) vanish at infinity. Thus, from (2.2) we deduce that \(\sin (2\ell ^\pm )=0\) and that also \(\sin (\ell ^\pm )=0\) when \(h>0\). The last part of statement b) readily follows. \(\square \)

In the next lemma we show that in the case \(h=0\), or \(h>0\) and \(F(\theta )<+\infty \), condition (2.7) is equivalent to (2.6).

Lemma 3.5

Let \(\theta \in C^2(\Sigma )\cap C^1({\overline{\Sigma }}) \cap L^\infty (\Sigma )\) be a solution of (2.2) such that (2.7) holds. Assume that either \(h=0\), or \(h>0\) and \(F(\theta )<+\infty \). Then also (2.6) holds true.

Proof

Consider first the case \(h=0\). Let \(\{\lambda _n\}\) be a sequence such that \(\lambda _n\rightarrow +\infty \) and set \(\theta _n:=\theta (\cdot +\lambda _n, \cdot )\). By statement a) of Lemma 3.4 we have that for every \(k\in {\mathbb {N}}\) the sequence \(\{\theta _n\}\) is uniformly bounded with respect to the \(C^k\)-norm on \({\overline{\Sigma }}\). Therefore, we may find a subsequence \(\{\theta _{n_k}\}\) and a bounded function \(\theta _\infty \) solving (2.2) such that \(\theta _{n_k}\rightarrow \theta _{\infty }\) in \(C^k\) on the compact subsets of \(\overline{\Sigma }\) for every \(k\in {\mathbb {N}}\). Moreover, in view of (2.7) we also have \(\theta _\infty =\ell ^+\) on \(\partial \Sigma \). In particular, \(\theta _\infty \) is a bounded harmonic function in \(\Sigma \), which is constant on \(\partial \Sigma \). It easily follows that \(\theta _\infty \equiv \ell ^+\). One way to see this is to extend the harmonic function \(\theta _\infty -\ell ^+\) to the whole plane by repeated odd reflections across the lines \(\{y=j\}\), \(j\in {\mathbb {Z}}\), thus getting an entire bounded harmonic function w, vanishing on such lines. Liouville’s Theorem implies that \(w\equiv 0\) in \({\mathbb {R}}^2\) and thus, in particular, \(\theta _\infty \equiv \ell ^+\) in \(\Sigma \). In turn, this implies that \(\theta (\lambda _{n_k}, y) \rightarrow \theta _\infty (0, y)=\ell ^+\) as \(k\rightarrow \infty \) for all \(y\in [0,1]\). By the arbitrariness of \(\{\lambda _n\}\) we have shown that the second condition in (2.6) is satisfied. A similar argument shows that also the first one holds true.

Assume now that \(h>0\) and \(F(\theta )<+\infty \) and note that the latter condition immediately implies that both \(\ell ^-, \ell ^+\in \pi {\mathbb {Z}}\). We may now run a similar argument as in the \(h=0\) case. Let \(\{\lambda _n\}\), \(\{\theta _n\}\) be as before and let \(\theta _\infty \) be the limit (up to a subsequence) of \(\theta _n\). One can show that in this case \(\theta _\infty \) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta _\infty =h\sin \theta _\infty &{} \text {in }\Sigma ,\\ \partial _\nu \theta _\infty =0&{} \text {on }\partial \Sigma ,\\ \theta _\infty =\ell ^+ &{} \text {on }\partial \Sigma . \end{array}\right. } \end{aligned}$$

Even reflections with respect to \(\partial \Sigma \) allow one to extend \(\theta _\infty \) to a function \({\tilde{\theta }}_\infty \) defined on the “tripled” strip \({\tilde{\Sigma }}:={\mathbb {R}}\times (-1, 2)\), still solving the same equation

$$\begin{aligned} \Delta {\tilde{\theta }}_\infty =h\sin {\tilde{\theta }}_\infty \quad \text {in }{\tilde{\Sigma }}. \end{aligned}$$

By classical results, see for instance [51, Theorem 6.8.2], we infer that \({\tilde{\theta }}_\infty \) is analytic in \({\tilde{\Sigma }}\) and thus, in particular, \( \theta _\infty \) is analytic in \(\Sigma \) up to the boundary. But then, owing to the overdetermined boundary conditions on \(\partial \Sigma \), by the Cauchy–Kovalevskaya Theorem (see for instance [21]) it follows that \(\theta _\infty \equiv \ell ^+\) in a neighborhood of \(\partial \Sigma \) and thus, by analyticity, everywhere in \(\Sigma \). This establishes the second condition in (2.6) and the first one can be proven similarly. \(\square \)

We now start paving the way for the application of the sliding method to our situation. We recall that owing to Lemma 3.4, bounded weak solutions to (2.2) are in fact smooth classical solutions and thus, in what follows, we will not distinguish between weak and strong formulations. We begin with the following comparison principle for problem (2.2), where we will be using notation (3.24).

Lemma 3.6

Let \(t\in {\mathbb {R}}\) and let \(\theta _1, \theta _2\) be domain wall solutions to (2.2) according to Definition 2.4, with \(\theta _1\le \theta _2\) on \(\Gamma _t:=\{x=t\}\cap \Sigma \). Denote by \(\ell ^-_i\), \(\ell ^+_i\), \(i=1, 2\), the boundary conditions at infinity of \(\theta _i\) according to (2.6) and assume that \(\ell ^+_1\le \ell ^+_2\). Assume in addition that there exists an interval \(J = (\theta ^-, \theta ^+)\) such that

$$\begin{aligned} \sup _{\Sigma _t^+}\theta _1<{\theta ^+} \text { and } \inf _{\Sigma _t^+}\theta _2>{\theta ^-} , \end{aligned}$$
(3.33)

and \(\theta \mapsto \sin (2\theta )\) is strictly increasing in J, together with \(\theta \mapsto \sin (\theta )\) if \(h>0\). Then, \(\theta _1\le \theta _2\) in \({\overline{\Sigma }}_t^+\). The same statement holds true with \(\ell _i^+\) and \(\Sigma _t^+\) replaced by \(\ell _i^-\) and \(\Sigma _t^-\), respectively.

Proof

We prove the statement only for \(\Sigma ^+_t\), the other case being analogous. For any fixed \(\varepsilon >0\) set \(\varphi _\varepsilon :=(\theta _1-\theta _2-\varepsilon )^+\chi _{\Sigma _t^+}\) and note that from the assumptions \(\theta _1\le \theta _2\) on \(\Gamma _t\) and \(\ell ^+_1\le \ell ^+_2\), taking into account part b) of Lemma 3.4, we conclude that the function \(\varphi _\varepsilon \) is in \(H^1(\Sigma )\) with bounded support contained in \({\overline{\Sigma }}^+_t\). Testing (2.3) for \(\theta _i\) with \(\varphi _\varepsilon \) and subtracting the two resulting equations we get

$$\begin{aligned}{} & {} \int _{\Sigma _t^+}|\nabla \varphi _\varepsilon |^2\, \hbox {d}^2 r+ h\int _{\{\theta _1-\theta _2>\varepsilon \}\cap \Sigma ^+_t}(\sin (\theta _1)-\sin (\theta _2))\varphi _\varepsilon \, \hbox {d}^2 r\\{} & {} \quad +\gamma \int _{\{\theta _1-\theta _2>\varepsilon \}\cap (\partial \Sigma ^+_t{\setminus }\Gamma _t)}(\sin (2\theta _1) -\sin (2\theta _2))\varphi _\varepsilon \, \hbox {d}{\mathcal {H}}^1=0. \end{aligned}$$

Note that \(\theta _1(\cdot )\), \(\theta _2(\cdot )\in J\) in \(\{\theta _1-\theta _2>\varepsilon \}\cap \Sigma ^+_t\), thanks to (3.33). Using now the monotonicity assumptions on \(\sin (2\theta )\) and \(\sin (\theta )\) for \(\theta \in J\), we may conclude from the above integral identity that \(\nabla \varphi _\varepsilon \equiv 0\) and that \(\theta _1-\theta _2\le \varepsilon \), or equivalently \(\varphi _\varepsilon =0\) on \(\partial \Sigma _t^+\). Thus, \(\varphi _\varepsilon \equiv 0\), that is, \(\theta _1-\theta _2\le \varepsilon \) in \({\overline{\Sigma }}_t^+\). The conclusion follows from the arbitrariness of \(\varepsilon \). \(\square \)

In the lemma below, we write down a version of the Strong Maximum Principle which works for (2.2). Note that a similar principle (and the argument behind it) has already been used in the proof of Theorem 3.2.

Lemma 3.7

Let \(U\subset {\mathbb {R}}^2\) be a connected open set and let \(\theta _1\), \(\theta _2\in C^2(\Sigma ) \cap C^1({\overline{\Sigma }})\) be solutions of (2.2) such that \(\theta _1\le \theta _2\) in \(U\cap \Sigma \). Assume that \(\theta _1({\bar{x}},{\bar{y}})=\theta _2({\bar{x}},{\bar{y}})\) for some point \(({\bar{x}},{\bar{y}}) \in U\cap {\overline{\Sigma }}\). Then \(\theta _1=\theta _2\) in \(U\cap {\overline{\Sigma }}\).

Proof

We can argue similarly as in the proof of Theorem 3.2. Indeed, setting \(u:=\theta _2-\theta _1\), we note that u is smooth up to \(U\cap \partial \Sigma \) and satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta u={\tilde{c}} u &{} \text {in }U\cap \Sigma ,\\ \partial _\nu u=-\gamma (\sin (2\theta _2)-\sin (2\theta _1)) &{} \text {on } U\cap \partial \Sigma ,\\ u\ge 0 &{} \text {in }U\cap \Sigma ,\\ u({\bar{x}}, {\bar{y}})=0, \end{array}\right. } \end{aligned}$$
(3.34)

where now

$$\begin{aligned} {\tilde{c}}:= {\left\{ \begin{array}{ll} h\frac{\sin (\theta _2)-\sin (\theta _1)}{\theta _2-\theta _1} &{} \text {in }U\cap \{\theta _2>\theta _1\},\\ h &{} \text {in }U\cap \{\theta _2=\theta _1\}. \end{array}\right. } \end{aligned}$$

Notice that if \({\bar{y}}\in \{0,1\}\), then by Hopf’s Lemma [24, Lemma 3.4] we have \(\partial _\nu u({\bar{x}}, \bar{y})\ne 0\), which contradicts the Neumann boundary condition in (3.34). Thus, necessarily \({\bar{y}}\in (0,1)\). We may then invoke the Strong Maximum Principle [59, Theorem 2.2] to conclude that \(u\equiv 0\), and in turn, by continuity, \(\theta _2=\theta _1\) in \(U\cap {\overline{\Sigma }}\). \(\square \)

We continue now with some elementary considerations, showing in particular that only some specific values are admissible for the limits \(\ell ^-\) and \(\ell ^+\) at infinity.

As a consequence of the Strong Maximum Principle and of the comparison Lemma 3.6 we have the following observation, which will be instrumental in the implementation of the sliding method.

Lemma 3.8

Let \(\theta _1, \theta _2\) be domain wall solutions to (2.2) according to Definition 2.4, and denote by \(\ell ^-_i\), \(\ell ^+_i\), \(i=1, 2\), the boundary conditions at infinity of \(\theta _i\) according to (2.6). Assume that \(\theta _1\le \theta _2\) in \(\Sigma \) and that \(\ell ^-_1>\ell ^+_2\). Assume also that there exist two open intervals \(J^+\), \(J^-\) where \(\theta \mapsto \sin (2\theta )\) is strictly increasing and so is \(\theta \mapsto \sin (\theta )\) if \(h>0\), and such that \(\ell ^\pm _2\in J^\pm \). Then, there exists \(\lambda \in {\mathbb {R}}\) such that \(\theta _1(\cdot +\lambda , \cdot )\equiv \theta _2\).

Proof

Let us first show that it is impossible to have \(\ell _2^+>\ell _1^+\) or \(\ell _2^->\ell _1^-\). To this aim we argue by contradiction.

Assume first that \(\ell _2^\pm >\ell _1^\pm \). Since also \(\ell ^-_1>\ell ^+_2\), there exists \(\lambda \in {\mathbb {R}}\) such that \(\theta _1(\cdot +\lambda , \cdot )\le \theta _2\) and \(\theta _1({\bar{x}} + \lambda , {\bar{y}})=\theta _2({\bar{x}}, {\bar{y}})\) for some point \(({\bar{x}}, {\bar{y}}) \in {\overline{\Sigma }}\). Thus, by Lemma 3.7, \(\theta _1(\cdot +\lambda , \cdot )\equiv \theta _2\), which contradicts our initial assumption \(\ell _2^\pm >\ell _1^\pm \).

Assume now that \(\ell _2^->\ell _1^-\) but \(\ell _2^+=\ell _1^+=:\ell ^+\). Owing to Lemma 3.4(b) and the fact that \(\ell ^+\in J^+\), we may choose \(t^+\) such that

$$\begin{aligned} \inf J^+< \inf _{\Sigma _{t^+}^+}\theta _2\le \sup _{\Sigma _{t^+}^+}\theta _2<\sup J^+. \end{aligned}$$
(3.35)

Set now

$$\begin{aligned} \lambda _0:=\inf \{\lambda \le 0:\, \theta _1(\cdot + \lambda , \cdot )\le \theta _{2}\}. \end{aligned}$$

Note that thanks to the assumption \(\ell _1^->\ell _2^+\) we have \(\lambda _0\in {\mathbb {R}}\). Moreover, clearly \(\theta _1(\cdot + \lambda _0, \cdot )\le \theta _{2}\) and thus, in particular, recalling (3.35), we have

$$\begin{aligned} \sup _{\Sigma _{ t^+}^+}\theta _1(\cdot + \lambda _0, \cdot )<\sup J^+. \end{aligned}$$
(3.36)

We claim that \(\theta _1(\cdot + \lambda _0, \cdot )\) and \(\theta _2\) coincide at some point in \({\overline{\Sigma }}\). Indeed if by contradiction \(\theta _1(\cdot + \lambda _0, \cdot ) < \theta _{2}\) everywhere, then, using also that \(\ell _2^->\ell _1^-\), we have \(\min _{{\overline{\Sigma }}^-_{t^+}}(\theta _2-\theta _1(\cdot + \lambda _0, \cdot ))>0\). By uniform continuity, recalling (3.36), we may find \(\varepsilon >0\) so small that

$$\begin{aligned} \min _{{\overline{\Sigma }}^-_{t^+}}(\theta _2-\theta _1(\cdot + \lambda _0-\varepsilon , \cdot ))>0 \quad \text {and}\quad \sup _{\Sigma _{t^+}^+}\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )<\sup J^+.\quad \quad \quad \end{aligned}$$
(3.37)

Recalling also (3.35), we are in a position to apply Lemma 3.6 to infer that \(\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )\le \theta _2\) in \(\Sigma _{t^+}^+\) and in turn, thanks to the first condition in (3.37), \(\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )\le \theta _2\) in \(\Sigma \). This contradicts the minimality of \(\lambda _0\). Therefore, \(\theta _1(\cdot + \lambda _0, \cdot )\) and \(\theta _2\) must coincide at some point in \({\overline{\Sigma }}\) and thus everywhere thanks to the Strong Maximum Principle. This again leads to a contradiction. The case where \(\ell _2^+>\ell _1^+\) but \(\ell _2^-=\ell _1^-\) is clearly analogous.

It remains to consider the case \(\ell ^\pm _1=\ell ^\pm _2\). In this case choose \(t^+\) as before. Arguing similarly as before and recalling that \(\ell ^-_2\in J^-\), we may also find \(t^-<t^+\) such that

$$\begin{aligned} \inf J^-< \inf _{\Sigma _{t^-}^-}\theta _2\le \sup _{\Sigma _{t^-}^-}\theta _2<\sup J^-. \end{aligned}$$
(3.38)

Let \(\lambda _0\) be as before. We are going to show that in this case \(\theta _1(\cdot + \lambda _0, \cdot )\) and \(\theta _2\) coincide at some point in \({\overline{\Sigma }}\) and thus everywhere by Lemma 3.7. Indeed otherwise

$$\begin{aligned} \min _{{\overline{\Sigma }}^+_{t^-}\cap {\overline{\Sigma }}^-_{t^+}}(\theta _2-\theta _1(\cdot + \lambda _0, \cdot ))>0. \end{aligned}$$

Then, recalling (3.36) and noticing also that \(\sup _{\Sigma _{ t^-}^-}\theta _1(\cdot + \lambda _0, \cdot )\le \sup _{\Sigma _{ t^-}^-}\theta _2<\sup J^-\) by (3.38), we may find \(\varepsilon >0\) so small that

$$\begin{aligned}{} & {} \min _{{\overline{\Sigma }}^+_{t^-}\cap {\overline{\Sigma }}^-_{t^+}}(\theta _2-\theta _1(\cdot + \lambda _0-\varepsilon , \cdot ))>0,\,\,\sup _{\Sigma _{t^-}^-}\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )<\sup J^-\text { and } \nonumber \\{} & {} \quad \sup _{\Sigma _{t^+}^+}\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )<\sup J^+. \end{aligned}$$
(3.39)

Taking into account also (3.35) and (3.38), we may apply Lemma 3.6 to infer that \(\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )\le \theta _2\) in \(\Sigma _{t^\pm }^\pm \) and in turn, thanks to the first condition in (3.39), \(\theta _1(\cdot + \lambda _0-\varepsilon , \cdot )\le \theta _2\) in \(\Sigma \). This contradicts the minimality of \(\lambda _0\) and the conclusion follows. \(\square \)

We are now ready to prove the main result of this section, showing that domain wall solutions in the sense of Definition 2.4 are unique up to horizontal translations and addition of integer multiples of \(\pi \), and coincide with the global minimizer constructed in Theorem 3.2, which is in turn unique.

Proof of Theorem 2.6

We only consider the case \(h=0\), the other one being analogous. We recall that by Lemma 3.4 we have \(\ell ^-, \ell ^+\in \frac{\pi }{2}{\mathbb {Z}}\); hence there are three possible cases: \(\ell ^--\ell ^+>\pi \), \(\ell ^--\ell ^+=\pi \), and \(\ell ^--\ell ^+=\frac{\pi }{2}\).

We start by showing that the first case cannot occur. Indeed, assume by contradiction that \(\ell ^--\ell ^+>\pi \) and recall that \({\tilde{\theta }}:=\theta +\pi \) is also a domain wall solution thanks to Remark 2.5(b). Moreover, \(\ell ^->\ell ^++\pi ={\tilde{\ell }}^+\). Then, arguing as at the beginning of the proof of Lemma 3.8, we may find \(\lambda \le 0\) such that \(\theta (\cdot +\lambda , \cdot )\) and \(\theta +\pi \) coincide at some point in \({\overline{\Sigma }}\) and thus everywhere by the Strong Maximum Principle (Lemma 3.7). This is clearly impossible.

Let us now assume \(\ell ^--\ell ^+\le \pi \). First of all note that since \(\ell ^+\in \frac{\pi }{2}{\mathbb {Z}}\), upon replacing \(\theta \) by \(\theta +k\pi \) for a suitable \(k\in {\mathbb {Z}}\), we may assume thanks to Remark 2.5(b) that either \(\ell ^+=0\) or \(\ell ^+=-\frac{\pi }{2}\). Let us consider first the case \(\ell ^+=0\) and thus \(\ell ^-\in \{\frac{\pi }{2}, \pi \}\). Note that by the Strong Maximum Principle (Lemma 3.7) we may easily infer that \(\theta <\theta (\cdot +\lambda , \cdot )+\pi \) for all \(\lambda \in {\mathbb {R}}\). Indeed, if not, it would be possible to find \(\lambda _0\in {\mathbb {R}}\) such that \(\theta \le \theta (\cdot +\lambda _0, \cdot )+\pi \), with the two functions coinciding at some point and therefore everywhere by Lemma 3.7, which is clearly impossible. In turn,

$$\begin{aligned} \theta \le \lim _{\lambda \rightarrow +\infty }\theta (\cdot +\lambda , \cdot )+\pi =\ell ^++\pi =\pi , \end{aligned}$$
(3.40)

and in fact the inequality is strict thanks to Lemma 3.7 and the fact that the constant function \(\pi \) is also a solution to (2.2).

Now recall that \(\theta _{min}\), the minimizer from Theorem 3.2, vanishes at \({x =} +\infty \) and converges to \(\pi \) at \({x = } -\infty \). In particular, thanks to Lemma 3.4 we have

$$\begin{aligned} \lim _{t\rightarrow -\infty }\Vert \theta _{min}-\pi \Vert _{L^\infty (\Sigma ^-_t)}=0\quad \text {and}\quad \lim _{t\rightarrow +\infty }\Vert \theta _{min}\Vert _{L^\infty (\Sigma ^+_t)}=0; \end{aligned}$$
(3.41)

moreover, \(0<\theta _{min}<\pi \) in \({\overline{\Sigma }}\). Thus, we may find \(t^-<t^+\) such that

$$\begin{aligned} \frac{3}{4}\pi<\theta _{min}<\pi \quad \text {in }{\overline{\Sigma }}^-_{t^-}\qquad \text {and}\qquad 0<\theta _{min}<\frac{\pi }{4}\quad \text {in }{\overline{\Sigma }}^+_{t^+}. \end{aligned}$$
(3.42)

Clearly, we also have that

$$\begin{aligned} m:=\min _{{\overline{\Sigma }}^-_{t^+}}\theta _{min}>0. \end{aligned}$$
(3.43)

Since by Lemma 3.4 we also have

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert \theta \Vert _{L^\infty (\Sigma ^+_t)}=0, \end{aligned}$$

we may now find \(\lambda >0\) so large that

$$\begin{aligned} -\frac{\pi }{4}< -m<\theta (\cdot +\lambda , \cdot )<m<\frac{\pi }{4} \quad \text {in } {\overline{\Sigma }}^+_{t^-}, \end{aligned}$$
(3.44)

where m is the constant in (3.43). We claim that

$$\begin{aligned} \theta (\cdot +\lambda , \cdot )\le \theta _{min} \quad \text {in }\Sigma . \end{aligned}$$
(3.45)

Indeed, (3.43) and (3.44) imply that the inequality holds in \(\Sigma ^+_{t^-}\cap \Sigma ^-_{t^+}\). It remains to show that the inequality \(\theta (\cdot +\lambda , \cdot )\le \theta _{min} \) holds also in \(\Sigma ^\pm _{t^\pm }\). Let us start with \(\Sigma ^+_{t^+}\). Recall that \(\theta (\cdot +\lambda , \cdot )< \theta _{min}\) on \(\{(x,y):x=t^+\}\cap {\overline{\Sigma }}\) thanks to (3.43) and (3.44). Note also that (3.44) implies \(\sup _{\Sigma ^+_{t^+}}\theta (\cdot +\lambda , \cdot )<\frac{\pi }{4}\). As clearly \(\inf _{\Sigma ^+_{t^+}}\theta _{min}=0\), we may apply Lemma 3.6 with \(\theta _1= \theta (\cdot +\lambda , \cdot )\), \(\theta _2= \theta _{min}\), \(J=(-\frac{\pi }{4}, \frac{\pi }{4})\), to infer \(\theta (\cdot +\lambda , \cdot )\le \theta _{min} \) in \(\Sigma ^+_{t^+}\). Concerning \(\Sigma ^-_{t^-}\), observe that \(\sup _{\Sigma ^-_{t^-}}\theta (\cdot +\lambda , \cdot )\le \pi \) and \(\inf _{\Sigma ^-_{t^-}}\theta _{min}>\frac{3}{4}\pi \) by (3.40) and (3.42), respectively. Moreover, \(\theta (\cdot +\lambda , \cdot )< \theta _{min}\) on \(\{(x,y):x=t^-\}\cap {\overline{\Sigma }}\) thanks to (3.43) and (3.44). Thus, we may apply again Lemma 3.6 with \(\theta _1\), \(\theta _2\) as before and \(J=(\frac{3}{4}\pi , \frac{5}{4}\pi )\) to conclude that the inequality holds also in \(\Sigma ^-_{t^-}\) and thus (3.45) is proven.

We are now in a position to apply Lemma 3.8 to deduce that there exists \({\bar{\lambda }}\in {\mathbb {R}}\) such that \(\theta (\cdot +{\bar{\lambda }}, \cdot )= \theta _{min}\) in \(\Sigma \).

Finally, the case \(\ell ^+=-\frac{\pi }{2}\) can be dealt with similarly by finding \(\lambda >0\) such that (3.45) holds and then by applying Lemma 3.8 to conclude. The argument showing the existence of such a \(\lambda \) is similar to the one above, and in fact easier, as we may take advantage of the fact that both limits at \(x = \pm \infty \) of \(\theta (x, \cdot )\) are strictly smaller than the corresponding limits of \(\theta _{min}\). The details are left to the reader. \(\square \)

We now collect several corollaries. The first one is an immediate consequence of Theorems 3.2 and 2.6.

Corollary 3.9

The minimum problem (2.5) with \(k \in {\mathbb {N}}\) (see also Remark 2.2) admits a solution if and only if \(k=1\) in the case \(h=0\), and if and only if \(k=2\) in the case \(h>0\). Moreover, the solution is unique and coincides, up to a translation in the x-direction, with the function \(\theta _{min}\) provided by Theorem 3.2.

Setting \({\check{\theta }}_{min}(x,y):=\theta _{min}(-x,y)\), the previous corollary yields immediately the following result:

Corollary 3.10

Any minimizer m of (2.17) coincides, up to a translation in the x-direction, with either \((\cos \theta _{min}, \sin \theta _{min})\), or \((\cos \theta _{min}, -\sin \theta _{min})\), or \((\cos {\check{\theta }}_{min}, \sin {\check{\theta }}_{min})\), or \((\cos {\check{\theta }}_{min}, -\sin {\check{\theta }}_{min})\).

The next corollary deals with symmetry and decay properties of the domain wall profile \(\theta _{min}\).

Corollary 3.11

In addition to the properties stated in Theorem 3.2, the profile \(\theta _{min}\) minimizing (2.5) with \(k = 1\) for \(h = 0\), or \(k = 2\) for \(h > 0\), satisfies

  (a) (symmetry) \(\theta _{min}(x,y)=\theta _{min}(x,1-y)\) and \(\theta _{min}(x,y)=k\pi -\theta _{min}(-x,y)\) for all \((x, y) \in {\overline{\Sigma }}\);

  (b) (exponential decay at infinity) for every \(m\in {\mathbb {N}}\) there exist positive constants \(\alpha _m\), \(\beta _m\) such that

    $$\begin{aligned} \Vert \theta _{min}-k\pi \Vert _{C^m({\overline{\Sigma }}^-_{-t})}\le \alpha _m \textrm{e}^{-\beta _m t}\quad \text {and}\quad \Vert \theta _{min}\Vert _{C^m({\overline{\Sigma }} _t^+)}\le \alpha _m \textrm{e}^{-\beta _m t} \end{aligned}$$

    for all \(t>0\) sufficiently large.

Proof

Observing that \(\theta _{min}(\cdot , 1-\cdot )\) is still a domain wall solution satisfying the normalization condition \(\int _0^1\theta _{min}(0,y)\,\hbox {d}y=\frac{k\pi }{2}\), the first symmetry property follows at once from the uniqueness result of Theorem 2.6. The second symmetry property is proven in a similar way, observing that \(k\pi -\theta _{min}(-\cdot ,\cdot )\) is also a domain wall solution satisfying the same normalization condition. This concludes the proof of part a) of the corollary.

In order to prove the second part, we employ a barrier argument. Clearly, by the symmetry property established in part a) it is enough to show the exponential decay as \(x \rightarrow +\infty \). To this aim, we fix \(\varepsilon _0>0\) so small that

$$\begin{aligned} \sin (2\theta )\ge \theta \quad \text {for all }\theta \in (0, \varepsilon _0), \end{aligned}$$
(3.46)

and choose \({\bar{t}}>0\) so large that

$$\begin{aligned} 0<\theta _{min}<\varepsilon _0\quad \text {in }{\overline{\Sigma }}^+_{{\bar{t}}}. \end{aligned}$$
(3.47)

Recall that this is possible due to the fact that \(\Vert \theta _{min}\Vert _{L^\infty (\Sigma _t^+)}\rightarrow 0\) as \(t\rightarrow +\infty \). We now define the barrier \(\theta ^+\) in \(\Sigma ^+_{{\bar{t}}}\) as

$$\begin{aligned} \theta ^+(x,y):=\varepsilon _0\psi (y)\textrm{e}^{-\alpha (x-{\bar{t}})}, \end{aligned}$$

where

$$\begin{aligned} \psi (y):=1+{\frac{1}{2} \gamma y (1 - y), } \end{aligned}$$

and \(\alpha =\alpha (\gamma )>0\) is a constant sufficiently small so that

$$\begin{aligned} \Delta \theta ^+(x,y)= \varepsilon _0 \textrm{e}^{-\alpha (x-{\bar{t}})}[\alpha ^2 \psi (y)- \gamma ]\le \varepsilon _0 \textrm{e}^{-\alpha (x-\bar{t})}\Big [\alpha ^2\Big (1+\frac{\gamma }{8}\Big ) - \gamma \Big ]<0\,. \end{aligned}$$

With such a choice of \(\alpha \), \(\theta ^+\) satisfies by construction

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \theta ^+<0 &{} \text {in } \Sigma ^+_{{\bar{t}}},\\ \partial _\nu \theta ^+=-\frac{\gamma }{2}\theta ^+ &{} \text {on } \partial \Sigma ^+_{{\bar{t}}}\cap \partial \Sigma ,\\ \theta ^+=\varepsilon _0\psi \ge \varepsilon _0 &{} \text {on }\Gamma _{{\bar{t}}}. \end{array}\right. } \end{aligned}$$
(3.48)

In particular,

$$\begin{aligned} \int _{\Sigma ^+_{{\bar{t}}}}\nabla \theta ^+\cdot \nabla \varphi \, \hbox {d}^2 r+\gamma \int _{\partial \Sigma ^+_{\bar{t}}\cap \partial \Sigma }\frac{\theta ^+}{2}\varphi \,\hbox {d}{\mathcal {H}}^1\ge 0 \end{aligned}$$
(3.49)

for all non-negative \(\varphi \in H^1(\Sigma ^+_{\bar{t}})\) with bounded support and vanishing on \(\Gamma _{{\bar{t}}}\). For any fixed \(\eta >0\), consider the test function \(\varphi _\eta :=(\theta _{min}-\theta ^+-\eta )^+\) defined in \(\Sigma ^+_{{\bar{t}}}\) and note that thanks to (3.47) and the last condition in (3.48), \(\varphi _\eta =0\) on \(\Gamma _{\bar{t}}\) so that it can be extended by 0 to the whole \(\Sigma \). Moreover, by the uniform convergence to 0 of \(\theta _{min}(x, \cdot )-\theta ^+\) as \(x \rightarrow +\infty \), we have that \(\varphi _\eta \) has bounded support in \({\overline{\Sigma }}^+_{{\bar{t}}}\). Plugging \(\varphi _\eta \) into (2.3), with \(\theta =\theta _{min}\), and also into (3.49), and subtracting the two resulting inequalities, we get

$$\begin{aligned}{} & {} \int _{\Sigma _{{\bar{t}}}^+}|\nabla \varphi _\eta |^2\, \hbox {d}^2 r + h\int _{\{\theta _{min}-\theta ^+>\eta \}\cap \Sigma ^+_{\bar{t}}}\sin (\theta _{min})\varphi _\eta \, \hbox {d}^2 r \\{} & {} \quad +\gamma \int _{\{\theta _{min}-\theta ^+>\eta \}\cap (\partial \Sigma ^+_{\bar{t}}\cap \partial \Sigma )} \left( \sin (2\theta _{min})-\frac{\theta ^+}{2} \right) \varphi _\eta \, \hbox {d}{\mathcal {H}}^1\le 0. \end{aligned}$$

Note that both \(\sin (\theta _{min})\) and \(\sin (2\theta _{min})-\frac{\theta ^+}{2}\) are strictly positive in \(\{\theta _{min}-\theta ^+>\eta \} \cap \Sigma ^+_{{\bar{t}}} \) (if nonempty), thanks to (3.46) and (3.47). Thus for the above integral inequality to hold it is necessary that \(\nabla \varphi _\eta \equiv 0\) in \(\Sigma _{{\bar{t}}}^+\) and that the sets \(\{\theta _{min}-\theta ^+>\eta \}\cap \Sigma ^+_{{\bar{t}}}\) and \(\{\theta _{min}-\theta ^+>\eta \}\cap (\partial \Sigma ^+_{{\bar{t}}}\cap \partial \Sigma )\) have vanishing measures. Thus, \(\varphi _\eta \equiv 0\), that is, \(\theta _{\min }-\theta ^+\le \eta \) in \(\Sigma _{{\bar{t}}}^+\). From the arbitrariness of \(\eta \), we may conclude that \(\theta _{\min }\le \theta ^+\) in \(\Sigma _{{\bar{t}}}^+\) and thus

$$\begin{aligned} \Vert \theta _{min}\Vert _{L^\infty (\Sigma ^+_t)}\le \varepsilon _0\Big (1+\frac{\gamma }{8}\Big ) \textrm{e}^{\alpha {\bar{t}}} \textrm{e}^{-\alpha t}. \end{aligned}$$
(3.50)

for \(t>{\bar{t}}\). The exponential decay with respect to any \(C^m\)-norm follows now from (3.50) by an interpolation argument, taking into account that by Lemma 3.4(a) for every \(m \in {\mathbb {N}}\) there exists a constant \(C_m>0\) such that \(\Vert \theta _{min}\Vert _{C^m ({\overline{\Sigma }}^+_{{\bar{t}}})}\le C_m\). \(\square \)

Proof of Theorem 2.3

Finally, combining the results of Corollary 3.9 and Corollary 3.11 yields the conclusion of Theorem 2.3. \(\square \)

3.3 Limiting regimes

We now turn to the analysis of the minimizers of F for \(h = 0\) in the two extreme regimes of \(\gamma \) covered by Theorem 2.7.

Proof of item a) of Theorem 2.7

We show that as \(\gamma \rightarrow 0\) we have \(\theta _{min,\gamma }(x /\sqrt{\gamma },y) \rightarrow \pi - 2 \arctan (\hbox {e}^{2x})\) locally uniformly in \({(x, y) \in } \Sigma \). Rescaling the x coordinate as \({\tilde{x}} = \sqrt{\gamma } x\) and defining \({\tilde{\theta }}({\tilde{x}}, y):= \theta (x,y)\), we obtain

$$\begin{aligned} \begin{aligned} {\tilde{F}}_\gamma ({\tilde{\theta }}):= \frac{1}{\sqrt{\gamma }} \, F(\theta )&=\frac{1}{2} {\int _0^1 \int _{\mathbb {R}}} \left( |\partial _{{\tilde{x}}} {\tilde{\theta }}|^2 + \frac{1}{\gamma } |\partial _{y} {\tilde{\theta }}|^2 \right) \text{ d } {\tilde{x}}\, \text{ d } y \\ {}&\quad + \int _{\mathbb {R}}\left( \sin ^2{\tilde{\theta }}({\tilde{x}},0) + \sin ^2{\tilde{\theta }}({\tilde{x}},1) \right) \text{ d }{\tilde{x}}\,. \end{aligned} \end{aligned}$$
(3.51)

For \({\bar{\theta }} \in H^1_{loc}({\mathbb {R}})\), we can also define \({G({\bar{\theta }})}\) as

$$\begin{aligned} G ({\bar{\theta }}) {:=} \int _{\mathbb {R}}\left( \frac{1}{2} |{{\bar{\theta }}}'|^2 +2 \sin ^2 {({\bar{\theta }})} \right) \hbox {d}x. \end{aligned}$$

Notice that if \({\tilde{\theta }}(x, y) = {\bar{\theta }}(x)\), then \({\tilde{F}}_\gamma ({\tilde{\theta }}) = G({\bar{\theta }})\). Therefore, if \(\theta _{min, \gamma }\) is a minimizer of the energy \(F(\theta )\) for a fixed \({\gamma > 0}\) and \(\theta _{min, \gamma }(0,\cdot )=\frac{\pi }{2}\) then it is clear that \(\tilde{F}_\gamma ({\tilde{\theta }}_{min, \gamma })\) is bounded independently of \(\gamma \). This implies that \({|\nabla {\tilde{\theta }}_{min, \gamma }|}\) is bounded in \({L^2} (\Sigma )\), and \(\partial _y \tilde{\theta }_{min, \gamma } \rightarrow 0\) in \(L^2(\Sigma )\) as \(\gamma \rightarrow 0\). It follows that there is a subsequence (not relabelled) such that \({\tilde{\theta }}_{min, \gamma } \rightharpoonup \theta _*\) weakly in \(H^1_{l}(\Sigma )\) and \({\tilde{\theta }}_{min, \gamma } \rightarrow \theta _*\) in \(L^2_{loc}(\partial \Sigma )\) (see, e.g., [1]) with \(\theta _* (x,y) = {\bar{\theta }}_* (x)\) for some \({\bar{\theta }}_* \in H^1_{loc}({\mathbb {R}})\).

We observe that \({\bar{\theta }}_*\) is a minimizer of the energy G in the class

$$\begin{aligned} {{\mathcal {A}}_1^{1d}}:= \left\{ {\bar{\theta }}\in H^1_{loc}({\mathbb {R}}):\, |{\bar{\theta }}'|\in L^2({\mathbb {R}}),\ \lim _{x\rightarrow +\infty } {\bar{\theta }}(x)=0, \ \lim _{x\rightarrow -\infty } {\bar{\theta }}(x)=\pi , \ {\bar{\theta }}(0)=\frac{\pi }{2} \right\} . \end{aligned}$$

Indeed, for any \({\bar{\theta }} \in {\mathcal {A}}_1^{1d}\) and \(\theta (x,y) ={\bar{\theta }}(x)\) we have \(\theta \in {\mathcal {A}}_1\) and

$$\begin{aligned} G({\bar{\theta }})= \liminf _{\gamma \rightarrow 0} {\tilde{F}}_\gamma (\theta ) \ge \liminf _{\gamma \rightarrow 0} {\tilde{F}}_\gamma ({\tilde{\theta }}_{min,\gamma }) \ge G({\bar{\theta }}_*). \end{aligned}$$

Therefore

$$\begin{aligned} {\bar{\theta }}_* (x) = \pi - 2 \arctan (\hbox {e}^{2x}) \end{aligned}$$

is the unique minimizer of G in \({{\mathcal {A}}_1^{1d}}\), so that \({\bar{\theta }}_*\) coincides with it, and we deduce that \({\tilde{\theta }}_{min, \gamma } \rightarrow \theta _*\) in \(H^1_l(\Sigma )\) for the whole sequence.
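For illustration only, the identification of the limiting profile can be double-checked symbolically: the sketch below (Python/sympy) verifies that \(\pi - 2 \arctan (\hbox {e}^{2x})\) solves the Euler-Lagrange equation \({\bar{\theta }}''=2\sin (2{\bar{\theta }})\) of G and evaluates \(G({\bar{\theta }}_*)\) numerically. Nothing in it is used in the argument.

```python
# Illustrative only: check that the limiting one-dimensional profile solves the
# Euler-Lagrange equation theta'' = 2*sin(2*theta) of G, and evaluate G(theta_*).
import sympy as sp

x = sp.symbols('x', real=True)
theta_star = sp.pi - 2 * sp.atan(sp.exp(2 * x))

residual = sp.diff(theta_star, x, 2) - 2 * sp.sin(2 * theta_star)
print(sp.simplify(sp.expand_trig(residual)))       # expected: 0
print(sp.N(residual.subs(x, sp.Rational(3, 10))))  # numerical spot check, expected ~ 0

density = sp.Rational(1, 2) * sp.diff(theta_star, x) ** 2 + 2 * sp.sin(theta_star) ** 2
print(sp.Integral(density, (x, -sp.oo, sp.oo)).evalf())  # finite energy, approx. 4
```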

Finally, we note that by the strong convergence of \(\tilde{\theta }_{min,\gamma }\) to \(\theta _*\) in \(L^2_{loc}(\partial \Sigma )\), monotonicity of \({\tilde{\theta }}_{min, \gamma }(\cdot , 0)\), and continuity and decay at infinity of \(\theta _*\) we also have that \({\tilde{\theta }}_{min,\gamma } \rightarrow \theta _*\) uniformly in \(\partial \Sigma \). Therefore, since \(\theta _{min,\gamma }\) is harmonic in \(\Sigma \), with the help of the representation

$$\begin{aligned} {\tilde{\theta }}_{min,\gamma } (x, y) = \int _{-\infty }^\infty P_\gamma (x - x', y) \, {\tilde{\theta }}_{min,\gamma } (x', 0) \, \hbox {d}x' \end{aligned}$$

from (A.6), where \(P_\gamma (x, y):= \gamma ^{-1/2} P(\gamma ^{-1/2} x, y)\) and \(P(x, y)\) is the Poisson kernel given in (A.7), the assertion easily follows by observing that \(\Vert P_\gamma (\cdot , y) \Vert _{L^1({\mathbb {R}})} = 1\) and \(P_\gamma (\cdot , y)\) approaches a Dirac delta-function for every \(y \in (0,1)\) as \(\gamma \rightarrow 0\), together with uniform bounds on the derivatives of

$$\begin{aligned} \theta _{*, \gamma } (x, y) := \int _{-\infty }^\infty P_\gamma (x - x', y) \, {\bar{\theta }}_* (x') \, \hbox {d}x' \end{aligned}$$

away from \(\partial \Sigma \). \(\square \)

Proof of item b) of Theorem 2.7

We first show that as \(\gamma \rightarrow \infty \), we have \(\theta _{min,\gamma }(x, 0) \rightarrow {\bar{\theta }}_0(x)\) for all \(x \in {\mathbb {R}}\), where

$$\begin{aligned} {\bar{\theta }}_0(x) := {\left\{ \begin{array}{ll} \pi , &{} x < 0, \\ {\pi \over 2}, &{} x = 0, \\ 0, &{} x > 0. \end{array}\right. } \end{aligned}$$
(3.52)

To see this, for \(0 \le \varepsilon < \frac{1}{2}\) consider a test function

$$\begin{aligned} \theta _\varepsilon (x, y) := {\pi \over 2} - \arctan \left( {\sinh (\pi (1 - 2 \varepsilon ) x) \over \sin (\pi ((1 - 2 \varepsilon ) y + \varepsilon ))} \right) \end{aligned}$$
(3.53)

Notice that \(\theta _\varepsilon \in C^\infty ({\overline{\Sigma }})\) for all \(0< \varepsilon < \frac{1}{2}\) and is harmonic in \(\Sigma \). Furthermore, in this range of \(\varepsilon \) we have \(\theta _\varepsilon (x, \cdot ) \rightarrow 0\) exponentially as \(x \rightarrow +\infty \) together with all its derivatives, and \(\theta _\varepsilon (x, \cdot ) \rightarrow \pi \) exponentially as \(x \rightarrow -\infty \). In particular, \(\theta _\varepsilon \in {\mathcal {A}}_1\) for \(0< \varepsilon < \frac{1}{2}\), and using the symmetry of \(\theta _\varepsilon \) with respect to \(y = \frac{1}{2}\) we have

$$\begin{aligned} F(\theta _\varepsilon )&= \int _{-\infty }^\infty \int _0^{1/2} |\nabla \theta _\varepsilon (x, y)|^2 \hbox {d}y \, \hbox {d}x + 2 \gamma \int _{-\infty }^\infty \sin ^2 \theta _\varepsilon (x, 0) \, \hbox {d}x \nonumber \\&= \int _{-\infty }^\infty \left( \frac{\pi }{2} - \theta _\varepsilon (x, 0) \right) \partial _y \theta _\varepsilon (x, 0) \, \hbox {d}x + 2 \gamma \int _{-\infty }^\infty \sin ^2 \theta _\varepsilon (x, 0) \, \hbox {d}x, \end{aligned}$$
(3.54)

where to go to the second line we integrated by parts.

By an explicit computation we get

$$\begin{aligned} \theta _\varepsilon (x, 0)&= {\pi \over 2} - \arctan \left( {\sinh (\pi (1 - 2 \varepsilon ) x) \over \sin (\pi \varepsilon )} \right) , \end{aligned}$$
(3.55)
$$\begin{aligned} \partial _y \theta _\varepsilon (x, 0)&= -\frac{2 \pi (1-2 \varepsilon ) \cos (\pi \varepsilon ) \sinh (\pi (1-2 \varepsilon ) x)}{\cos (2 \pi \varepsilon )-\cosh (2 \pi (1-2 \varepsilon ) x)}, \end{aligned}$$
(3.56)
$$\begin{aligned} \sin ^2 \theta _\varepsilon (x, 0)&= \frac{1}{\csc ^2(\pi \varepsilon ) \sinh ^2(\pi (1-2 \varepsilon ) x)+1} . \end{aligned}$$
(3.57)

In particular, as \(\varepsilon \rightarrow 0\), it holds that

$$\begin{aligned} \theta _\varepsilon (\varepsilon x, 0)&\simeq {\pi \over 2} - \arctan x, \end{aligned}$$
(3.58)
$$\begin{aligned} \varepsilon \partial _y \theta _\varepsilon (\varepsilon x, 0)&\simeq {x \over 1 + x^2}, \end{aligned}$$
(3.59)
$$\begin{aligned} \sin ^2 \theta _\varepsilon (\varepsilon x, 0)&\simeq \frac{1}{1 + x^2} . \end{aligned}$$
(3.60)
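For the reader's convenience, we indicate heuristically where the two terms of the expansion below come from. Substituting \(x=\varepsilon \xi \) and using (3.58)-(3.60), the Dirichlet contribution in (3.54) behaves like

$$\begin{aligned} \int _{\{|\xi |\lesssim 1/\varepsilon \}} \arctan (|\xi |)\, \frac{|\xi |}{1+\xi ^2} \, \hbox {d}\xi \simeq \pi \log \varepsilon ^{-1}, \end{aligned}$$

since the integrand behaves like \(\frac{\pi }{2|\xi |}\) for \(|\xi |\gg 1\) and the effective cutoff at \(|\xi |\sim 1/\varepsilon \) reflects the exponential decay of \(\theta _\varepsilon (\cdot ,0)\) on the scale \(|x|\sim 1\), while the boundary contribution behaves like \(2\gamma \varepsilon \int _{{\mathbb {R}}}\frac{\hbox {d}\xi }{1+\xi ^2}=2\pi \gamma \varepsilon \).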

Therefore, by standard asymptotic techniques for integrals, we obtain, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} F(\theta _\varepsilon ) \simeq \pi \log \varepsilon ^{-1} + 2 \pi \gamma \varepsilon , \end{aligned}$$
(3.61)

and choosing \(\varepsilon = \gamma ^{-1}\) yields

$$\begin{aligned} 2 \gamma \int _{-\infty }^\infty \sin ^2 \theta _{min,\gamma }(x, 0) \, \hbox {d}x \le F(\theta _{min,\gamma }) \le F(\theta _\varepsilon ) \le 2 \pi \log \gamma \end{aligned}$$
(3.62)

for all \(\gamma \) sufficiently large. Thus in view of monotonicity of \(\theta _{min,\gamma }\) we have \(\theta _{min,\gamma }(x, 0) \rightarrow {\bar{\theta }}_0(x)\) for all \(x \in {\mathbb {R}}\) as \(\gamma \rightarrow \infty \). Furthermore, this convergence is locally uniform in \({{\overline{{\mathbb {R}}}}} {\setminus } \{0\}\).
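The asymptotics (3.61) can also be probed numerically. The Python sketch below (assuming NumPy and SciPy are available) evaluates \(F(\theta _\varepsilon )\) through the representation (3.54) and the explicit boundary formulas (3.55)-(3.57), and compares it with \(\pi \log \varepsilon ^{-1}+2\pi \gamma \varepsilon \); the truncation of the integration domain, the quadrature options and the function names are ad hoc choices, and the agreement is only to leading order in \(\log \varepsilon ^{-1}\) (the O(1) constant is not captured by (3.61)).

```python
# Illustrative only: a numerical sanity check of the asymptotics (3.61) for the
# test profile theta_eps, using (3.54)-(3.57). The cutoff X and the quadrature
# break points are ad hoc; agreement with (3.61) is only at leading order.
import numpy as np
from scipy.integrate import quad

def F_test(eps, gamma, X=30.0):
    a = np.pi * (1.0 - 2.0 * eps)
    s = np.sin(np.pi * eps)

    def dirichlet_integrand(x):
        # (pi/2 - theta_eps(x,0)) * d_y theta_eps(x,0), cf. (3.55)-(3.56); even in x
        half = np.arctan(np.sinh(a * x) / s)
        dyt = (-2.0 * a * np.cos(np.pi * eps) * np.sinh(a * x)
               / (np.cos(2.0 * np.pi * eps) - np.cosh(2.0 * a * x)))
        return half * dyt

    def sin2(x):
        # sin^2 theta_eps(x,0), cf. (3.57); even in x
        return 1.0 / (1.0 + (np.sinh(a * x) / s) ** 2)

    pts = [eps, 10 * eps, 100 * eps, 1.0]
    d = 2.0 * quad(dirichlet_integrand, 0.0, X, points=pts, limit=400)[0]
    b = 2.0 * 2.0 * gamma * quad(sin2, 0.0, X, points=pts, limit=400)[0]
    return d + b

for eps in (1e-2, 1e-3):
    gamma = 1.0 / eps   # the choice made right after (3.61)
    print(eps, F_test(eps, gamma), np.pi * np.log(1.0 / eps) + 2.0 * np.pi * gamma * eps)
```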

Notice that \(\theta _0\) defined in (3.53) is the harmonic extension of \({\bar{\theta }}_0\) from \(\partial \Sigma \) to \(\Sigma \). Furthermore, by direct computation

$$\begin{aligned} \theta _0(x, y) = \int _{-\infty }^\infty P(x - x', y) \, {\bar{\theta }}_0(x') \, \hbox {d}x', \end{aligned}$$
(3.63)

where \(P(x, y)\) is the Poisson kernel defined in (A.7). Notice that \(P(x, y) \simeq {y \over \pi (x^2 + y^2)}\) for \(|x|, |y| \ll 1\), and \(P(\cdot , y)\) decays exponentially at infinity for all \(y \in (0,1)\). Therefore, by the representation

$$\begin{aligned} \theta _{min,\gamma }(x, y) = \int _{-\infty }^\infty P(x - x', y) \, \theta _{min,\gamma }(x', 0) \, \hbox {d}x' \end{aligned}$$
(3.64)

from (A.6) and locally uniform convergence of \(\theta _{min,\gamma }(x, 0)\) to \({\bar{\theta }}_0(x)\) in \({{\overline{{\mathbb {R}}}}} {\setminus } \{0\}\), we conclude that \(\theta _{min,\gamma } \rightarrow \theta _0\) locally uniformly in \(\Sigma \) as \(\gamma \rightarrow \infty \). \(\square \)

4 Analysis of the reduced two-dimensional micromagnetic model

We now turn to the analysis of the relationship between the minimizers of the reduced micromagnetic model introduced in (2.10) and those of the thin film limit model in (2.16). In what follows it is understood that both \(E_\varepsilon \) and \(E_0\) are defined for any function in \(L^2_{loc}(\Sigma ; {\mathbb {S}}^1)\) simply by setting them equal to \(+\infty \) outside \({\mathfrak {M}}\) and \(H^1_{l}(\Sigma ; {\mathbb {S}}^1)\), respectively. Note that \(\{m\in L^2_{loc}(\Sigma ; {\mathbb {S}}^1):\, E_\varepsilon (m)<+\infty \}\) is a strict subset of \(H^1_{l}(\Sigma ; {\mathbb {S}}^1)\), and the same is true for \(E_0\).

In what follows, unless otherwise specified, C denotes a positive constant that may depend only on \(\gamma \), h and \(\Vert \eta '\Vert _{\infty }\). We also denote by \({\mathscr {F}}(f)\) the Fourier transform of \(f \in L^2({\mathbb {R}}^2)\), defined as

$$\begin{aligned} {\mathscr {F}}(f)({\textbf{k}}) := \int _{{\mathbb {R}}^2} \hbox {e}^{-i {\textbf{k}} \cdot {\textbf{r}}} f({\textbf{r}}) \, \hbox {d}^2 r \end{aligned}$$
(4.1)

for \(f \in {L^1({\mathbb {R}}^2) \cap L^2({\mathbb {R}}^2)}\).

We start with several simple lemmas which will be useful in handling the unbounded domain \(\Sigma \). We provide proofs for the reader’s convenience. Recall that \([ w ]_{\mathring{H}^{1/2}({\mathbb {R}})}\) refers to the Gagliardo seminorm of w defined in (3.25).

Lemma 4.1

There exists \(C > 0\) such that, for all \(w \in {H}^1(\Sigma )\) and all \(y \in [0,1]\), it holds that

$$\begin{aligned} {[} w(\cdot , y)]^2_{\mathring{H}^{1/2}{({\mathbb {R}})}} + \Vert w(\cdot , y) \Vert _{L^2({\mathbb {R}})}^2&\le C (\Vert \nabla w \Vert _{L^2{(\Sigma )}}^2 + \Vert w \Vert _{L^2{(\Sigma )}}^2), \end{aligned}$$
(4.2)
$$\begin{aligned} \Vert w \Vert _{L^2(\Sigma )}^2&\le C (\Vert \nabla w \Vert _{L^2(\Sigma )}^2+\Vert w(\cdot , y) \Vert ^2_{L^2({\mathbb {R}})}), \end{aligned}$$
(4.3)

where \(w(\cdot , y)\) is understood in the sense of trace.

Proof

By a reflection with respect to the lines \(y = 0\) and \(y = 1\) followed by a multiplication by a smooth cutoff function \(\phi (y)\) that vanishes outside \([-2,2]\), we may extend w to a function \({\tilde{w}} \in H^1({\mathbb {R}}^2)\) such that \(w = {\tilde{w}}\) in \(\Sigma \) and \(\Vert {\tilde{w}} \Vert _{H^1({\mathbb {R}}^2)} \le C \Vert w \Vert _{H^1(\Sigma )}\) for some universal \(C > 0\). Therefore, by a density argument we may assume that \(w \in C^\infty _c({\mathbb {R}}^2)\) throughout the rest of the proof.

To prove (4.2), without loss of generality we may assume that \(y = 0\). Letting \({\hat{w}}:= {\mathscr {F}}(w)\) and using the Fourier inversion formula, we get

$$\begin{aligned} w(x,0)= & {} \frac{1}{(2\pi )^2} \int _{{\mathbb {R}}^2} \hbox {e}^{i k_1 x } {\hat{w}}(k_1, k_2) \, dk_1 \, dk_2 \\= & {} \frac{1}{(2\pi )^2} \int _{\mathbb {R}}\hbox {e}^{i k_1 x}\left( \int _{\mathbb {R}}{\hat{w}}(k_1, k_2) \, dk_2 \right) \, dk_1. \end{aligned}$$

Therefore, the one-dimensional Fourier transform \({\hat{v}}(k)\) of \(v(x):= w(x, 0)\) equals

$$\begin{aligned} {\hat{v}}(k):= \int _{\mathbb {R}}\hbox {e}^{-i k x} w(x, 0) \, \hbox {d}x = \frac{1}{2\pi } \int _{\mathbb {R}}{\hat{w}}(k, s) \, \hbox {d}s. \end{aligned}$$

Using the Cauchy–Schwarz inequality, we thus obtain

$$\begin{aligned} \left| {\hat{v}}(k) \right| ^2= & {} \frac{1}{(2\pi )^2} \left| \int _{\mathbb {R}}{\hat{w}}(k, s)\, \hbox {d}s \right| ^2 \\\le & {} \frac{1}{(2\pi )^2} \int _{\mathbb {R}}\frac{\hbox {d}s}{1+ k^2 + s^2} \int _{\mathbb {R}}|{\hat{w}}(k, s)|^2 (1+ k^2 + s^2) \, \hbox {d}s. \end{aligned}$$

In turn, using the fact that \( \int _{\mathbb {R}}\frac{\hbox {d}s}{1+ k^2 + s^2} = \frac{\pi }{\sqrt{1+ k^2}}\) we deduce that

$$\begin{aligned} (1+ |k|)\left| {\hat{v}}(k) \right| ^2 \le 2 \sqrt{1+ k^2}\left| {\hat{v}}(k) \right| ^2 \le \frac{1}{2\pi } \int _{\mathbb {R}}|{\hat{w}}(k, s)|^2 (1+ k^2 + s^2) \, \hbox {d}s. \end{aligned}$$

Finally, integrating the above inequality in k and using the Fourier representations of the \(H^1({\mathbb {R}}^2)\) and \(H^{1/2}({\mathbb {R}})\) norms [46] we obtain the desired inequality.
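
For the reader’s convenience, the elementary identity invoked above follows from the substitution \(s = t\sqrt{1+k^2}\):

$$\begin{aligned} \int _{\mathbb {R}}\frac{\hbox {d}s}{1+ k^2 + s^2} = \frac{1}{\sqrt{1+k^2}} \int _{\mathbb {R}}\frac{\hbox {d}t}{1+ t^2} = \frac{\pi }{\sqrt{1+ k^2}}. \end{aligned}$$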

We now turn to (4.3). By Young’s and Jensen’s inequalities, for every \(x \in {\mathbb {R}}\) and \(y' \in [0,1]\) we have

$$\begin{aligned} |w(x,y')|^2 = \left| w(x,y) + \int _y^{y'} \partial _s w(x,s)\, \hbox {d}s \right| ^2 \le 2 |w(x,y)|^2 + 2 \int _0^1 |\partial _s w(x,s)|^2 \, \hbox {d}s. \end{aligned}$$

Therefore, integrating over x and \(y'\) yields (4.3). \(\square \)

Lemma 4.2

For any \(a, b>0\) we have

$$\begin{aligned} \int _0^{\infty }\frac{\textrm{e}^{-a\sqrt{x^2+b^2}}}{\sqrt{x^2+b^2}}\, \textrm{d}x=K_0(ab), \end{aligned}$$

where \(K_0(z)\) is the modified Bessel function of the second kind of order zero.

Proof

The identity follows from the integral representation [25, 8.432-1] of \(K_0(z)\) by the change of variable \(x=b\sinh t\). \(\square \)
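
A minimal numerical sanity check of this identity for a sample choice of the parameters (assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

a, b = 0.7, 1.3   # arbitrary positive parameters
lhs, _ = quad(lambda x: np.exp(-a*np.sqrt(x**2 + b**2)) / np.sqrt(x**2 + b**2),
              0, np.inf)
print(lhs, k0(a*b))   # the two values agree to quadrature accuracy
```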

Lemma 4.3

For any \(a>0\) we have

$$\begin{aligned} {\mathscr {F}}\Big (\frac{\hbox {e}^{-a|{\textbf{r}}|}}{2 \pi |\textbf{r}|}\Big )({\textbf{k}})=\frac{1}{\sqrt{a^2+|{\textbf{k}}|^2}}. \end{aligned}$$

Proof

Denoting by \(J_0(z)\) the Bessel function of the first kind of order zero, recall that for every \(t\in {\mathbb {R}}\) we have

$$\begin{aligned} J_0(t)=\frac{1}{2\pi }\int _0^{2\pi }\hbox {e}^{-i t \cos \theta }\, \hbox {d}\theta , \end{aligned}$$

see [25, 8.411]. Therefore,

$$\begin{aligned} {\mathscr {F}}\Big (\frac{\hbox {e}^{-a|{\textbf{r}}|}}{2 \pi |\textbf{r}|}\Big )({\textbf{k}})&=\frac{1}{2\pi }\int _0^{\infty } \left( \int _0^{2\pi }\hbox {e}^{-i r|{\textbf{k}}| \cos \theta }\, \hbox {d}\theta \right) \hbox {e}^{-ar}\,\hbox {d}r\\&=\int _0^{\infty }J_0(r|{\textbf{k}}|)\,\, \hbox {e}^{-ar}\, \hbox {d}r= \frac{1}{\sqrt{a^2+|{\textbf{k}}|^2}}\,, \end{aligned}$$

where the last equality follows from [25, 6.611-1]. \(\square \)
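
The final identity in the proof (the Laplace transform of \(J_0\)) can be checked numerically in the same spirit; the following sketch again assumes SciPy and uses sample values of the parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a, k = 0.5, 2.0   # arbitrary positive parameters
lhs, _ = quad(lambda r: j0(k*r) * np.exp(-a*r), 0, np.inf, limit=200)
print(lhs, 1.0/np.sqrt(a**2 + k**2))   # the two values agree
```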

We now proceed towards the proof of Theorem 2.9. We first establish the following result:

Proposition 4.4

There exists \(\varepsilon _0>0\) and \(C>0\) depending only on \(\Vert \eta '\Vert _\infty \) such that for all \(\varepsilon \in (0, \varepsilon _0)\) and \(m\in {\mathfrak {M}}\), the following inequality holds:

$$\begin{aligned} \begin{aligned} \frac{1}{|\ln \varepsilon |} {\int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big ({\text {div} (\eta _\varepsilon m)\big )}|^2}{2 \pi |{\textbf {k}}|} \, \text {d}^2 k}&\ge 2(1-\beta ) \Big (\int _{\mathbb {R}}m^2_{2} (x,0)\, \text {d}x + \int _{\mathbb {R}}m^2_{2} (x,1)\, \text {d}x\Big ) \\ {}&\quad - \frac{C}{\beta |\ln \varepsilon |}(\Vert \nabla m\Vert ^2_{L^2(\Sigma )} + \Vert m_{2}\Vert ^2_{L^2(\Sigma )}) \end{aligned} \end{aligned}$$
(4.4)

for all \(\beta \in (0, 1)\).

Proof

We first note that extending m by zero outside \(\Sigma \) we have \(m\eta _\varepsilon \in H^1_{loc} ({\mathbb {R}}^2; {\mathbb {R}}^2)\). Furthermore, due to our assumptions on m we get \(\text {div}\, (\eta _\varepsilon m) = \eta _\varepsilon \text {div} \, m + \eta _\varepsilon ' m_2 \in L^2 ({\mathbb {R}}^2) \) and, therefore, its Fourier transform makes sense in \(L^2({\mathbb {R}}^2)\) [46]. We next fix \(0<a \le 1\) to obtain

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big ({\text {div} (\eta _\varepsilon m)\big )}|^2}{|{{\textbf {k}}}|} \, {\text{ d}^2 k \over (2 \pi )^2} \ge \int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big ({\text {div} (\eta _\varepsilon m)\big )}|^2}{\sqrt{|{\textbf {k}}|^2 + a^2}} \, {\text{ d}^2 k \over (2 \pi )^2}. \end{aligned} \end{aligned}$$
(4.5)

Thus, using Lemma 4.3, we have [46, Theorem 5.8]

$$\begin{aligned} \begin{aligned} {\int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big ({\text {div} (\eta _\varepsilon m)\big )}|^2}{2 \pi |{\textbf {k}}|} \, \text{ d}^2 k} \ge \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2} \text {div} (\eta _\varepsilon m)({\textbf {r}}) \, \text {div} (\eta _\varepsilon m)({{\textbf {r}}}') \frac{\text{ e}^{-a |{{\textbf {r}}} - {{\textbf {r}}}'|}}{|{{\textbf {r}}} - {{\textbf {r}}}'|} \, \text{ d}^2 r \, \text{ d}^2 r'. \end{aligned}\nonumber \\ \end{aligned}$$
(4.6)

The above trick allows us to control the behavior of the expression under the integral at infinity and significantly simplifies the subsequent analysis of the magnetostatic energy, essentially reducing it to the analysis on compact domains.

We now define

$$\begin{aligned} {\mathcal {K}}_a({\textbf{r}}-\textbf{r}'):= \frac{\hbox {e}^{-a|{\textbf{r}}-{\textbf{r}} '|}}{| {\textbf{r}} - {\textbf{r}}'|} \end{aligned}$$
(4.7)

and proceed to write the integral in the right-hand side of (4.6) as

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2} \text {div} (\eta _\varepsilon m)({{\textbf {r}}}) \, \text {div} (\eta _\varepsilon m)({{\textbf {r}}}') {\mathcal {K}}_a({{\textbf {r}}} -{\textbf {r}}') \, \text{ d}^2 r \, \text{ d}^2 r' = I_1+2 I_2+I_3, \end{aligned} \end{aligned}$$
(4.8)

where

$$\begin{aligned} \begin{aligned} I_1&:= \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2}\eta _\varepsilon ({{{\textbf {r}}}}) \text {div} (m)({{\textbf {r}}}) \, \eta _\varepsilon ({{{\textbf {r}}}}') \text {div} (m)({\textbf {r}}') {\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}} ') \, \text{ d}^2 r \, \text{ d}^2 r', \\ I_2&:= \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2} \eta _\varepsilon ({{{\textbf {r}}}}) \text {div} (m)({{\textbf {r}}}) \, (\nabla \eta _\varepsilon \cdot m)({{\textbf {r}}}') {\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}} ') \, \text{ d}^2 r \, \text{ d}^2 r', \\ I_3&:= \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2} (\nabla \eta _\varepsilon \cdot m)({{\textbf {r}}}) \, (\nabla \eta _\varepsilon \cdot m)({{\textbf {r}}}') {\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}} ') \, \text{ d}^2 r \, \text{ d}^2 r'. \end{aligned} \end{aligned}$$
(4.9)

Using the Fourier representation and Young’s inequality, one can see that

$$\begin{aligned} -\frac{1}{\beta } I_1 - \beta I_3\le 2 I_2 \le \frac{1}{\beta } I_1 +\beta I_3, \end{aligned}$$

for any \(\beta >0\). Therefore, we have

$$\begin{aligned} \begin{aligned} (1-\beta ^{-1}) I_1+ (1-\beta ) I_3\le&{} \int _{{\mathbb {R}}^2} \int _{{\mathbb {R}}^2} \text {div} (\eta _\varepsilon m)({{\textbf {r}}}) \, \text {div} (\eta _\varepsilon m)({{\textbf {r}}}') {\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}} ') \, \text{ d}^2 r \, \text{ d}^2 r' \\\le&{} (1+\beta ^{-1}) I_1+ (1+\beta ) I_3. \end{aligned} \end{aligned}$$
(4.10)
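
For the reader’s convenience, one way to verify the bound on \(2 I_2\) used above is the following: by Lemma 4.3 the kernel \({\mathcal {K}}_a\) has the positive Fourier transform \({\mathscr {F}}({\mathcal {K}}_a)({\textbf{k}}) = 2\pi /\sqrt{a^2+|{\textbf{k}}|^2}\), so that, rewriting \(I_1\), \(I_2\), \(I_3\) in Fourier variables via Plancherel’s theorem, the Cauchy–Schwarz inequality with weight \({\mathscr {F}}({\mathcal {K}}_a)\) followed by Young’s inequality gives

$$\begin{aligned} 2 |I_2| \le \frac{2}{(2\pi )^2} \int _{{\mathbb {R}}^2} \frac{2\pi \, |{\mathscr {F}}(\eta _\varepsilon \, \text {div}\, m)|\, |{\mathscr {F}}(\nabla \eta _\varepsilon \cdot m)|}{\sqrt{a^2+|{\textbf{k}}|^2}} \, \hbox {d}^2 k \le 2 \sqrt{I_1 I_3} \le \beta ^{-1} I_1 + \beta I_3. \end{aligned}$$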

Using Young’s inequality for convolutions, we can estimate

$$\begin{aligned} \begin{aligned} I_1 \le \Vert {\mathcal {K}}_a\Vert _{L^1({\mathbb {R}}^2)}\Vert \text {div} \, m\Vert ^2_{L^2(\Sigma )} \le \frac{4\pi }{a} \Vert \nabla m\Vert ^2_{L^2(\Sigma )}. \end{aligned} \end{aligned}$$
(4.11)

In order to estimate \(I_3\) we write

$$\begin{aligned} I_3 = J_1+2J_2+J_3, \end{aligned}$$
(4.12)

where

$$\begin{aligned} \begin{aligned} J_1&:=\frac{1}{\varepsilon ^2} \int _{{\mathbb {R}}\times [0,\varepsilon ]}\int _{{\mathbb {R}}\times [0, \varepsilon ]} \eta '(y/\varepsilon ) m_2 ({\textbf{r}}) \, \eta '(y'/\varepsilon ) m_2({\textbf{r}}') {\mathcal {K}}_a({\textbf{r}} -{\textbf{r}} ') \, \hbox {d}^2 r \, \hbox {d}^2 r', \\ J_2&:= \frac{1}{\varepsilon ^2} \int _{{\mathbb {R}}\times [0,\varepsilon ]}\int _{{\mathbb {R}}\times [1-\varepsilon ,1]} \eta '(y/\varepsilon ) m_2 ({\textbf{r}}) \, \eta '(y'/\varepsilon ) m_2({\textbf{r}}') {\mathcal {K}}_a({\textbf{r}} -{\textbf{r}} ') \, \hbox {d}^2 r \, \hbox {d}^2 r', \\ J_3&:= \frac{1}{\varepsilon ^2} \int _{{\mathbb {R}}\times [1-\varepsilon ,1]}\int _{{\mathbb {R}}\times [1-\varepsilon ,1]} \eta '(y/\varepsilon ) m_2 ({\textbf{r}}) \, \eta '(y'/\varepsilon ) m_2({\textbf{r}}') {\mathcal {K}}_a({\textbf{r}} -{\textbf{r}} ') \, \hbox {d}^2 r \, \hbox {d}^2 r'. \end{aligned}\nonumber \\ \end{aligned}$$
(4.13)

We would like to show that \(J_2\) is negligible compared to \(J_1\) and \(J_3\). Using Young’s inequality for convolutions, it is straightforward to see that for \(\varepsilon \) sufficiently small

$$\begin{aligned} \begin{aligned} J_2&\le \frac{C}{\varepsilon ^2} \int _0^\varepsilon \int _{1-\varepsilon }^1\int _{{\mathbb {R}}}\int _{{\mathbb {R}}} |m_2 (x,y)| \, |m_2 (x', y')| \frac{\hbox {e}^{-a|x-x'|}}{\sqrt{|x-x'|^2 +1/2}} \, \hbox {d}x \, \hbox {d}x' \hbox {d}y \, \hbox {d}y'\, \\&\le \frac{C}{a\varepsilon ^2} \int _0^\varepsilon \Vert m_2 (\cdot , y) \Vert _{L^2({\mathbb {R}})} \, \hbox {d}y\int _{1-\varepsilon }^1\Vert m_2 (\cdot , y') \Vert _{L^2({\mathbb {R}})} \, \hbox {d}y'. \end{aligned} \end{aligned}$$
(4.14)

Hence, by Lemma 4.1, we have

$$\begin{aligned} J_2 \le \frac{C}{a} (\Vert \nabla m_2\Vert ^2_{L^2{(\Sigma )}} + \Vert m_2\Vert ^2_{L^2{(\Sigma )}} ). \end{aligned}$$
(4.15)

It is clear that the integrals \(J_1\) and \(J_3\) are similar. Therefore, we provide an estimate for \(J_1\) only. We write

$$\begin{aligned} \begin{aligned} J_1&=H_1+H_2\\ {}&:=\frac{1}{\varepsilon ^2} \int _{{\mathbb {R}}\times (0,\varepsilon )}\int _{{\mathbb {R}}\times (0, \varepsilon )} \eta '(y/\varepsilon ) m_2 (x,y) \, \eta '(y'/\varepsilon ) m_2(x,y') {\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}}') \, \, {\text{ d}^2 r \, \text{ d}^2 r'} \\ {}&+ \frac{1}{\varepsilon ^2}\int _{{\mathbb {R}}\times (0,\varepsilon )}\int _{{\mathbb {R}}\times (0, \varepsilon )} \eta '(y/\varepsilon ) m_2 (x,y) \, \eta '(y'/\varepsilon ) (m_2(x',y') \\&- m_2(x,y')){\mathcal {K}}_a({{\textbf {r}}} -{{\textbf {r}}}') \, {\text{ d}^2 r \, \text{ d}^2 r'}. \end{aligned} \end{aligned}$$
(4.16)

We now estimate \(H_2\) as follows:

$$\begin{aligned} H_2\le & {} \frac{C}{\varepsilon ^2} \int _0^\varepsilon \int _0^\varepsilon \int _{{\mathbb {R}}}\int _{{\mathbb {R}}} |m_2 (x,y)| {\hbox {e}^{-a|x-x'|}}\, \frac{ |m_2(x',y') - m_2(x,y')| }{|x-x'|}\, \hbox {d}x\, \hbox {d}x'\, \hbox {d}y\, \hbox {d}y'\nonumber \\\le & {} \frac{C}{\varepsilon ^2} \int _0^\varepsilon \int _0^\varepsilon \left( \int _{{\mathbb {R}}}\int _{{\mathbb {R}}} |m_2 (x,y)|^2 {\hbox {e}^{-2a|x-x'|}}\, \hbox {d}x\, \hbox {d}x'\right) ^{\frac{1}{2}} [m_2(\cdot , y')]_{{\mathring{H}}^{\frac{1}{2}}{({\mathbb {R}})}}\, \hbox {d}y\, \hbox {d}y'\nonumber \\\le & {} \frac{C}{\varepsilon ^2 {\sqrt{a}} } \int _0^\varepsilon \Vert m_2(\cdot , y)\Vert _{L^2{({\mathbb {R}})}}\, \hbox {d}y \int _0^\varepsilon [m_2(\cdot , y')]_{{\mathring{H}}^{\frac{1}{2}}{({\mathbb {R}})}}\, \hbox {d}y', \end{aligned}$$
(4.17)

where to obtain the second line we used the Cauchy–Schwarz inequality. Using Lemma 4.1 again, together with Young’s inequality, from (4.17) we may conclude that

$$\begin{aligned} H_2\le \frac{C}{\sqrt{a}} (\Vert \nabla m_2\Vert ^2_{L^2{(\Sigma )}} + \Vert m_2\Vert ^2_{L^2{(\Sigma )}} ). \end{aligned}$$

Concerning \(H_1\), integrating first in \(x'\) and using Lemma 4.2, we get

$$\begin{aligned} H_1= & {} \frac{2}{\varepsilon ^2}\int _0^\varepsilon \int _0^\varepsilon \int _{{\mathbb {R}}} \eta '(y/\varepsilon ) m_2 (x,y) \, \eta '(y'/\varepsilon ) m_2(x,y') K_0(a|y-y'|)\,\hbox {d}x\, \hbox {d}y\, \hbox {d}y' \nonumber \\= & {} 2\int _0^1\int _0^1 \int _{{\mathbb {R}}} \eta '(y) m_2 (x,\varepsilon y) \, \eta '(y') m_2(x,\varepsilon y') K_0(a\varepsilon |y-y'|)\,\hbox {d}x\, \hbox {d}y\, \hbox {d}y'\nonumber \\= & {} 2 \int _0^1\int _0^1 \int _{{\mathbb {R}}} \eta '(y) \eta '(y') m^2_2 (x,0) \, K_0(a\varepsilon |y-y'|)\,\hbox {d}x\, \hbox {d}y\, \hbox {d}y'+2 H_{1,1}, \end{aligned}$$
(4.18)

where

$$\begin{aligned} H_{1,1}&:=\int _0^1\int _0^1 \int _{{\mathbb {R}}} \eta '(y) \eta '(y') (m_2 (x,\varepsilon y)-m_2 (x,0)) \, \nonumber \\&\quad (m_2 (x,\varepsilon y')+m_2 (x,0)) K_0(a\varepsilon |y-y'|)\,\hbox {d}x\, \hbox {d}y\, \hbox {d}y' \,. \end{aligned}$$
(4.19)

Note that for all \(\varepsilon \) sufficiently small and \(t \in (0,a\varepsilon )\) we have \(K_0(t) \le 2 |\ln (t)|\) and hence

$$\begin{aligned} \begin{aligned} H_{1,1}&\le 2 \int _0^1\int _0^1 \int _{{\mathbb {R}}} |m_2 (x,\varepsilon y)-m_2 (x,0)| \, |m_2 (x,\varepsilon y')+m_2 (x,0)| |\ln (a\varepsilon |y-y'|)|\, \,\text{ d }x\, \text{ d }y\, \text{ d }y' \\ {}&\le C|\ln (a\varepsilon )| \int _{{\mathbb {R}}} \left( \int _0^1 |m_2 (x,\varepsilon y)-m_2 (x,0)|^2\text{ d }y\right) ^{\frac{1}{2}} \\ {}&\qquad \left( \int _0^1 |m_2 (x,\varepsilon y')+m_2 (x,0)|^2\text{ d }y'\right) ^{\frac{1}{2}}\text{ d }x \\ {}&\quad + C\int _{{\mathbb {R}}} \left( \int _0^1 |m_2 (x,\varepsilon y)-m_2 (x,0)|^2 \, \,\text{ d }y\right) ^{\frac{1}{2}} \left( \int _0^1 |m_2 (x,\varepsilon y')+m_2 (x,0)|^2\text{ d }y'\right) ^{\frac{1}{2}}\hbox {d}x, \end{aligned} \end{aligned}$$
(4.20)

where for the last line we used Young’s inequality for convolutions. It is clear that for \(\varepsilon \) small enough we can absorb the expression in the last line into the expression in the second line above. Moreover, for a.e. \(x \in {\mathbb {R}}\) we can estimate

$$\begin{aligned} \int _0^1 |m_2 (x,\varepsilon y)-m_2 (x,0)|^2 \, \,\hbox {d}y \le \varepsilon \int _0^1 |\nabla m_2(x,y)|^2\, \hbox {d}y \end{aligned}$$

and

$$\begin{aligned} \int _0^1 |m_2 (x,\varepsilon y')+m_2 (x,0)|^2 \, \,\hbox {d}y' \le C \left( |m_2(x,0)|^2 +\varepsilon \int _0^1 |\nabla m_2(x,y)|^2\, \hbox {d}y \right) . \end{aligned}$$

Therefore, using the Cauchy–Schwarz and Young inequalities together with Lemma 4.1, we obtain

$$\begin{aligned} \begin{aligned} H_{1,1}&\le C |\ln (a\varepsilon )|\sqrt{\varepsilon }\int _{{\mathbb {R}}} \Vert \nabla m_2(x,\cdot )\Vert _{L^2(0,1)}\, \left( |m_2(x,0)|^2 +\varepsilon \Vert \nabla m_2(x,\cdot )\Vert _{L^2(0,1)}^2 \right) ^{\frac{1}{2}}\ \text{ d }x \\ {}&\le C |\ln (a\varepsilon )|\sqrt{\varepsilon } (\Vert \nabla m_2\Vert ^2_{L^2(\Sigma )} + \Vert m_2\Vert ^2_{L^2(\Sigma )}). \end{aligned} \end{aligned}$$
(4.21)

Now we note that, for \(\varepsilon \) small enough and \(t \in (0,a\varepsilon )\), we have \(|K_0(t) + \ln (t)| \le C \) and we get

$$\begin{aligned}{} & {} \left| \int _0^1\int _0^1 \int _{{\mathbb {R}}} \eta '(y) \eta '(y') m^2_2 (x,0) \, K_0(a\varepsilon |y-y'|)\,\hbox {d}x\, \hbox {d}y\, {\, \hbox {d}y'} \right. \left. - |\ln \varepsilon | \int _{\mathbb {R}}m^2_2 (x,0)\, \hbox {d}x \right| \\{} & {} \quad \le C(|\ln a| +1) (\Vert \nabla m_2\Vert ^2_{L^2{(\Sigma )}} + \Vert m_2\Vert ^2_{L^2{(\Sigma )}}). \end{aligned}$$

Finally, combining the above estimates we obtain (4.4), which establishes the proposition. \(\square \)
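
As a quick numerical sanity check of the two elementary facts about \(K_0\) used in the proof above, namely that \(K_0(t) \le 2 |\ln t|\) and \(|K_0(t) + \ln t| \le C\) for all sufficiently small \(t>0\), one may run the following snippet (SciPy assumed):

```python
import numpy as np
from scipy.special import k0

t = np.logspace(-8, np.log10(0.5), 500)       # small positive arguments
assert np.all(k0(t) <= 2*np.abs(np.log(t)))   # K_0(t) <= 2|ln t| on this range
print(np.max(np.abs(k0(t) + np.log(t))))      # bounded; tends to ln 2 - Euler gamma as t -> 0
```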

Corollary 4.5

Assume \(m_\varepsilon \in {\mathfrak {M}}\) and \(\limsup _{\varepsilon \rightarrow 0} E_\varepsilon (m_\varepsilon )<+\infty \). Then

  • \(\limsup _{\varepsilon \rightarrow 0} \Vert m_{2, \varepsilon }\Vert ^2_{L^2(\partial \Sigma )} < \infty \);

  • \(\limsup _{\varepsilon \rightarrow 0} \Vert m_{2, \varepsilon }\Vert ^2_{L^2(\Sigma )} < \infty \).

Proof

Using Proposition 4.4 with \(\beta =\frac{1}{2}\) and inequality (4.3), we have

$$\begin{aligned} E_\varepsilon (m_\varepsilon )&\ge {\frac{\gamma }{2}} \Big (\int _{\mathbb {R}}m^2_{2, \varepsilon } (x,0)\, \hbox {d}x + \int _{\mathbb {R}}m^2_{2, \varepsilon } (x,1)\, \hbox {d}x\Big ) \\&\quad - \frac{C \gamma }{|\ln \varepsilon |}(\Vert \nabla m_{ \varepsilon }\Vert ^2_{L^2{(\Sigma )}} + \Vert m_{2, \varepsilon }\Vert ^2_{L^2{(\Sigma )}}) \\&\ge {\frac{\gamma }{2}} \left( 1- \frac{2CC'}{|\ln \varepsilon |} \right) \Vert m_{2, \varepsilon }\Vert ^2_{L^2(\partial \Sigma )} - \frac{C {\gamma } (1+C')}{|\ln \varepsilon |}\Vert \nabla m_{\varepsilon }\Vert ^2_{L^2{(\Sigma )}}. \end{aligned}$$

Recalling that by our assumption \(\{\nabla m_\varepsilon \}\) is bounded in \(L^2 {(\Sigma )}\) independently of \(\varepsilon \), we obtain

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \Vert m_{2, \varepsilon }\Vert ^2_{L^2(\partial \Sigma )} < \infty . \end{aligned}$$

The second conclusion now follows again by (4.3). \(\square \)

We now prove the \(\liminf \) and \(\limsup \) inequalities for the magnetostatic energy term.

Proposition 4.6

Assume that \(m_\varepsilon \in {\mathfrak {M}}\) and that \(\limsup _{\varepsilon \rightarrow 0}E_\varepsilon (m_\varepsilon )<+\infty \). If \(m_\varepsilon \rightharpoonup m\) weakly in \(H^1_{l}(\Sigma ; {\mathbb {S}}^1)\) then

$$\begin{aligned} \begin{aligned} \liminf _{\varepsilon \rightarrow 0} \frac{1}{|\ln \varepsilon |} {\int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big (\mathrm{{div} (\eta _\varepsilon m)\big )}|^2}{2 \pi |{\textbf {k}}|} \, d^2 k}\!\ge \! 2 \int _{\mathbb {R}}m^2_2(x,0)\, \text{ d }x \ \!+\!\ 2 \int _{\mathbb {R}}m^2_2(x,1) \, \text{ d }x. \end{aligned}\nonumber \\ \end{aligned}$$
(4.22)

Moreover, for any \({m\in H^1_{l}(\Sigma ; {\mathbb {S}}^1)}\) with \(E_0 (m) <+\infty \) such that the set \(\{{\textbf{r}} \in \Sigma :\, {m_2}({\textbf{r}})\ne 0\}\) is essentially bounded we have

$$\begin{aligned} \begin{aligned} \limsup _{\varepsilon \rightarrow 0} \frac{1}{|\ln \varepsilon |} {\int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big (\mathrm{{div} (\eta _\varepsilon m)\big )}|^2}{2 \pi |{\textbf {k}}|} \, d^2 k} \!\le \! 2 \int _{\mathbb {R}}m^2_2(x,0)\, \text{ d }x \ \!+\!\ 2 \int _{\mathbb {R}}m^2_2(x,1) \, \text{ d }x. \end{aligned}\nonumber \\ \end{aligned}$$
(4.23)

Proof

Using Proposition 4.4, we can take the limit as \(\varepsilon \rightarrow 0\) in (4.4). Employing Corollary 4.5 and the fact that

$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0} \Big (\int _{\mathbb {R}}m^2_{2, \varepsilon } (x,0)\, \hbox {d}x + \int _{\mathbb {R}}m^2_{2, \varepsilon } (x,1)\, \hbox {d}x\Big )\ge \int _{\mathbb {R}}m^2_{2} (x,0)\, \hbox {d}x + \int _{\mathbb {R}}m^2_{2} (x,1)\, \hbox {d}x, \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \liminf _{\varepsilon \rightarrow 0} \frac{1}{|\ln \varepsilon |} {\int _{{\mathbb {R}}^2} \frac{|{\mathscr {F}}\big ({\text {div} (\eta _\varepsilon m)\big )}|^2}{2 \pi |{\textbf {k}}|} \, \text{ d}^2 k} \ge 2(1-\beta ) \Big (\int _{\mathbb {R}}m^2_{2} (x,0)\, \text{ d }x + \int _{\mathbb {R}}m^2_{2} (x,1)\, \text{ d }x\Big ). \end{aligned} \end{aligned}$$

Finally, taking the limit as \(\beta \rightarrow 0\) we obtain (4.22).

We are left with showing the second part of the statement. We note that by our assumptions on m there exists \(R> {1}\) such that

$$\begin{aligned} \{\nabla m\ne 0\}\subset Q_R, \end{aligned}$$
(4.24)

where \(Q_R:=(-R, R)\times (0,1) \subset \Sigma \). Moreover, since by assumption \(m_2 \in L^2(\Sigma )\), we have

$$\begin{aligned} m_2 = 0 \text { a.e. in } \Sigma {\setminus } Q_R. \end{aligned}$$
(4.25)

We start by splitting the magnetostatic energy as in (4.8), with \({\mathcal {K}}_a\) replaced by the original kernel \({\mathcal {K}}_0({\textbf{r}})=\frac{1}{|{\textbf{r}}|}\) after passing to the limit \(a \rightarrow 0\). With the same notation for \(I_1\), \(I_2\), \(I_3\) (and taking \(a \rightarrow 0\)), it is straightforward to see that the second inequality in (4.10) still holds.

Using Young’s inequality for convolutions, we can estimate

$$\begin{aligned} \begin{aligned} I_1 \le \Vert {\mathcal {K}}_0\Vert _{L^1(\Sigma \cap Q_{2R})}\Vert {\text {div} \, m}\Vert ^2_{L^2(\Sigma )} {\le C \Vert \nabla m\Vert _{L^2(\Sigma )}^2} , \end{aligned} \end{aligned}$$
(4.26)

for some \(C > 0\) depending only on R. We now proceed by splitting \(I_3\) as

$$\begin{aligned} I_3 = J_1+2J_2+J_3, \end{aligned}$$

with the same notation as in (4.12) (and with \(a=0\)). Using (4.25), the estimate in (4.14) (with \(a=0\)) may be replaced by

$$\begin{aligned} J_2\le \frac{1}{\varepsilon ^2} \Bigl \Vert \tfrac{1}{\sqrt{|\cdot |^2+\frac{1}{2}}}\Bigr \Vert _{L^1({-2R, 2R})} \int _0^\varepsilon \Vert m_2 (\cdot , y) \Vert _{L^2{(\Sigma )}} \, \hbox {d}y\int _{1-\varepsilon }^1\Vert m_2 (\cdot , y') \Vert _{L^2{(\Sigma )}} \, \hbox {d}y', \end{aligned}$$

and by (4.2), we obtain

$$\begin{aligned} J_2\le {C} |\ln R| (\Vert \nabla m_2\Vert ^2_{L^2{(\Sigma )}} + \Vert m_2\Vert ^2_{L^2{(\Sigma )}} ). \end{aligned}$$
(4.27)

Taking into account (4.25), we can split \(J_1\) as

$$\begin{aligned} \begin{aligned} J_1&=H_1+H_2\\ {}&:=\frac{1}{\varepsilon ^2} \int _{(-R, R)\times (0,\varepsilon )}\int _{(-R, R)\times (0, \varepsilon )} {\eta '(y/\varepsilon ) m_2 (x,y) \, \eta '(y'/\varepsilon ) m_2(x,y') \over |{{\textbf {r}}} -{{\textbf {r}}}'|} \, \, {\text{ d}^2 r \, \text{ d}^2 r'} \\ {}&+ \frac{1}{\varepsilon ^2} \int _{(-R, R)\times (0,\varepsilon )}\int _{(-R, R)\times (0, \varepsilon )}{\eta '(y/\varepsilon ) m_2 (x,y) \, \eta '(y'/\varepsilon ) (m_2(x',y') - m_2(x,y')) \over |{{\textbf {r}}} -{{\textbf {r}}}'|} \, {\text{ d}^2 r \, \text{ d}^2 r'}. \end{aligned} \end{aligned}$$
(4.28)

We can estimate \(H_2\) as in (4.17), with \(a=0\) but taking advantage of the fact that (4.25) holds, to get

$$\begin{aligned} H_2\le \frac{C\sqrt{R}}{\varepsilon ^2} \int _0^\varepsilon \Vert m_2(\cdot , y)\Vert _{L^2({\mathbb {R}})}\, \hbox {d}y \int _0^\varepsilon [m_2(\cdot , y')]_{{\mathring{H}}^{\frac{1}{2}}{({\mathbb {R}})}}\, \hbox {d}y'. \end{aligned}$$

In turn, using (4.2) we obtain

$$\begin{aligned} H_2\le {C\sqrt{R}} (\Vert \nabla m_2\Vert ^2_{L^2{(\Sigma )}} + \Vert m_2\Vert ^2_{L^2{(\Sigma )}} ). \end{aligned}$$
(4.29)

Concerning \(H_1\), by integrating first with respect to \(x'\) over \((-R, R)\), we can argue similarly to (4.18) and write

$$\begin{aligned} \begin{aligned} H_1=&{} H_1'+ H_{1,1}:= \int _0^1\int _0^1 \int _{-R}^R \int _{-R}^R \, {} \frac{\eta '(y) \eta '(y') m^2_2 (x,0)\, \text{ d }x' \,\text{ d }x\, \text{ d }y\, \text{ d }y'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}}+ H_{1,1}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.30)

where \(H_{1,1}\) is defined as in (4.19), with \(K_0(a\varepsilon |y-y'|)\) replaced by \( \int _{-R}^R\) \(\frac{\hbox {d}x'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}}\) and with the integral in \(\hbox {d}x\) running over \((-R, R)\) instead of \({\mathbb {R}}\). Observe that

$$\begin{aligned} \int _{-R}^R\frac{\hbox {d}x'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}} \le 2\int _{0}^{2R}\frac{\hbox {d}s}{\sqrt{s^2+\varepsilon ^2|y-y'|^2}}. \end{aligned}$$

By computing explicitly the right-hand side, one can easily see that there exists a constant \(C=C(R)>0\) such that for \(\varepsilon \) small enough

$$\begin{aligned} \int _{-R}^R\frac{\hbox {d}x'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}} \le C|\ln (\varepsilon |y-y'|)|. \end{aligned}$$
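
For completeness, a possible explicit computation behind this bound (the same primitive is used again in (4.32) below, with the upper integration limit 2R replaced by \(\delta \)) reads, setting \(c:= \varepsilon |y-y'|\),

$$\begin{aligned} 2\int _{0}^{2R}\frac{\hbox {d}s}{\sqrt{s^2+c^2}} = 2\ln \bigg (\frac{2R+\sqrt{4R^2+c^2}}{c}\bigg ) \le 2\ln (4R+c) + 2|\ln c|, \end{aligned}$$

and for \(\varepsilon \) small enough (so that \(c \le \varepsilon \le e^{-1}\) and hence \(|\ln c| \ge 1\)) the right-hand side is bounded by \(C(R)\, |\ln (\varepsilon |y-y'|)|\).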

With this estimate at hand, we can now argue similarly to (4.21) to obtain

$$\begin{aligned} H_{1,1}\le C |\ln \varepsilon |\sqrt{\varepsilon } (\Vert \nabla m_2\Vert ^2_{L^2} + \Vert m_2\Vert ^2_{L^2}). \end{aligned}$$
(4.31)

It remains to estimate \(H_1'\). To this end, we observe that for any fixed \(\delta \in (0, R)\) we have

$$\begin{aligned} \int _{-R}^R\frac{\hbox {d}x'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}}\le & {} \int _{(-R, R)\cap \{|x-x'|>\delta \}}\frac{\hbox {d}x'}{\sqrt{|x-x'|^2+\varepsilon ^2|y-y'|^2}} \\{} & {} + 2\int _0^\delta \frac{\hbox {d}s}{\sqrt{s^2+\varepsilon ^2|y-y'|^2}} \\\le & {} \frac{2R}{\delta }+ 2\int _0^\delta \frac{\hbox {d}s}{\sqrt{s^2+\varepsilon ^2|y-y'|^2}}, \end{aligned}$$

from which we easily deduce that

$$\begin{aligned} \begin{aligned} H_1'&\le \frac{C(R)}{\delta }\Vert m_2(\cdot , 0)\Vert ^2_{L^2}\\ {}&+ 2 \int _0^1\int _0^1 \int _{-R}^R \eta '(y) \eta '(y') m^2_2 (x,0) \, \int _0^\delta \frac{\text{ d }s}{\sqrt{s^2+\varepsilon ^2|y-y'|^2}}\,\text{ d }x\, \text{ d }y\, \text{ d }y' \\ {}&\le \frac{C'(R)}{\delta }\Vert m_2(\cdot , 0)\Vert ^2_{L^2} \\ {}&+ 2 \int _0^1\int _0^1 \int _{-R}^R \eta '(y) \eta '(y') m^2_2 (x,0) |\ln (\varepsilon |y-y'|)| \text{ d }x\, \text{ d }y\, \text{ d }y'\\ {}&\le \Big (\frac{C'(R)}{\delta }+ {C''}\Big )\Vert m_2(\cdot , 0)\Vert ^2_{L^2}+ 2|\ln \varepsilon | \int _{-R}^R m^2_2 (x,0) \,\text{ d }x\,, \end{aligned} \end{aligned}$$
(4.32)

provided that \(\delta \) is sufficiently small. Note that the second inequality can be easily obtained by computing explicitly the innermost integral and by estimating the result (for \(\delta \) sufficiently small) with \( |\ln (\varepsilon |y-y'|)|+{\tilde{C}}\) for a suitable \({\tilde{C}}>0\), while the third inequality can be obtained by integrating out \(|\ln |y-y'||\). Combining (4.26) and (4.27)–(4.32) and the completely analogous estimates for \(J_3\), we obtain (4.23). \(\square \)

As a straightforward consequence of (4.4) in Proposition 4.4 we have the following corollary.

Corollary 4.7

For any \(M>0\), let \(\varepsilon _0>0\) be as in Proposition 4.4. Then, for all \(\varepsilon \in (0, \varepsilon _0)\) and \(m\in {\mathfrak {M}}\) such that \(E_\varepsilon (m) \le M\) we have

$$\begin{aligned} E_0(m)\le C {M}, \end{aligned}$$

for a suitable \(C>0\) independent of \(\varepsilon \), M and m.

Before analysing the asymptotic behavior of \(E_\varepsilon \) as \(\varepsilon \rightarrow 0\), let us show that for \(\varepsilon >0\) small enough \(E_\varepsilon \) admits a global minimizer in the class of magnetizations with nontrivial “winding”.

Proposition 4.8

There exists \(\varepsilon _1>0\) such that for all \(\varepsilon \in (0, \varepsilon _1)\) the following problem:

$$\begin{aligned}{} & {} \min \left\{ E_\varepsilon (m):\, m=(\cos \theta , \sin \theta )\in {{\mathfrak {M}}} \text { with } \theta \text { satisfying } (3.2) \text { for some }\right. \nonumber \\{} & {} \quad \left. k_1,\, k_2\in {\mathbb {Z}},\, k_1\ne k_2\right\} \end{aligned}$$
(4.33)

admits a solution.

Proof

Denote by \(i_\varepsilon \) the infimum of the problem in (4.33) and observe that

$$\begin{aligned} M:={1 + } \sup _{\varepsilon \in (0, \frac{1}{2})}i_\varepsilon <+\infty . \end{aligned}$$

Indeed, it is enough to consider a fixed test function \(m=(\cos \theta , \sin \theta )\in {{\mathfrak {M}}}\) such that the set \(\{{{\textbf{r}}} \in \Sigma :\, {m_2}({{\textbf{r}}})\ne 0\}\) is bounded and \(\theta \) satisfies the proper boundary conditions at infinity. By Proposition 4.6 we easily get

$$\begin{aligned} i_\varepsilon \le E_\varepsilon (m)\le C. \end{aligned}$$

Let \(\varepsilon _1:= \varepsilon _0({M})\), where \(\varepsilon _0({M})\) is as in Proposition 4.4, and let \(\varepsilon \in (0, \varepsilon _1)\). If \(m_n=(\cos \theta _n, \sin \theta _n)\) is a minimizing sequence for (4.33), by Corollary 4.7 we have

$$\begin{aligned} F(\theta _n)\le C \end{aligned}$$

for every n large enough, with C independent of n. Set

$$\begin{aligned} {\bar{\theta }}_n(x):=\int _0^1\theta _n(x, y)\, \hbox {d}y. \end{aligned}$$
(4.34)

By shifting and flipping the \(\theta _n\)’s if needed, in view also of Lemma 3.1 we may assume that

$$\begin{aligned} \lim _{x\rightarrow -\infty }{\bar{\theta }}_n(x)=k_n\pi \text { and } \lim _{x\rightarrow +\infty }{\bar{\theta }}_n(x)=0 \end{aligned}$$

for some \(k_n\in {-{\mathbb {N}}}\).

Observe also that

$$\begin{aligned} \sup _{n}\int _{{\mathbb {R}}}|{\bar{\theta }}'_n|^2\, \hbox {d}x<+\infty \text { and } \sup _{n}\int _{\Sigma }|\nabla \theta _n|^2\, \hbox {d}x<+\infty . \end{aligned}$$

By replacing \(\theta _n\) with \(\theta _n(\cdot -\tau _n, \cdot )\), \(\bar{\theta }_n\) with \({\bar{\theta }}_n(\cdot -\tau _n)\), and without renaming the minimizing sequence, we can use the continuity of \({\bar{\theta }}_n\) and the conditions at infinity to make sure that \({\bar{\theta }}_n(0) =-\frac{\pi }{2}\). It follows that

$$\begin{aligned} |{\bar{\theta }}_n(x) - {\bar{\theta }}_n(0)| = \left| \int _0^x {\bar{\theta }}_n'(s) \, \hbox {d}s \right| \le \sqrt{|x|}\, \Vert {\bar{\theta }}_n' \Vert _{L^2({\mathbb {R}})} \le C \sqrt{|x|}. \end{aligned}$$

Therefore, \({\bar{\theta }}_n\) is bounded in \(L^2_{loc} ({\mathbb {R}})\). Employing the Poincaré inequality we deduce that \(\theta _n\) is bounded in \(L^2_{loc}(\Sigma )\). Thus we may apply [20, Lemma 1] to deduce that there exist \(\theta _\infty \in H^1_{l}(\Sigma )\) and a subsequence (not relabelled) such that \(\theta _n\rightharpoonup \theta _\infty \) weakly in \(H^1_{l}(\Sigma )\), \({\bar{\theta }}_n\rightharpoonup {\bar{\theta }}_\infty \) weakly in \(H^1_{l}({\mathbb {R}})\), and

$$\begin{aligned} {\bar{\theta }}_\infty (0) = -{\pi \over 2}, \quad \limsup _{x\rightarrow -\infty }{\bar{\theta }}_\infty (x)\le -\frac{\pi }{2}\quad \text { and } \quad \liminf _{x\rightarrow +\infty }{\bar{\theta }}_\infty (x)\ge -\frac{\pi }{2}. \end{aligned}$$
(4.35)

Furthermore, testing \(\theta _n\) with \(\phi (x, y) = \psi (x)\), where \(\psi \in C^\infty _c({\mathbb {R}})\), and passing to the limit, it is easy to see that \({\bar{\theta }}_\infty (x) = \int _0^1 \theta _\infty (x, y) \, \hbox {d}y\) for a.e. \(x \in {\mathbb {R}}\). In addition, using weak lower semicontinuity of the energy F we also have

$$\begin{aligned} F(\theta _\infty )\le C. \end{aligned}$$

In turn, by Lemma 3.1 there exist \(j_1, j_2\in {\mathbb {Z}}\) such that \(\theta _\infty \) satisfies (3.2), with \(k_1\), \(k_2\) replaced by \(j_1\), \(j_2\), respectively, and

$$\begin{aligned} \lim _{x\rightarrow -\infty }{\bar{\theta }}_\infty (x)=j_1\pi , \qquad \lim _{x\rightarrow +\infty }{\bar{\theta }}_\infty (x)=j_2\pi . \end{aligned}$$

Taking into account (4.35), it is clear that \(j_1\ne j_2\). It is now easy to check that \(m_\infty :=(\cos \theta _\infty , \sin \theta _\infty )\) is a solution to (4.33). \(\square \)

We are now ready to state the main \(\Gamma \)-convergence result showing that (2.16) is the limiting energy of (2.10).

Theorem 4.9

Let \(\gamma > 0\) and \(h \ge 0\), and let \(E_\varepsilon \) and \(E_0\) be defined by (2.13) and (2.16), respectively, on \(L^2_{loc}(\Sigma ; {\mathbb {S}}^1)\). Then the following two statements are true:

  1. (i)

    (\(\Gamma \)-\(\liminf \) inequality) Let \(m_\varepsilon \in {\mathfrak {M}}\) and \(m_\varepsilon \rightarrow m\) strongly in \(L^2_{loc}(\Sigma ; {{\mathbb {R}}^2})\) as \(\varepsilon \rightarrow 0\). Then

    $$\begin{aligned} \liminf _{\varepsilon \rightarrow 0} E_\varepsilon (m_\varepsilon )\ge E_0(m). \end{aligned}$$
    (4.36)
  2. (ii)

    (\(\Gamma \)-\(\limsup \) inequality) Let \(m\in H^1_{l}(\Sigma ; {\mathbb {S}}^1)\) be such that \(E_0(m)<+\infty \). Then there exists \(m_\varepsilon \in {\mathfrak {M}}\) such that \(m_\varepsilon \rightarrow m\) in \(L^2_{loc}(\Sigma ; {\mathbb {R}}^2)\) as \(\varepsilon \rightarrow 0\) and

    $$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} E_\varepsilon (m_\varepsilon )\le E_0(m). \end{aligned}$$

    Furthermore, if \(\theta \) and \(\theta _\varepsilon \) are such that \(m = (\cos \theta , \sin \theta )\) and \(m_\varepsilon = (\cos \theta _\varepsilon , \sin \theta _\varepsilon )\), then for every \(\varepsilon \) sufficiently small we have

    $$\begin{aligned} \lim _{x\rightarrow - \infty }\Vert \theta _\varepsilon (x,\cdot )- k_1 \pi \Vert _{L^{2}(0,1)}=0 \quad \text {and} \quad \lim _{x\rightarrow + \infty }\Vert \theta _\varepsilon (x,\cdot )- k_2 \pi \Vert _{L^{2}(0,1)}=0, \end{aligned}$$

    where \(k_1, k_2 \in {\mathbb {Z}}\) are as in (3.2).

Proof

Let us first prove the \(\Gamma \)-liminf inequality. If \(\liminf _{\varepsilon \rightarrow 0} E_\varepsilon (m_\varepsilon )=+\infty \) there is nothing to prove. Hence we may assume without loss of generality that (after passing to a subsequence)

$$\begin{aligned} \liminf _{\varepsilon {\rightarrow 0}} E_\varepsilon (m_\varepsilon )=\lim _{\varepsilon {\rightarrow 0}} E_\varepsilon (m_\varepsilon )<+\infty . \end{aligned}$$

Then, in particular, \(\limsup _{\varepsilon \rightarrow 0} \Vert \nabla m_\varepsilon \Vert _{L^2 {(\Sigma )}}<+\infty \) and thus \(m_\varepsilon \rightharpoonup m \in H^1_{l}(\Sigma ; {\mathbb {S}}^1)\) weakly in \(H^1_{l}(\Sigma ; {{\mathbb {R}}^2})\). Inequality (4.36) then follows from Proposition 4.6 (see (4.22)) and from the lower semicontinuity of the local terms in the energies.

Let us now establish the upper bound. Let m and \(\theta \) be as in the second part of the statement. Then by Lemma 4.1 we have \(m \in {\mathfrak {M}}\), and by Lemma 3.1 there exist \(k_1, k_2 \in {\mathbb {Z}}\) such that (3.2) holds true. Now, arguing as in the proof of (3.17) one can construct a sequence \(\{\theta _n\}\) with the following properties:

  1. (i)

    for every n there exists \(M_n>0\) such that

    $$\begin{aligned} \theta _n(x, y)= {k_1 \pi \text { if } x\le -M_n \text { and } \theta _n(x, y)=k_2 \pi \text { if } x\ge M_n,} \end{aligned}$$
  2. (ii)

    \(m_n\rightarrow m\) in \(L^2_{loc}(\Sigma ; {\mathbb {R}}^2)\),

  3. (iii)

    setting \(m_n:=(\cos \theta _n, \sin \theta _n)\), we have

    $$\begin{aligned} E_0(m_n)=F(\theta _n)\rightarrow E_0(m)=F(\theta )\text { as }n\rightarrow \infty . \end{aligned}$$

Therefore, by a standard diagonal argument it is enough to prove the upper bound under the following additional assumption: there exists \(M>0\) such that

$$\begin{aligned} {\theta (x, y)= k_1 \pi \text { if } x\le -M \text { and } \theta (x, y)=k_2 \pi \text { if } x\ge M.} \end{aligned}$$

Under such an assumption, the conclusion follows simply by taking \(m_\varepsilon = m\) for all \(\varepsilon \) and observing that

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}E_\varepsilon (m)\le E_0(m), \end{aligned}$$

thanks to Proposition 4.6 [see (4.23)]. \(\square \)

Corollary 4.10

Let \(k \in {\mathbb {Z}}{\setminus }\{0\}\). Then

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \inf _{{m \in {\mathcal {A}}_k^0}} E_\varepsilon ({m}) = \inf _{\theta \in {\mathcal {A}}_k} F (\theta ), \end{aligned}$$

where \({\mathcal {A}}_k^0:= \{ m \in {\mathfrak {M}}\,: \, m = (\cos \theta , \sin \theta ), \ \theta \in {\mathcal {A}}_k \}\).

Proof

For simplicity of the presentation we provide the proof for \(k \in {\mathbb {N}}\) only. Using Theorem 4.9, we know that for any fixed \({\bar{\theta }} \in {\mathcal {A}}_k\) such that \(F(\bar{\theta })<+\infty \) we may find \(m_\varepsilon = (\cos \theta _\varepsilon , \sin \theta _\varepsilon ) \in {\mathcal {A}}_k^0\) such that \(\theta _\varepsilon \rightharpoonup \bar{\theta }\) weakly in \(H^1_{l}(\Sigma )\) and \(E_\varepsilon ({m_\varepsilon }) \rightarrow F(\bar{\theta })\). Thus,

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \inf _{m \in {\mathcal {A}}_k^0} E_\varepsilon ({m}) \le \lim _{\varepsilon \rightarrow 0} E_\varepsilon ({m_\varepsilon }) = F({\bar{\theta }}). \end{aligned}$$

By the arbitrariness of \({\bar{\theta }}\in {\mathcal {A}}_k\), we obtain

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \inf _{\theta \in {\mathcal {A}}_k} E_\varepsilon (\cos \theta , \sin \theta ) \le \inf _{\theta \in {\mathcal {A}}_k} F(\theta ). \end{aligned}$$

For the reverse inequality, let \(m_\varepsilon = (\cos \theta _\varepsilon , \sin \theta _\varepsilon ) \in {\mathcal {A}}_k^{0}\) be a sequence such that

$$\begin{aligned} \lim _{{\varepsilon \rightarrow 0}} E_{\varepsilon }({m_\varepsilon }) = \liminf _{\varepsilon \rightarrow 0}\inf _{m \in {\mathcal {A}}_k^0} E_\varepsilon ({m}). \end{aligned}$$
(4.37)

Then for \(\varepsilon \) small enough we may use Corollary 4.5 to get

$$\begin{aligned} {\limsup _{\varepsilon \rightarrow 0}} \big (\Vert \nabla \theta _{\varepsilon }\Vert _{L^2(\Sigma )} + \Vert \sin \theta _{\varepsilon }\Vert _{L^2(\Sigma )} + \Vert \sin \theta _{\varepsilon }\Vert _{L^2(\partial \Sigma )}\big )<+\infty . \end{aligned}$$

On the other hand, for any \(\beta \in (0,1)\), using inequality (4.4) from Proposition 4.4 and denoting by C a positive constant independent of \(\varepsilon \) and \(\beta \) (that may change from inequality to inequality) we have

$$\begin{aligned} E_{\varepsilon }({m_\varepsilon })\ge & {} F(\theta _{\varepsilon }) - \frac{C}{\beta |\ln {\varepsilon }| } \left( \Vert \nabla \theta _{\varepsilon }\Vert ^2_{L^2(\Sigma )} + \Vert \sin \theta _{\varepsilon }\Vert ^2_{L^2(\Sigma )} \right) - C \beta \Vert \sin \theta _{\varepsilon } \Vert ^2_{L^2 (\partial \Sigma )} \\\ge & {} \inf _{\theta \in {\mathcal {A}}_k} F(\theta ) - \frac{C}{\beta |\ln {\varepsilon }| } - C \beta . \end{aligned}$$

Taking the limit as \({\varepsilon \rightarrow 0}\) and recalling (4.37), we obtain

$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0} \inf _{m \in {\mathcal {A}}_k^0} E_\varepsilon ({m}) \ge \inf _{\theta \in {\mathcal {A}}_k} F(\theta ) - C \beta . \end{aligned}$$

The conclusion then follows from the arbitrariness of \(\beta \). \(\square \)

Corollary 4.11

Let \(\varepsilon \rightarrow 0\) and let \({\{m_\varepsilon \}}\) be a sequence of minimizers for problem (4.33). Then, after suitable translations in the x-variable and up to a subsequence (not relabelled), we have \(m_\varepsilon \rightarrow m_0 \in H^1_{l}(\Sigma ; \mathbb S^1)\) strongly in \(H^1_{l}(\Sigma {; {\mathbb {R}}^2})\), where \(m_0\) is a solution to (2.17). Moreover,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} E_\varepsilon (m_\varepsilon ) = E_0(m_0). \end{aligned}$$

Proof

Note that by Corollary 4.5 we have

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} E_0(m_\varepsilon )<+\infty . \end{aligned}$$
(4.38)

In turn, by Lemma 3.1, without loss of generality, we may associate to each \(m_\varepsilon \) a phase function \(\theta _\varepsilon \) satisfying (3.7).

We may now argue exactly as in Proposition 4.8 (with \(\theta _\varepsilon \) in place of \(\theta _n\)) to deduce the existence of \(\theta _0\in H^1_{l}(\Sigma )\) and of \(j_1\), \(j_2\in {\mathbb {Z}}\), \(j_1\ne j_2\) such that (3.2) holds with \(\theta \), \(k_1\), \(k_2\) replaced by \(\theta _0\), \(j_1\), \(j_2\), respectively, and

$$\begin{aligned} \theta _\varepsilon \rightharpoonup \theta _0\quad \text {weakly in }H^1_{l}(\Sigma ), \end{aligned}$$

up to a subsequence (not relabelled). Set now \(m_0:=(\cos \theta _0, \sin \theta _0)\). The fact that \(m_0\) is a solution of (2.17) and the convergence of energies follow from a standard \(\Gamma \)-convergence argument in view of Theorem 4.9. In turn, the convergence of energies implies strong convergence of \(m_\varepsilon \) in \(H^1_{l}(\Sigma ; {\mathbb {R}}^2)\). \(\square \)

Corollary 4.11 combined with Corollaries 3.10, 3.3 and 4.10 easily yields that for \(\varepsilon \) small enough the minimization in (4.33) is achieved by magnetizations with at most a single winding. Precisely, we have:

Corollary 4.12

There exists \(\varepsilon _1>0\) such that for \(\varepsilon \in (0, \varepsilon _1)\) any minimizer \(m_\varepsilon =(\cos \theta _\varepsilon ,\sin \theta _\varepsilon )\) of (4.33) is such that \(\theta _\varepsilon \) satisfies (3.2) for some \(k_1(\varepsilon ), k_2(\varepsilon )\in {\mathbb {Z}}\), with \(|k_1(\varepsilon )-k_2(\varepsilon )|=k\), where \(k=1\) if \(h=0\) or \(k=2\) if \(h>0\). Moreover, after suitable translations we have

$$\begin{aligned} \text {sgn}(k_1(\varepsilon ) - k_2(\varepsilon )) (\theta _\varepsilon - k_2(\varepsilon ) \pi ) \rightarrow \theta _{min} \ \text {strongly in} \ H^1_{l}(\Sigma ) \ \text {as} \ \varepsilon \rightarrow 0, \end{aligned}$$
(4.39)

where \(\theta _{min}\) is the unique (up to translations) minimizer from Theorem 2.3.

Proof

We provide a proof for \(h=0\) only. Let \(\varepsilon >0\) be small enough and \(m_\varepsilon =(\cos \theta _\varepsilon , \sin \theta _\varepsilon )\) be a minimizer of (4.33). Using Corollary 4.7 and Lemma 3.1, we know that there exist \(k_1(\varepsilon ), k_2(\varepsilon ) \in {\mathbb {N}}\) such that

$$\begin{aligned} \lim _{x\rightarrow -\infty }\Vert \theta _\varepsilon (x,\cdot )-k_1(\varepsilon )\pi \Vert _{L^{2}(0,1)}=0 \text { and } \lim _{x\rightarrow +\infty }\Vert \theta _\varepsilon (x,\cdot )-k_2(\varepsilon )\pi \Vert _{L^{2}(0,1)}\!=\!0. \nonumber \\ \end{aligned}$$
(4.40)

Employing Corollary 4.11, we also know that (after a suitable translation) \(m_\varepsilon \rightarrow m_0\) strongly in \(H^1_{l} (\Sigma )\) for a subsequence of \(\varepsilon \rightarrow 0\), where \(m_0\) is a minimizer of (2.17). We want to show that \(|k_1(\varepsilon ) - k_2(\varepsilon )| \rightarrow 1\). Assume this is not the case; then there exists a further subsequence \(\varepsilon _k \rightarrow 0\) such that either: (a) \(|k_1(\varepsilon _k) - k_2(\varepsilon _k)| \rightarrow n \in {\mathbb {Z}}_+ {\setminus } \{1\}\) or (b) \(|k_1(\varepsilon _k) - k_2(\varepsilon _k)| \rightarrow \infty \).

In case (a), we see that there exists \(\varepsilon _1>0\) such that for all \(\varepsilon _k <\varepsilon _1\) we have \(|k_1(\varepsilon _k) - k_2(\varepsilon _k)| = n\) and therefore (after a suitable shift of \(\theta _{\varepsilon _k}\) by \(k_2(\varepsilon _k) \pi \)) we obtain \(m_{\varepsilon _k} = (\cos \theta _{\varepsilon _k}, \sin \theta _{\varepsilon _k})\) with \(\theta _{\varepsilon _k} \in {\mathcal {A}}_n\). Since \(m_{\varepsilon _k}\) minimizes (4.33), we cannot have \(n=0\). Furthermore, since \(m_{\varepsilon _k}\) is a minimizer of (4.33) we obtain \(\lim _{k \rightarrow \infty } E_{\varepsilon _k}(m_{\varepsilon _k}) = \lim _{k \rightarrow \infty } \inf _{m \in {\mathcal {A}}_n^0} E_{\varepsilon _k} (m)\). Using Corollaries 4.10 and 3.3, we obtain that

$$\begin{aligned} \lim _{k \rightarrow \infty } E_{\varepsilon _k}(m_{\varepsilon _k}) = \inf _{\theta \in \mathcal A_n} F(\theta )= n F(\theta _{min})> F(\theta _{min})>0, \end{aligned}$$

contradicting the convergence of energies in Corollary 4.11.

In case (b), we assume without loss of generality that \(k_2(\varepsilon _k)=0\) and \(k_1(\varepsilon _k)>0\). Since \(k_1(\varepsilon _k) \rightarrow \infty \), for any fixed \(n \in {\mathbb {N}}\) the truncated function \({\bar{\theta }}_{\varepsilon _k}:= \min \{ n \pi , \theta _{\varepsilon _k}\}\) satisfies \(F(\theta _{\varepsilon _k}) \ge F(\bar{\theta }_{\varepsilon _k})\). Using Corollary 4.11, we know that \(E_{\varepsilon _k} (m_{\varepsilon _k})<C\) and therefore, employing Proposition 4.4 and Corollary 4.5, we obtain

$$\begin{aligned} \liminf _{k \rightarrow \infty } E_{\varepsilon _k} (m_{\varepsilon _k}) \ge (1-\beta ) \liminf _{k \rightarrow \infty } F(\theta _{\varepsilon _k}) \ge (1-\beta ) \liminf _{k \rightarrow \infty } F({\bar{\theta }}_{\varepsilon _k}) \end{aligned}$$

for any \(\beta \in (0,1)\). Finally, taking \(n=2\) and noting that \({\bar{\theta }}_{\varepsilon _k} \in {\mathcal {A}}_n\) we deduce, using Corollary 3.3, that \(\liminf _{k \rightarrow \infty } E_{\varepsilon _k} (m_{\varepsilon _k}) \ge 2(1-\beta ) F(\theta _{min}) > F(\theta _{min})\) for \(\beta \) small enough, and again we have a contradiction with the convergence of energies in Corollary 4.11.

Finally, strong convergence of \(\text {sgn}(k_1(\varepsilon ) - k_2(\varepsilon )) (\theta _\varepsilon - k_2(\varepsilon ) \pi )\) to \(\theta _{min}\) in \(H^1_{l}(\Sigma )\) follows from Corollary 4.11 and uniqueness of the minimizer of the limit problem. \(\square \)

Proof of Theorem 2.9

The assertion of Theorem 2.9 is an immediate consequence of Corollary 4.12. \(\square \)