Abstract
In this paper, we present the \({\rm L}^2\)-norm stability analysis and error estimate for the explicit single-step time-marching discontinuous Galerkin (DG) methods with stage-dependent numerical flux parameters, when solving a linear constant-coefficient hyperbolic equation in one dimension. Two well-known examples of this method include the Runge–Kutta DG method with the downwind treatment for the negative time marching coefficients, as well as the Lax–Wendroff DG method with arbitrary numerical flux parameters to deal with the auxiliary variables. The stability analysis framework is an extension and an application of the matrix transferring process based on the temporal differences of stage solutions, and a new concept, named the averaged numerical flux parameter, is proposed to reveal the essential upwind mechanism in the fully discrete setting. Distinct from the traditional analysis, we present a novel way to obtain the optimal error estimate in both space and time. The main tool is a series of space–time approximation functions for a given spatial function, which preserve the local structure of the fully discrete schemes and the balance of exact evolution under the control of the partial differential equation. Finally, some numerical experiments are given to validate the theoretical results proposed in this paper.
Data Availability
The datasets generated during the current study are available from the corresponding author upon reasonable request.
References
Ai, J., Xu, Y., Shu, C.W., Zhang, Q.: \({\rm L}^2\) error estimate to smooth solutions of high order Runge–Kutta discontinuous Galerkin method for scalar nonlinear conservation laws with and without sonic points. SIAM J. Numer. Anal. 60(4), 1741–1773 (2022). https://doi.org/10.1137/21M1435495
Chavent, G., Cockburn, B.: The local projection \(P^0P^1\)-discontinuous-Galerkin finite element method for scalar conservation laws. RAIRO Modél. Math. Anal. Numér. 23(4), 565–592 (1989). https://doi.org/10.1051/m2an/1989230405651
Cheng, Y., Meng, X., Zhang, Q.: Application of generalized Gauss–Radau projections for the local discontinuous Galerkin method for linear convection–diffusion equations. Math. Comput. 86(305), 1233–1267 (2017). https://doi.org/10.1090/mcom/3141
Ciarlet, P.G.: The finite element method for elliptic problems. In: Studies in Mathematics and its Applications, vol. 4. North-Holland Publishing Co., New York (1978)
Cockburn, B., Hou, S., Shu, C.W.: The Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. IV. The multidimensional case. Math. Comput. 54(190), 545–581 (1990). https://doi.org/10.2307/2008501
Cockburn, B., Lin, S.Y., Shu, C.W.: TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. III. One-dimensional systems. J. Comput. Phys. 84(1), 90–113 (1989). https://doi.org/10.1016/0021-9991(89)90183-6
Cockburn, B., Shu, C.W.: TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. II. General framework. Math. Comput. 52(186), 411–435 (1989). https://doi.org/10.2307/2008474
Cockburn, B., Shu, C.W.: The Runge–Kutta local projection \(P^1\)-discontinuous-Galerkin finite element method for scalar conservation laws. RAIRO Modél. Math. Anal. Numér. 25(3), 337–361 (1991). https://doi.org/10.1051/m2an/1991250303371
Cockburn, B., Shu, C.W.: The Runge–Kutta discontinuous Galerkin method for conservation laws. V. Multidimensional systems. J. Comput. Phys. 141(2), 199–224 (1998). https://doi.org/10.1006/jcph.1998.5892
Cockburn, B., Shu, C.W.: Runge–Kutta discontinuous Galerkin methods for convection-dominated problems. J. Sci. Comput. 16(3), 173–261 (2001). https://doi.org/10.1023/A:1012873910884
Gottlieb, S., Ruuth, S.J.: Optimal strong-stability-preserving time-stepping schemes with fast downwind spatial discretizations. J. Sci. Comput. 27(1–3), 289–303 (2006). https://doi.org/10.1007/s10915-005-9054-8
Gottlieb, S., Shu, C.W.: Total variation diminishing Runge–Kutta schemes. Math. Comput. 67(221), 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Guo, W., Qiu, J., Qiu, J.: A new Lax–Wendroff discontinuous Galerkin method with superconvergence. J. Sci. Comput. 65(1), 299–326 (2015). https://doi.org/10.1007/s10915-014-9968-0
Liu, Y., Shu, C.W., Zhang, M.: Sub-optimal convergence of discontinuous Galerkin methods with central fluxes for linear hyperbolic equations with even degree polynomial approximations. J. Comput. Math. 39(4), 518–537 (2021). https://doi.org/10.4208/jcm.2002-m2019-0305
Qiu, J., Zhang, Q.: Stability, error estimate and limiters of discontinuous Galerkin methods. In: Handbook of Numerical Methods for Hyperbolic Problems, Handbook of Numerical Analysis, vol. 17, pp. 147–171. Elsevier, Amsterdam (2016). https://doi.org/10.1016/bs.hna.2016.06.001
Ruuth, S.J.: Global optimization of explicit strong-stability-preserving Runge–Kutta methods. Math. Comput. 75(253), 183–207 (2006). https://doi.org/10.1090/S0025-5718-05-01772-2
Ruuth, S.J., Spiteri, R.J.: Two barriers on strong-stability-preserving time discretization methods. J. Sci. Comput. 17(1–4), 211–220 (2002). https://doi.org/10.1023/A:1015156832269
Ruuth, S.J., Spiteri, R.J.: High-order strong-stability-preserving Runge–Kutta methods with downwind-biased spatial discretizations. SIAM J. Numer. Anal. 42(3), 974–996 (2004). https://doi.org/10.1137/S0036142902419284
Shu, C.W.: Total-variation-diminishing time discretizations. SIAM J. Sci. Stat. Comput. 9(6), 1073–1084 (1988). https://doi.org/10.1137/0909073
Shu, C.W.: Discontinuous Galerkin methods: general approach and stability. In: Numerical Solutions of Partial Differential Equations, Adv. Courses Math. CRM Barcelona, pp. 149–201. Birkhäuser, Basel (2009)
Shu, C.W.: Discontinuous Galerkin methods for time-dependent convection dominated problems: basics, recent developments and comparison with other methods. In: Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, Lecture Notes in Computer Science Engineering, vol. 114, pp. 369–397. Springer, New York (2016)
Shu, C.W., Osher, S.: Efficient implementation of essentially nonoscillatory shock-capturing schemes. J. Comput. Phys. 77(2), 439–471 (1988). https://doi.org/10.1016/0021-9991(88)90177-5
Sun, Z., Shu, C.W.: Stability analysis and error estimates of Lax–Wendroff discontinuous Galerkin methods for linear conservation laws. ESAIM Math. Model. Numer. Anal. 51(3), 1063–1087 (2017). https://doi.org/10.1051/m2an/2016049
Sun, Z., Shu, C.W.: Strong stability of explicit Runge–Kutta time discretizations. SIAM J. Numer. Anal. 57(3), 1158–1182 (2019). https://doi.org/10.1137/18M122892X
Van Loan, C.F.: The ubiquitous Kronecker product. J. Comput. Appl. Math. 123(1–2), 85–100 (2000). https://doi.org/10.1016/S0377-0427(00)00393-9
Xu, Y., Meng, X., Shu, C.W., Zhang, Q.: Superconvergence analysis of the Runge–Kutta discontinuous Galerkin methods for a linear hyperbolic equation. J. Sci. Comput. 84, 23 (2020). https://doi.org/10.1007/s10915-020-01274-1
Xu, Y., Shu, C.W., Zhang, Q.: Error estimate of the fourth-order Runge–Kutta discontinuous Galerkin methods for linear hyperbolic equations. SIAM J. Numer. Anal. 58(5), 2885–2914 (2020). https://doi.org/10.1137/19M1280077
Xu, Y., Zhang, Q.: Superconvergence analysis of the Runge–Kutta discontinuous Galerkin method with upwind-biased numerical flux for two dimensional linear hyperbolic equation. Commun. Appl. Math. Comput. 4, 319–352 (2022). https://doi.org/10.1007/s42967-020-00116-z
Xu, Y., Zhang, Q., Shu, C.W., Wang, H.: The \({\rm L}^2\)-norm stability analysis of Runge–Kutta discontinuous Galerkin methods for linear hyperbolic equations. SIAM J. Numer. Anal. 57(4), 1574–1601 (2019). https://doi.org/10.1137/18M1230700
Xu, Y., Zhao, D., Zhang, Q.: Local error estimates for Runge–Kutta discontinuous Galerkin methods with upwind-biased numerical fluxes for a linear hyperbolic equation in one-dimension with discontinuous initial data. J. Sci. Comput. 91, 11 (2022). https://doi.org/10.1007/s10915-022-01793-z
Zhang, Q., Shu, C.W.: Error estimates to smooth solutions of Runge–Kutta discontinuous Galerkin methods for scalar conservation laws. SIAM J. Numer. Anal. 42(2), 641–666 (2004). https://doi.org/10.1137/S0036142902404182
Zhang, Q., Shu, C.W.: Stability analysis and a priori error estimates of the third order explicit Runge–Kutta discontinuous Galerkin method for scalar conservation laws. SIAM J. Numer. Anal. 48(3), 1038–1063 (2010). https://doi.org/10.1137/090771363
Funding
Yuan Xu is supported by NSFC Grant 12301513, Natural Science Foundation of Jiangsu Province Grant BK20230374 and Natural Science Foundation of Jiangsu Higher Education Institutions of China Grant 23KJB110019. Chi-Wang Shu is supported by NSF grant DMS-2309249. Qiang Zhang is supported by NSFC Grant 12071214.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Appendix
In this section we give some supplemental materials for the conclusions left unproved in Sect. 3. This process involves many notations and matrix manipulations.
To that end, we first give some elemental notations. Associated with the multistep number m and the stage number s, we introduce some column vectors and square matrices of size ms, whose components are either 0 or 1. More specifically, we denote \(\varvec{1}(m,s)=(1,1,\ldots ,1)^\top \) and let \(\varvec{e}_i(m,s)\), for \(0\le i\le ms-1\), be the unit vector which has 1 at the i-th position. Let \(\underline{\varvec{I}}(m,s)\) be the identity matrix and \(\underline{\varvec{E}}(m,s)\) be the shifting matrix, which has 1 on the first subdiagonal. Then we define
which has 1 at every strictly lower position. For notational simplicity, we will write, for example
This notation rule will be used throughout the entire section.
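To make the notation concrete, the elemental objects above can be sketched in code (a plain-Python illustration with 0-based indices; the function names are ours, not the paper's):

```python
# Elemental vectors and matrices of size n = m*s used throughout this appendix
# (plain-Python sketch; nested lists stand in for the paper's matrices).

def ones_vector(m, s):
    """The all-ones column vector 1(m, s)."""
    return [1] * (m * s)

def unit_vector(i, m, s):
    """The unit vector e_i(m, s), with 1 at the i-th position."""
    return [int(k == i) for k in range(m * s)]

def identity_matrix(m, s):
    """The identity matrix I(m, s)."""
    n = m * s
    return [[int(i == j) for j in range(n)] for i in range(n)]

def shifting_matrix(m, s):
    """The shifting matrix E(m, s), with 1 on the first subdiagonal."""
    n = m * s
    return [[int(i == j + 1) for j in range(n)] for i in range(n)]

def strictly_lower_ones(m, s):
    """The matrix with 1 at every strictly lower position (i > j)."""
    n = m * s
    return [[int(i > j) for j in range(n)] for i in range(n)]
```
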
1.1 Matrix Description of the Ultimate Spatial Matrix
In this subsection we present a matrix description of how to obtain the ultimate spatial matrix. To that end, we define the matrices of order ms
related to the description of the ESTDG method, and
related to the definition of the temporal differences of the stage solutions. Here all indices i and j run from 0 to \(ms-1\), and \(\vartheta \) is the parameter mentioned in Sect. 3.1.1. We remark that all entries of the above matrices are set to zero if not explicitly stated or defined.
1.1.1 Elemental Formula
The ultimate spatial matrix is obtained by running Algorithm 1 for \(\ell =1,2,\ldots ,\zeta \), where the crucial calculation is the increment accumulation in Step 2.
To do that, we define a lower triangular matrix \(\underline{\varvec{A}}^\star (m)=\{a_{ij}^\star (m)\}_{0\le i,j\le ms-1}\), whose entries are all defined to be zero except
Since \(\{\tilde{q}_{ij}(m)\}_{0\le i,j\le ms-1}\) is a lower triangular matrix, all summation ranges in Step 2 can be enlarged to \(\{0,1,\ldots ,ms-1\}\). Gathering up the related operations until the matrix transferring process stops, we obtain a unified description of the increment procedure at any fixed position. More specifically, the integrated calculation at every \((i',j')\) position reads (dropping (m) here for convenience)
where \(i',j'\) and \(\kappa '\) go through \(\{0,1,\ldots ,ms-1\}\). As a result, the total increment at Step 2 of Algorithm 1 can be expressed in the matrix form
where the definition (3.16), i.e., \(\tilde{q}_{ij}(m)=q_{ij}(m;\vartheta )+\vartheta \delta _{ij}\) is used.
From Step 3 of Algorithm 1, we have the ultimate spatial matrix (below, the last row and column are dropped, since they are always zero)
with the symmetric matrix
The entry in the lower triangular zone is defined as
the same as in the ultimate spatial matrix in [26] for the RKDG methods with a fixed numerical flux parameter.
To investigate the property of the second term in (7.3), we just need to study the perturbation matrix
Taking into account the definition of the contribution index, we only pay attention to the top-left entries in (7.6). In what follows we deduce a convenient and unified formula for
The formula for every \(b_{i\ell }^{\star }(m)\) has been given in [26], but it varies according to the relative sizes of i and \(\ell \). In this paper we rebuild an equivalent and unified formula, as stated in the next lemma.
Lemma 7.1
For \(0\le i\le \zeta -1\), there holds
Here and below we define \(\alpha _{i'}(m)=0\) if \(i'>ms\) for simplicity.
We postpone the proof of this lemma to Sect. 7.1.3. Substituting (7.8) into (7.7) yields, for any \(0\le i,j\le \zeta -1\), that
In what follows we set up a useful formula for \(\pi _{\kappa ,j}(m;\vartheta )\) in terms of the data defining the ESTDG method.
1.1.2 Formula of \(\pi _{\kappa ,j}(m;\vartheta )\)
Due to (3.13) and (3.8), we can respectively obtain
This implies \(\underline{\varvec{Q}}(m;\vartheta ) =\underline{\varvec{\varSigma }}(m)\underline{\varvec{D}}(m)^{-1}\underline{\varvec{W}}(m;\vartheta )\underline{\varvec{\varSigma }}(m)^{-1}\). With the short notation
it follows from (7.9) and \(q_{\ell ,j}(m;\vartheta )=\varvec{e}_{\ell }^{\top }(m)\underline{\varvec{Q}}(m;\vartheta )\varvec{e}_j(m)\) that
Below we express the three terms in (7.12), starting from the calculation of \(\underline{\varvec{\varSigma }}(m)^{-1}\).
By denoting (here and below we omit (m) for the matrix entry)
the definition procedure of the temporal differences of stage solutions can be written into the matrix form
Recalling the evolution identity, taking the matrix inverse on both sides of the above identity yields
where we have used (7.10) to get \(\underline{\varvec{\varPhi }}(m)^{-1}=\underline{\varvec{D}}(m)\underline{\varvec{\varSigma }}(m)^{-1}\). Comparing the matrix entries on both sides, we obtain the following equalities for every column of the matrix \(\underline{\varvec{\varSigma }}(m)^{-1}\),
and for every evolution coefficient in (3.11),
Then, an induction process for (7.13) yields that
and the matrix identity
For any \(\kappa \ge 0\), substituting (7.14b) into (7.11) yields
where (7.16) is used at the last step. Substituting (7.17) and (7.15) into (7.12), we finally have
In order to investigate the relationship between this quantity and the multistep number, we use the (right) Kronecker product of matrices [25] to simplify each term in (7.18). For example, we will use
which implies
Due to the definition (3.3), we derive
where \(\underline{\varvec{W}}(\vartheta )=\underline{\varvec{W}}(1;\vartheta )\). Based on these identities, by some lengthy matrix manipulations, we obtain the following important conclusions
In this process, we have used the following simple conclusions
and an important identity as a corollary of (7.14a) and \(\alpha _0(m)=1\),
For ease of reading, we present the verifications of (7.22) in Sect. 7.1.4.
With the help of (7.21), substituting (7.22c) and (7.22d) into (7.18) yields the final simplified expression
If needed, we can use (7.22b) to further deal with \(\underline{\varvec{K}}(m)\).
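The Kronecker-product simplifications in this subsection repeatedly use the mixed-product rule \((A\otimes B)(C\otimes D)=(AC)\otimes (BD)\) surveyed in [25]. A generic plain-Python check of this rule (an illustration only, not the paper's computation; the helper names are ours):

```python
# Verify the mixed-product rule (A x B)(C x D) = (AC) x (BD) on small matrices.

def matmul(A, B):
    """Ordinary matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product of nested-list matrices."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

A = [[1, 2], [0, 1]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
D = [[1, 1], [0, 1]]
assert matmul(kron(A, B), kron(C, D)) == kron(matmul(A, C), matmul(B, D))
```
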
1.1.3 Proof of Lemma 7.1
To end this subsection, we prove the skipped Lemma 7.1. Since the related manipulations do not depend on the spatial discretization, the results given in [26, Lemma 3.1] still hold. Hence, for \(0\le j'\le \zeta \) and \(j'< i'\le ms\) we have
and for \(1\le i'\le \zeta \) we have
Based on the formulas in (7.26), we can prove this lemma by a simple case-by-case discussion on \(\ell \).
If \(\ell >i\), since \(\mathbb {B}^{\star }(m)\) is symmetric, it follows from (7.5) that
This proves (7.8) by using (7.26a) with \(i'=\ell +1\) and \(j'=i\).
Otherwise, if \(\ell \le i\), we similarly have from (7.26a) that
To show it can be written in (7.8), we just need to show \(\varUpsilon =0\), with
Here we have used the index replacements \(\kappa '=\ell -\kappa \) and \(\kappa '=\ell +1+\kappa \) in the two summations of the first equality, respectively. The verification is straightforward, as follows.
-
If \(\ell +i+1\) is odd, the replacement \(\kappa '=i+\ell +1-\kappa \) implies \(\varUpsilon =(-1)^{i+\ell +1}\varUpsilon \) and hence \(\varUpsilon =0\).
-
Otherwise, if \(\ell +i+1\) is even, say \(\ell +i+1=2L\), a simple replacement of the summation index again gives
$$\begin{aligned} (-1)^{\ell -L}\varUpsilon = \sum _{-L\le \kappa \le L} (-1)^\kappa \alpha _{L+\kappa }(m)\alpha _{L-\kappa }(m) =a_{L,L}^{(L)}(m), \end{aligned}$$where the last step uses (7.26b). Since \(L<\zeta \), it follows \(a_{L,L}^{(L)}(m)=0\) from the definition of \(\zeta \). This implies \(\varUpsilon =0\) also.
Summing up the above conclusions completes the proof of this lemma.
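The parity argument in the first bullet is generic: for any sequence \(a_0,a_1,\ldots \) and any odd n, the alternating sum \(\sum _{0\le \kappa \le n}(-1)^\kappa a_\kappa a_{n-\kappa }\) vanishes, since the substitution \(\kappa \rightarrow n-\kappa \) flips its sign. A small numerical check (ours, for illustration):

```python
import random

def alternating_convolution(a, n):
    """Sum of (-1)^k * a_k * a_{n-k} over k = 0, ..., n."""
    return sum((-1) ** k * a[k] * a[n - k] for k in range(n + 1))

random.seed(0)
a = [random.randint(-5, 5) for _ in range(10)]
# For odd n the substitution k -> n - k gives S = (-1)^n S = -S, hence S = 0.
for n in (1, 3, 5, 7, 9):
    assert alternating_convolution(a, n) == 0
```
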
1.1.4 Verifications of (7.22)
To verify the first identity (7.22a), we start from the definition of \(\underline{\varvec{S}}(m)\). Substituting the identities (7.19), (7.21) and (7.20), we have
Expanding the right-hand side and using the definition of \(\underline{\varvec{S}}\), after some manipulations we have
Since \((\hat{\underline{\varvec{E}}})^m\) is a zero matrix, the inverse of the first matrix is expressed by
Using (7.24), we have for any \(i\ge 1\) that
Summing up the above identities, we have
where we have used the definition (7.1) of \(\hat{\underline{\varvec{L}}}\) at the second step. This completes the verification of (7.22a).
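The inversion step in this verification uses that \((\hat{\underline{\varvec{E}}})^m\) is the zero matrix, so the inverse is the finite Neumann series \(\hat{\underline{\varvec{I}}}+\hat{\underline{\varvec{E}}}+\cdots +(\hat{\underline{\varvec{E}}})^{m-1}\). A generic sketch for a nilpotent shift matrix (the size and helper names are ours):

```python
def matmul(A, B):
    """Ordinary matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

m = 4
E = [[int(i == j + 1) for j in range(m)] for i in range(m)]  # shift matrix, E^m = 0

# Finite Neumann series S = I + E + E^2 + ... + E^{m-1}
S, P = identity(m), identity(m)
for _ in range(m - 1):
    P = matmul(P, E)
    S = [[S[i][j] + P[i][j] for j in range(m)] for i in range(m)]

I_minus_E = [[int(i == j) - E[i][j] for j in range(m)] for i in range(m)]
assert matmul(I_minus_E, S) == identity(m)  # (I - E)^{-1} equals the series
```
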
We start the verification of (7.22b) from the definition (7.13b) of \(\underline{\varvec{K}}(m)\). Substituting the identities (7.20), (7.22a) and (7.21), we have
Expanding the right-hand side, using (7.24) and the first identity in (7.23), we achieve
where at the last step we have used the definitions of \(\varvec{q}\) and \(\varvec{p}^\top \) in (7.13a) and (7.14b). This completes the verification of (7.22b).
The third identity (7.22c) is verified along the same line. Starting from the definition of \(\varvec{p}(m)^{\top }\) in (7.14b), and substituting the identities (7.19), (7.22a) and (7.21), we have
Expanding the above expression and using (7.24), we have
where at the last step we have used the second identity in (7.23) and the definition of \(\varvec{p}^\top \) in (7.14b). This proves (7.22c).
The fourth identity (7.22d) is verified similarly. To keep this paper concise, we omit the detailed procedure.
1.2 Some Proofs
In this subsection we would like to prove Lemmas 3.6 and 3.7, as well as Propositions 3.1 and 3.2.
1.2.1 Proof of Lemma 3.6
Recalling the definition of \(\pi _{\kappa ,j}(m;\vartheta )\), given in (7.9), it follows from (3.16) and (3.27) that \(\varTheta (m)=\vartheta +\pi _{00}(m;\vartheta )\). Substituting (7.25) implies that
where the simple fact \(\hat{\varvec{1}}^\top \hat{\underline{\varvec{I}}}\hat{\varvec{1}}=m\) is used. This completes the proof of Lemma 3.6.
Remark 7.1
Taking \(m=1\) and \(\vartheta =\varTheta \) in (7.27), we use Lemma 3.6 to get
This is just the conclusion in Lemma 3.5 with \(m=1\). As an essential property of the averaged numerical flux parameter, it plays an important role in the proof of Lemma 3.7.
1.2.2 Proof of Lemma 3.7
For convenience of notations, in what follows we use a generic notation C to denote a positive constant independent of m. Recalling the proof of [26, Proposition 3.3], we have for \(0\le i,j\le \zeta -1\) that
where \(\{\frac{2}{i!j!(i+j+1)}\}_{0\le i,j\le \zeta -1}\) forms a symmetric positive definite matrix congruent to a Hilbert matrix. Since \(\varTheta >1/2\), it follows from (7.3) and (7.6) with \(\vartheta =\varTheta \) that we can prove this lemma by showing that the \(z_{ij}(m;\varTheta )\) for \(0\le i,j\le \zeta -1\) all tend to zero as m goes to infinity. By (7.9), it is sufficient to prove
since [26, inequality (3.16)] shows that \(\alpha _{i-\kappa }(m)\) is bounded independently of m.
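The positive definiteness claimed for \(\{\frac{2}{i!j!(i+j+1)}\}\) can be checked directly: this matrix equals \(DHD\) with the Hilbert matrix \(H_{ij}=1/(i+j+1)\) and \(D=\mathrm{diag}(\sqrt{2}/i!)\), so it is congruent to H. A sketch verifying Sylvester's criterion in exact rational arithmetic (the sample size standing in for \(\zeta \) is our choice):

```python
from fractions import Fraction
from math import factorial

def det(A):
    """Determinant via Gaussian elimination on exact Fractions (no pivoting
    needed here, since all leading minors of an SPD matrix are nonzero)."""
    A = [row[:] for row in A]
    n = len(A)
    d = Fraction(1)
    for k in range(n):
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

z = 4  # sample size standing in for zeta
M = [[Fraction(2, factorial(i) * factorial(j) * (i + j + 1))
      for j in range(z)] for i in range(z)]

# Sylvester's criterion: all leading principal minors are positive.
assert all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, z + 1))
```
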
Denote \(\pi _{\kappa ,j}=\pi _{\kappa ,j}(m;\varTheta )\) and \(\underline{\varvec{W}}=\underline{\varvec{W}}(\varTheta )\) for simplicity. Below we prove (7.30) for different cases of \(\kappa \) and j, where (7.28) plays an important role in controlling the accumulation and growth as m goes to infinity.
-
If \(\kappa =j=0\), we have \( \pi _{0,0}= (\hat{\varvec{1}}^{\top }\hat{\underline{\varvec{I}}}\hat{\varvec{1}})\otimes (\varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}\varvec{q})=0\), due to (7.28).
-
If \(\kappa >0\) and \(j>0\), we have
$$\begin{aligned} \pi _{\kappa ,j} = \frac{1}{m} \Big (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }\Big ) [\underline{\varvec{K}}(m)]^{\kappa -1} \varvec{\varPi }_{\kappa ,j}(m) [\underline{\varvec{K}}(m)]^{j-1} \Big (\hat{\varvec{1}}\otimes \varvec{q}\Big ), \end{aligned}$$(7.31)where \( \varvec{\varPi }_{\kappa ,j}(m) = \underline{\varvec{K}}(m) \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\Big ) \underline{\varvec{K}}(m)\). Substituting (7.22b) into this formula and then using (7.28) to eliminate the term involving \(\hat{\underline{\varvec{L}}}^2\), after some manipulations we obtain
$$\begin{aligned} \begin{aligned} \varvec{\varPi }_{\kappa ,j}(m) =&\; \frac{1}{m^2}\hat{\underline{\varvec{L}}}\otimes [ \varvec{q}\varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\varvec{q}\varvec{p}^{\top } ]\\&\; +\frac{1}{m^2}\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}. \end{aligned} \end{aligned}$$The row norms for all matrices (including the row vectors and column vectors) do not depend on m, except that \(\Vert \hat{\underline{\varvec{L}}}\Vert _\infty =m-1\). Hence we have
$$\begin{aligned} \Vert \varvec{\varPi }_{\kappa ,j}(m)\Vert _{\infty }\le \frac{C}{m}. \end{aligned}$$Noticing \(\Vert \frac{1}{m} (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top })\Vert _{\infty }\le C\) and \(\Vert \underline{\varvec{K}}(m)\Vert _{\infty }\le C\), we get from (7.31) what we want to prove.
-
If \(\kappa =0\) and \(j>0\), we have \( \pi _{0,j}=\frac{1}{m} \varvec{\varPi }_{0,j}(m)[\underline{\varvec{K}}(m)]^{j-1}(\hat{\varvec{1}}\otimes \varvec{q})\) with
$$\begin{aligned} \varvec{\varPi }_{0,j}(m)= \Big (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }\Big ) \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\Big )\underline{\varvec{K}}(m) = \frac{1}{m}\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top } \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}, \end{aligned}$$by some manipulations with the help of (7.22b) and (7.28). The remaining proof follows the same line as above, hence is omitted.
-
If \(\kappa >0\) and \(j=0\), we have \(\pi _{\kappa ,0}= \frac{1}{m}(\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }) [\underline{\varvec{K}}(m)]^{\kappa -1}\varvec{\varPi }_{\kappa ,0}(m)\), where
$$\begin{aligned} \varvec{\varPi }_{\kappa ,0}(m) = \underline{\varvec{K}}(m) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}) (\hat{\varvec{1}}\otimes \varvec{q}) = \hat{\varvec{1}}\otimes \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\varvec{q}, \end{aligned}$$with the help of (7.22b) and (7.28). Then we can prove (7.30) as above.
Summing up the above conclusions, we verify (7.30) and then prove this lemma.
1.2.3 Proof of Propositions 3.1 and 3.2
Taking \(\vartheta =0\) in (7.27) and substituting the definition of \(\varvec{p}^{\top }\) and \(\varvec{q}\), we have
This identity will be used to prove these propositions.
Since we have assumed \(c_{\ell \kappa }\ge 0\) for any \(\ell \) and \(\kappa \) in this paper, all entries of \(\underline{\varvec{S}}^{-1}\) are non-negative due to the simple fact
Hence we can conclude from (7.32) that \(\varTheta \) is a non-negative linear combination of the entries of \(\underline{\varvec{W}}(0)=\{d_{\ell \kappa }\theta _{\ell \kappa }\}_{0\le \ell ,\kappa \le s-1}\). Considering the special case in which all numerical flux parameters are the same shows that the combination weights sum to one, so \(\varTheta \) is a weighted average of the \(\theta _{\ell \kappa }\). This proves Proposition 3.1.
Remark 7.2
This is the only place that the condition \(c_{\ell \kappa }\ge 0\) is used in this paper.
For the LWDG method with the time marching coefficients (2.10), we have \(\underline{\varvec{S}}=\underline{\varvec{I}}\) and then get from (7.32) that
since \(\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{C}}=\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{C}}=\underline{\varvec{I}}\). This completes the proof of Proposition 3.2.
About this article
Cite this article
Xu, Y., Shu, CW. & Zhang, Q. Stability Analysis and Error Estimate of the Explicit Single-Step Time-Marching Discontinuous Galerkin Methods with Stage-Dependent Numerical Flux Parameters for a Linear Hyperbolic Equation in One Dimension. J Sci Comput 100, 64 (2024). https://doi.org/10.1007/s10915-024-02621-2
Keywords
- Discontinuous Galerkin method
- Explicit single step time marching
- Stage-dependent numerical flux parameters
- Hyperbolic equation
- Stability analysis and error estimate