Stability Analysis and Error Estimate of the Explicit Single-Step Time-Marching Discontinuous Galerkin Methods with Stage-Dependent Numerical Flux Parameters for a Linear Hyperbolic Equation in One Dimension

Abstract

In this paper, we present the \(\hbox {L}^2\)-norm stability analysis and error estimate for the explicit single-step time-marching discontinuous Galerkin (DG) methods with stage-dependent numerical flux parameters, when solving a linear constant-coefficient hyperbolic equation in one dimension. Two well-known examples of this class are the Runge–Kutta DG method with the downwind treatment for negative time-marching coefficients, and the Lax–Wendroff DG method with arbitrary numerical flux parameters for the auxiliary variables. The stability analysis framework is an extension and application of the matrix transferring process based on the temporal differences of stage solutions, and a new concept, named the averaged numerical flux parameter, is proposed to reveal the essential upwind mechanism in the fully discrete setting. In contrast to the traditional analysis, we present a novel way to obtain the optimal error estimate in both space and time. The main tool is a series of space–time approximation functions for a given spatial function, which preserve the local structure of the fully discrete schemes and the balance of exact evolution under the control of the partial differential equation. Finally, some numerical experiments are given to validate the theoretical results proposed in this paper.


Data Availability

The datasets generated during the current study are available from the corresponding author upon reasonable request.

References

  1. Ai, J., Xu, Y., Shu, C.W., Zhang, Q.: \({\rm L}^2\) error estimate to smooth solutions of high order Runge–Kutta discontinuous Galerkin method for scalar nonlinear conservation laws with and without sonic points. SIAM J. Numer. Anal. 60(4), 1741–1773 (2022). https://doi.org/10.1137/21M1435495

  2. Chavent, G., Cockburn, B.: The local projection \(P^0P^1\)-discontinuous-Galerkin finite element method for scalar conservation laws. RAIRO Modél. Math. Anal. Numér. 23(4), 565–592 (1989). https://doi.org/10.1051/m2an/1989230405651

  3. Cheng, Y., Meng, X., Zhang, Q.: Application of generalized Gauss–Radau projections for the local discontinuous Galerkin method for linear convection–diffusion equations. Math. Comp. 86(305), 1233–1267 (2017). https://doi.org/10.1090/mcom/3141

  4. Ciarlet, P.G.: The finite element method for elliptic problems. In: Studies in Mathematics and its Applications, vol. 4. North-Holland Publishing Co., New York (1978)

  5. Cockburn, B., Hou, S., Shu, C.W.: The Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. IV. The multidimensional case. Math. Comput. 54(190), 545–581 (1990). https://doi.org/10.2307/2008501

  6. Cockburn, B., Lin, S.Y., Shu, C.W.: TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. III. One-dimensional systems. J. Comput. Phys. 84(1), 90–113 (1989). https://doi.org/10.1016/0021-9991(89)90183-6

  7. Cockburn, B., Shu, C.W.: TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws. II. General framework. Math. Comput. 52(186), 411–435 (1989). https://doi.org/10.2307/2008474

  8. Cockburn, B., Shu, C.W.: The Runge–Kutta local projection \(P^1\)-discontinuous-Galerkin finite element method for scalar conservation laws. RAIRO Modél. Math. Anal. Numér. 25(3), 337–361 (1991). https://doi.org/10.1051/m2an/1991250303371

  9. Cockburn, B., Shu, C.W.: The Runge–Kutta discontinuous Galerkin method for conservation laws. V. Multidimensional systems. J. Comput. Phys. 141(2), 199–224 (1998). https://doi.org/10.1006/jcph.1998.5892

  10. Cockburn, B., Shu, C.W.: Runge–Kutta discontinuous Galerkin methods for convection-dominated problems. J. Sci. Comput. 16(3), 173–261 (2001). https://doi.org/10.1023/A:1012873910884

  11. Gottlieb, S., Ruuth, S.J.: Optimal strong-stability-preserving time-stepping schemes with fast downwind spatial discretizations. J. Sci. Comput. 27(1–3), 289–303 (2006). https://doi.org/10.1007/s10915-005-9054-8

  12. Gottlieb, S., Shu, C.W.: Total variation diminishing Runge–Kutta schemes. Math. Comput. 67(221), 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2

  13. Guo, W., Qiu, J., Qiu, J.: A new Lax–Wendroff discontinuous Galerkin method with superconvergence. J. Sci. Comput. 65(1), 299–326 (2015). https://doi.org/10.1007/s10915-014-9968-0

  14. Liu, Y., Shu, C.W., Zhang, M.: Sub-optimal convergence of discontinuous Galerkin methods with central fluxes for linear hyperbolic equations with even degree polynomial approximations. J. Comput. Math. 39(4), 518–537 (2021). https://doi.org/10.4208/jcm.2002-m2019-0305

  15. Qiu, J., Zhang, Q.: Stability, error estimate and limiters of discontinuous Galerkin methods. In: Handbook of Numerical Methods for Hyperbolic Problems, Handbook of Numerical Analysis, vol. 17, pp. 147–171. Elsevier, Amsterdam (2016). https://doi.org/10.1016/bs.hna.2016.06.001

  16. Ruuth, S.J.: Global optimization of explicit strong-stability-preserving Runge–Kutta methods. Math. Comput. 75(253), 183–207 (2006). https://doi.org/10.1090/S0025-5718-05-01772-2

  17. Ruuth, S.J., Spiteri, R.J.: Two barriers on strong-stability-preserving time discretization methods. J. Sci. Comput. 17(1–4), 211–220 (2002). https://doi.org/10.1023/A:1015156832269

  18. Ruuth, S.J., Spiteri, R.J.: High-order strong-stability-preserving Runge–Kutta methods with downwind-biased spatial discretizations. SIAM J. Numer. Anal. 42(3), 974–996 (2004). https://doi.org/10.1137/S0036142902419284

  19. Shu, C.W.: Total-variation-diminishing time discretizations. SIAM J. Sci. Stat. Comput. 9(6), 1073–1084 (1988). https://doi.org/10.1137/0909073

  20. Shu, C.W.: Discontinuous Galerkin methods: general approach and stability. In: Numerical Solutions of Partial Differential Equations, Adv. Courses Math. CRM Barcelona, pp. 149–201. Birkhäuser, Basel (2009)

  21. Shu, C.W.: Discontinuous Galerkin methods for time-dependent convection dominated problems: basics, recent developments and comparison with other methods. In: Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, Lecture Notes in Computational Science and Engineering, vol. 114, pp. 369–397. Springer, New York (2016)

  22. Shu, C.W., Osher, S.: Efficient implementation of essentially nonoscillatory shock-capturing schemes. J. Comput. Phys. 77(2), 439–471 (1988). https://doi.org/10.1016/0021-9991(88)90177-5

  23. Sun, Z., Shu, C.W.: Stability analysis and error estimates of Lax–Wendroff discontinuous Galerkin methods for linear conservation laws. ESAIM Math. Model. Numer. Anal. 51(3), 1063–1087 (2017). https://doi.org/10.1051/m2an/2016049

  24. Sun, Z., Shu, C.W.: Strong stability of explicit Runge–Kutta time discretizations. SIAM J. Numer. Anal. 57(3), 1158–1182 (2019). https://doi.org/10.1137/18M122892X

  25. Van Loan, C.F.: The ubiquitous Kronecker product. J. Comput. Appl. Math. 123(1–2), 85–100 (2000). https://doi.org/10.1016/S0377-0427(00)00393-9

  26. Xu, Y., Meng, X., Shu, C.W., Zhang, Q.: Superconvergence analysis of the Runge–Kutta discontinuous Galerkin methods for a linear hyperbolic equation. J. Sci. Comput. 84, 23 (2020). https://doi.org/10.1007/s10915-020-01274-1

  27. Xu, Y., Shu, C.W., Zhang, Q.: Error estimate of the fourth-order Runge–Kutta discontinuous Galerkin methods for linear hyperbolic equations. SIAM J. Numer. Anal. 58(5), 2885–2914 (2020). https://doi.org/10.1137/19M1280077

  28. Xu, Y., Zhang, Q.: Superconvergence analysis of the Runge–Kutta discontinuous Galerkin method with upwind-biased numerical flux for two dimensional linear hyperbolic equation. Commun. Appl. Math. Comput. 4, 319–352 (2022). https://doi.org/10.1007/s42967-020-00116-z

  29. Xu, Y., Zhang, Q., Shu, C.W., Wang, H.: The \({\rm L}^2\)-norm stability analysis of Runge–Kutta discontinuous Galerkin methods for linear hyperbolic equations. SIAM J. Numer. Anal. 57(4), 1574–1601 (2019). https://doi.org/10.1137/18M1230700

  30. Xu, Y., Zhao, D., Zhang, Q.: Local error estimates for Runge–Kutta discontinuous Galerkin methods with upwind-biased numerical fluxes for a linear hyperbolic equation in one-dimension with discontinuous initial data. J. Sci. Comput. 91, 11 (2022). https://doi.org/10.1007/s10915-022-01793-z

  31. Zhang, Q., Shu, C.W.: Error estimates to smooth solutions of Runge–Kutta discontinuous Galerkin methods for scalar conservation laws. SIAM J. Numer. Anal. 42(2), 641–666 (2004). https://doi.org/10.1137/S0036142902404182

  32. Zhang, Q., Shu, C.W.: Stability analysis and a priori error estimates of the third order explicit Runge–Kutta discontinuous Galerkin method for scalar conservation laws. SIAM J. Numer. Anal. 48(3), 1038–1063 (2010). https://doi.org/10.1137/090771363


Funding

Yuan Xu is supported by NSFC Grant 12301513, Natural Science Foundation of Jiangsu Province Grant BK20230374 and Natural Science Foundation of Jiangsu Higher Education Institutions of China Grant 23KJB110019. Chi-Wang Shu is supported by NSF grant DMS-2309249. Qiang Zhang is supported by NSFC Grant 12071214.

Author information

Corresponding author

Correspondence to Qiang Zhang.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this section we provide supplementary material for the conclusions left unproved in Sect. 3. The process involves a number of notations and matrix manipulations.

To that end, we first set up some elementary notation. Associated with the multistep number m and the stage number s, we introduce column vectors and square matrices of size ms whose entries are either 0 or 1. More specifically, we denote \(\varvec{1}(m,s)=(1,1,\ldots ,1)^\top \) and let \(\varvec{e}_i(m,s)\), for \(0\le i\le ms-1\), be the unit vector with 1 at the i-th position. Let \(\underline{\varvec{I}}(m,s)\) be the identity matrix and \(\underline{\varvec{E}}(m,s)\) be the shift matrix with 1 on the first subdiagonal. Then we define

$$\begin{aligned} \underline{\varvec{L}}(m,s)=\Big [\underline{\varvec{I}}(m,s)-\underline{\varvec{E}}(m,s)\Big ]^{-1}-\underline{\varvec{I}}(m,s)= \sum _{1\le \kappa \le ms-1}\underline{\varvec{E}}(m,s)^{\kappa }, \end{aligned}$$
(7.1)

which has 1 throughout the strictly lower triangular region. For simplicity of notation, we denote, for example,

$$\begin{aligned} \varvec{1}(m)=\varvec{1}(m,s), \quad \varvec{1}=\varvec{1}(1,s), \quad \hat{\varvec{1}}=\varvec{1}(m,1). \end{aligned}$$

This notation rule will be used throughout the entire section.
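
Since these objects recur throughout the appendix, a minimal NumPy sketch (ours, not part of the paper) may help fix the notation; it constructs the vectors and matrices for \(m=2\), \(s=3\) and confirms that the two expressions in (7.1) coincide and that \(\underline{\varvec{L}}(m,s)\) is the strictly lower triangular matrix of ones.

```python
import numpy as np

m, s = 2, 3
n = m * s

ones_vec = np.ones(n)                  # 1(m,s)
e = np.eye(n)                          # e_i(m,s) is the column e[:, i]
I = np.eye(n)                          # I(m,s)
E = np.diag(np.ones(n - 1), k=-1)      # E(m,s): ones on the first subdiagonal

# L(m,s) = (I - E)^{-1} - I, equal to the sum of E^kappa for 1 <= kappa <= n-1
L = np.linalg.inv(I - E) - I
L_sum = sum(np.linalg.matrix_power(E, k) for k in range(1, n))
assert np.allclose(L, L_sum)                           # the two forms in (7.1) agree
assert np.allclose(L, np.tril(np.ones((n, n)), k=-1))  # ones strictly below the diagonal
```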

7.1 Matrix Description of the Ultimate Spatial Matrix

In this subsection we present a matrix description of how the ultimate spatial matrix is obtained. To that end, we define the matrices of order ms

$$\begin{aligned} \underline{\varvec{C}}(m) = \{c_{ij}(m)\}, \quad \underline{\varvec{D}}(m) = \{d_{ij}(m)\}, \quad \underline{\varvec{W}}(m;\vartheta )=\{d_{ij}(m)(\theta _{ij}(m)-\vartheta )\}, \end{aligned}$$
(7.2a)

related to the description of the ESTDG method, and

$$\begin{aligned} \underline{\varvec{\varSigma }}(m)=\{\sigma _{ij}(m)\}, \quad \underline{\varvec{\varPhi }}(m)=\{\phi _{ij}(m)\}, \quad \underline{\varvec{Q}}(m;\vartheta )=\{q_{ij}(m;\vartheta )\}, \end{aligned}$$
(7.2b)

related to the definition of temporal differences of stage solutions. Here all indices i and j run from 0 to \(ms-1\), and \(\vartheta \) is the parameter mentioned in Sect. 3.1.1. We remark that all entries of the above matrices are set to zero if they are not explicitly stated or defined.

7.1.1 Elemental Formula

The ultimate spatial matrix is obtained by running Algorithm 1 for \(\ell =1,2,\ldots ,\zeta \), where the crucial calculation is the increment accumulation in Step 2.

To describe this accumulation, we define a lower triangular matrix \(\underline{\varvec{A}}^\star (m)=\{a_{ij}^\star (m)\}_{0\le i,j\le ms-1}\), whose entries are all defined to be zero except

$$\begin{aligned} a_{ij}^\star (m)=(1-\delta _{ij}/2)a_{i+1,j}^{(j)}(m), \quad \text{ for }\quad j\le i\le ms-1\quad \text{ and }\quad 0\le j\le \zeta -1. \end{aligned}$$

Since \(\{\tilde{q}_{ij}(m)\}_{0\le i,j\le ms-1}\) is a lower triangular matrix, all summation ranges in Step 2 can be enlarged to \(\{0,1,\ldots ,ms-1\}\). Gathering up the related operations until the matrix transferring process stops, we obtain a unified description of the increment procedure at any fixed position. More specifically, the integrated calculation at every \((i',j')\) position reads (dropping (m) here for convenience)

$$\begin{aligned} g_{i'j'}\leftarrow g_{i'j'} -a_{i'j'}^\star ; \quad g_{i'j'}\leftarrow g_{i'j'} + a_{\kappa ' j'}^\star {\tilde{q}}_{\kappa ' i'}, \quad g_{i'j'}\leftarrow g_{i'j'} + a_{i'\kappa '}^\star \tilde{q}_{\kappa ' j'}, \end{aligned}$$

where \(i',j'\) and \(\kappa '\) run through \(\{0,1,\ldots ,ms-1\}\). As a result, the total increment at Step 2 of Algorithm 1 can be expressed in the matrix form

$$\begin{aligned} \underline{\varvec{G}}(m) = (2\vartheta -1)\underline{\varvec{A}}^\star (m) +\underline{\varvec{Q}}(m;\vartheta )^\top \underline{\varvec{A}}^\star (m) +\underline{\varvec{A}}^\star (m)\underline{\varvec{Q}}(m;\vartheta ), \end{aligned}$$

where the definition (3.16), i.e., \(\tilde{q}_{ij}(m)=q_{ij}(m;\vartheta )+\vartheta \delta _{ij}\) is used.
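
As a sanity check of this matrix form, the following sketch (ours, with random lower triangular stand-ins for \(\underline{\varvec{A}}^\star (m)\) and \(\underline{\varvec{Q}}(m;\vartheta )\); the actual entries come from Algorithm 1) accumulates the three entrywise updates above over all positions and compares the result with the closed form of \(\underline{\varvec{G}}(m)\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # plays the role of ms
theta = 0.8                              # the parameter vartheta

A_star = np.tril(rng.random((n, n)))     # stand-in for the lower triangular A*(m)
Q = np.tril(rng.random((n, n)))          # stand-in for Q(m; vartheta)
Q_tilde = Q + theta * np.eye(n)          # definition (3.16)

# entrywise accumulation over all positions (i', j') and indices kappa'
G_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        G_loop[i, j] -= A_star[i, j]
        for k in range(n):
            G_loop[i, j] += A_star[k, j] * Q_tilde[k, i]
            G_loop[i, j] += A_star[i, k] * Q_tilde[k, j]

# the matrix form: G = (2*vartheta - 1) A* + Q^T A* + A* Q
G_matrix = (2 * theta - 1) * A_star + Q.T @ A_star + A_star @ Q
assert np.allclose(G_loop, G_matrix)
```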

From Step 3 of Algorithm 1, we have the ultimate spatial matrix (below, the last row and column are dropped, since they are always zero)

$$\begin{aligned} \begin{aligned} \mathbb {B}(m)=&\; \underline{\varvec{G}}(m)+\underline{\varvec{G}}(m)^\top \\ =&\; \Big (\vartheta -\frac{1}{2}\Big )\underline{\varvec{B}}^{\star }(m) +\frac{1}{2}\Big [\underline{\varvec{B}}^{\star }(m)\underline{\varvec{Q}}(m;\vartheta )+ \underline{\varvec{Q}}(m;\vartheta )^{\top }\underline{\varvec{B}}^{\star }(m)\Big ], \end{aligned} \end{aligned}$$
(7.3)

with the symmetric matrix

$$\begin{aligned} \underline{\varvec{B}}^{\star }(m) =2\underline{\varvec{A}}^\star (m)+2\underline{\varvec{A}}^\star (m)^\top =\{b_{ij}^{\star }(m)\}_{0\le i,j\le ms-1}. \end{aligned}$$
(7.4)

The entries in the lower triangular zone are defined as

$$\begin{aligned} b_{ij}^{\star }(m) = {\left\{ \begin{array}{ll} 2a_{i+1,j}^{(j)}(m),&{}0\le j\le \zeta -1 \text{ and } j\le i\le ms-1, \\ 0,&{}\text{ otherwise }, \end{array}\right. } \end{aligned}$$
(7.5)

which is the same as in the ultimate spatial matrix in [26] for the RKDG methods with a fixed numerical flux parameter.

To investigate the property of the second term in (7.3), we just need to study the perturbation matrix

$$\begin{aligned} \underline{\varvec{Z}}(m;\vartheta )=\underline{\varvec{B}}^{\star }(m)\underline{\varvec{Q}}(m;\vartheta )= \{z_{ij}(m;\vartheta )\}_{0\le i,j\le ms-1}. \end{aligned}$$
(7.6)

Taking into account the definition of the contribution index, we only pay attention to the top-left entries in (7.6). In what follows we deduce a convenient and unified formula for

$$\begin{aligned} z_{ij}(m;\vartheta ) = \sum _{0\le \ell \le ms-1} b_{i\ell }^{\star }(m)q_{\ell ,j}(m;\vartheta ), \quad 0\le i,j\le \zeta -1. \end{aligned}$$
(7.7)

A formula for every \(b_{i\ell }^{\star }(m)\) has been given in [26], but it varies according to the relative sizes of i and \(\ell \). In this paper we rebuild an equivalent, unified formula, as stated in the next lemma.

Lemma 7.1

For \(0\le i\le \zeta -1\), there holds

$$\begin{aligned} b_{i\ell }^{\star }(m)= 2\sum _{0\le \kappa \le i}(-1)^\kappa \alpha _{i-\kappa }(m)\alpha _{\ell +1+\kappa }(m), \quad 0\le \ell \le ms-1. \end{aligned}$$
(7.8)

Here and below we define \(\alpha _{i'}(m)=0\) if \(i'>ms\) for simplicity.

We postpone the proof of this lemma to Sect. 7.1.3. Substituting (7.8) into (7.7) yields, for any \(0\le i,j\le \zeta -1\),

$$\begin{aligned} z_{ij}(m;\vartheta ) = \sum _{0\le \kappa \le i} 2(-1)^{\kappa }\alpha _{i-\kappa }(m) \underbrace{\sum _{0\le \ell \le ms-1} \alpha _{\ell +1+\kappa }(m)q_{\ell ,j}(m;\vartheta )}_{\pi _{\kappa ,j}(m;\vartheta )}. \end{aligned}$$
(7.9)

In what follows we set up a useful formula for \(\pi _{\kappa ,j}(m;\vartheta )\) in terms of the data that define the ESTDG method.

7.1.2 Formula of \(\pi _{\kappa ,j}(m;\vartheta )\)

Due to (3.13) and (3.8), we can respectively obtain

$$\begin{aligned} \underline{\varvec{Q}}(m;\vartheta )\underline{\varvec{\varSigma }}(m)=\underline{\varvec{\varPhi }}(m)\underline{\varvec{W}}(m;\vartheta ), \quad \underline{\varvec{\varPhi }}(m)\underline{\varvec{D}}(m)=\underline{\varvec{\varSigma }}(m). \end{aligned}$$
(7.10)

This implies \(\underline{\varvec{Q}}(m;\vartheta ) =\underline{\varvec{\varSigma }}(m)\underline{\varvec{D}}(m)^{-1}\underline{\varvec{W}}(m;\vartheta )\underline{\varvec{\varSigma }}(m)^{-1}\). With the short notation

$$\begin{aligned} \varvec{y}^{\top }(m)= \sum _{0\le \ell \le ms-1} \alpha _{\ell +1+\kappa }(m)\varvec{e}_{\ell }^{\top }(m) \underline{\varvec{\varSigma }}(m), \end{aligned}$$
(7.11)

it follows from (7.9) and \(q_{\ell ,j}(m;\vartheta )=\varvec{e}_{\ell }^{\top }(m)\underline{\varvec{Q}}(m;\vartheta )\varvec{e}_j(m)\) that

$$\begin{aligned} \pi _{\kappa ,j}(m;\vartheta ) = \varvec{y}^{\top }(m) \cdot \Big [\underline{\varvec{D}}^{-1}(m)\underline{\varvec{W}}(m;\vartheta )\Big ]\cdot \Big [\underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_j(m)\Big ]. \end{aligned}$$
(7.12)

Below we express the three factors in (7.12). We start with the calculation of \(\underline{\varvec{\varSigma }}(m)^{-1}\).

By denoting (here and below we omit (m) for matrix entries)

$$\begin{aligned} \underline{\varvec{S}}(m)=\underline{\varvec{I}}(m)-\underline{\varvec{C}}(m)\underline{\varvec{E}}(m) = \begin{pmatrix} 1\\ -c_{11}&{}1\\ -c_{21}&{}-c_{22}&{}1\\ \vdots &{}\vdots &{}&{}\ddots \\ -c_{ms-1,1}&{}-c_{ms-1,2}&{}\cdots &{}-c_{ms-1,ms-1}&{}1 \end{pmatrix}, \end{aligned}$$

the definition procedure of the temporal differences of stage solutions can be written in the matrix form

Recalling the definition of the evolution identity, taking the matrix inverse on both sides of the above identity yields

where we have used (7.10) to get \(\underline{\varvec{\varPhi }}(m)^{-1}=\underline{\varvec{D}}(m)\underline{\varvec{\varSigma }}(m)^{-1}\). Comparing the matrix entries on both sides, we achieve the following equalities for every column of the matrix \(\underline{\varvec{\varSigma }}(m)^{-1}\),

$$\begin{aligned} \underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_0(m) =&\; [\underline{\varvec{I}}(m)+\underline{\varvec{E}}(m)\underline{\varvec{S}}(m)^{-1}\underline{\varvec{C}}(m)]\varvec{e}_0(m) \overset{{\tiny \text{ def }}}{=}\varvec{q}(m), \end{aligned}$$
(7.13a)
$$\begin{aligned} \underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_j(m) =&\; \underbrace{\underline{\varvec{E}}(m)\underline{\varvec{S}}(m)^{-1}\underline{\varvec{D}}(m)}_{\underline{\varvec{K}}(m)} \underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_{j-1}(m), \quad j\ge 1, \end{aligned}$$
(7.13b)

and for every evolution coefficient in (3.11),

$$\begin{aligned} \alpha _0(m) =&\; \varvec{e}_{ms-1}(m)^{\top }\underline{\varvec{S}}(m)^{-1}\underline{\varvec{C}}(m)\varvec{e}_0(m), \end{aligned}$$
(7.14a)
$$\begin{aligned} \alpha _j(m) =&\; \underbrace{\varvec{e}_{ms-1}(m)^{\top }\underline{\varvec{S}}(m)^{-1}\underline{\varvec{D}}(m)}_{\varvec{p}^{\top }(m)} \underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_{j-1}(m), \quad j\ge 1. \end{aligned}$$
(7.14b)

Then, an induction process for (7.13) yields that

$$\begin{aligned} \underline{\varvec{\varSigma }}(m)^{-1}\varvec{e}_j(m)=\underline{\varvec{K}}(m)^j\varvec{q}(m), \quad j\ge 0, \end{aligned}$$
(7.15)

and the matrix identity

$$\begin{aligned} \underline{\varvec{\varSigma }}(m)^{-1}\underline{\varvec{E}}(m) = \underline{\varvec{K}}(m)\underline{\varvec{\varSigma }}(m)^{-1}. \end{aligned}$$
(7.16)

For any \(\kappa \ge 0\), substituting (7.14b) into (7.11) yields

$$\begin{aligned} \begin{aligned} \varvec{y}^{\top }(m) =&\; \varvec{p}(m)^{\top }\underline{\varvec{\varSigma }}(m)^{-1} \left[ \sum _{0\le \ell \le ms-1} \varvec{e}_{\ell +\kappa }(m)\varvec{e}_{\ell }(m)^{\top }\right] \underline{\varvec{\varSigma }}(m)\\ =&\; \varvec{p}(m)^{\top }\underline{\varvec{\varSigma }}(m)^{-1}\underline{\varvec{E}}(m)^{\kappa }\underline{\varvec{\varSigma }}(m)\\ =&\; \varvec{p}(m)^{\top }[\underline{\varvec{\varSigma }}(m)^{-1}\underline{\varvec{E}}(m)\underline{\varvec{\varSigma }}(m)]^{\kappa } = \varvec{p}(m)^{\top }\underline{\varvec{K}}(m)^{\kappa }, \end{aligned} \end{aligned}$$
(7.17)

where (7.16) is used at the last step. Substituting (7.17) and (7.15) into (7.12), we finally have

$$\begin{aligned} \pi _{\kappa ,j}(m;\vartheta ) = \varvec{p}(m)^{\top }\underline{\varvec{K}}(m)^{\kappa } \underline{\varvec{D}}^{-1}(m)\underline{\varvec{W}}(m;\vartheta ) \underline{\varvec{K}}(m)^j\varvec{q}(m). \end{aligned}$$
(7.18)

In order to investigate the relationship between this quantity and the multistep number, we use the (right) Kronecker product of matrices [25] to simplify each term in (7.18). For example, we will use

$$\begin{aligned} \varvec{e}_0(m)=\hat{\varvec{e}}_0\otimes \varvec{e}_0, \quad \varvec{e}_{ms-1}(m)^{\top } = \hat{\varvec{e}}_{m-1}^\top \otimes \varvec{e}_{s-1}^{\top }, \quad \underline{\varvec{I}}(m) = \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}, \end{aligned}$$
(7.19)

which implies

$$\begin{aligned} \underline{\varvec{E}}(m) = \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}+\hat{\underline{\varvec{E}}}\otimes \varvec{e}_0\varvec{e}_{s-1}^{\top }. \end{aligned}$$
(7.20)
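
The decomposition (7.20) is easy to confirm numerically. The short sketch below (ours, with the hypothetical sizes \(m=3\), \(s=4\)) verifies that the big shift matrix splits into a within-block shift plus a between-block coupling through \(\varvec{e}_0\varvec{e}_{s-1}^{\top }\).

```python
import numpy as np

def shift(n):
    """Shift matrix with ones on the first subdiagonal."""
    return np.diag(np.ones(n - 1), k=-1)

m, s = 3, 4
E_big = shift(m * s)                     # E(m,s)
E_hat, E_small = shift(m), shift(s)      # hat-E and E
e0 = np.eye(s)[:, [0]]                   # e_0 as a column vector
e_last = np.eye(s)[:, [s - 1]]           # e_{s-1} as a column vector

# (7.20): E(m,s) = hat-I otimes E + hat-E otimes e_0 e_{s-1}^T
assert np.allclose(E_big,
                   np.kron(np.eye(m), E_small) + np.kron(E_hat, e0 @ e_last.T))
```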

Due to the definition (3.3), we derive

$$\begin{aligned} \underline{\varvec{C}}(m) =\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{C}}, \quad \underline{\varvec{D}}(m) =\frac{1}{m}\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}, \quad \underline{\varvec{W}}(m;\vartheta ) =\frac{1}{m}\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{W}}(\vartheta ), \end{aligned}$$
(7.21)

where \(\underline{\varvec{W}}(\vartheta )=\underline{\varvec{W}}(1;\vartheta )\). Based on these identities, after some lengthy and tedious matrix manipulations we can obtain the following important conclusions

$$\begin{aligned} \underline{\varvec{S}}(m)^{-1} =&\; \hat{\underline{\varvec{L}}}\otimes \underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^{\top } \underline{\varvec{S}}^{-1}+ \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1}, \end{aligned}$$
(7.22a)
$$\begin{aligned} \underline{\varvec{K}}(m) =&\; \frac{1}{m}\Big [\hat{\underline{\varvec{L}}}\otimes \varvec{q}\varvec{p}^{\top }+ \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{D}}\Big ], \end{aligned}$$
(7.22b)
$$\begin{aligned} \varvec{p}(m)^{\top } =&\; \frac{1}{m}\hat{\varvec{1}}^\top \otimes \varvec{p}^{\top }, \end{aligned}$$
(7.22c)
$$\begin{aligned} \varvec{q}(m) =&\; \hat{\varvec{1}}\otimes \varvec{q}. \end{aligned}$$
(7.22d)

In this process, we have used the following simple conclusions

$$\begin{aligned} \hat{\underline{\varvec{E}}}+\hat{\underline{\varvec{E}}}\,\hat{\underline{\varvec{L}}}=\hat{\underline{\varvec{L}}}, \quad \hat{\varvec{e}}_{m-1}^{\top }+\hat{\varvec{e}}_{m-1}^{\top }\hat{\underline{\varvec{L}}}= \hat{\varvec{1}}^{\top }, \quad \hat{\varvec{e}}_0+\hat{\underline{\varvec{L}}}\hat{\varvec{e}}_0=\hat{\varvec{1}}, \end{aligned}$$
(7.23)

and an important identity as a corollary of (7.14a) and \(\alpha _0(m)=1\),

$$\begin{aligned} \varvec{e}_{ms-1}^{\top }(m)\underline{\varvec{S}}(m)^{-1}\underline{\varvec{C}}(m)\varvec{e}_0(m)=1. \end{aligned}$$
(7.24)

For ease of reading, we present the verifications of (7.22) in Sect. 7.1.4.
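
The identities (7.23) can also be confirmed directly; here is a short check (ours) for \(m=5\), using the strictly lower triangular all-ones form of \(\hat{\underline{\varvec{L}}}\) noted after (7.1).

```python
import numpy as np

m = 5
E_hat = np.diag(np.ones(m - 1), k=-1)           # hat-E
L_hat = np.tril(np.ones((m, m)), k=-1)          # hat-L = (hat-I - hat-E)^{-1} - hat-I
e_first = np.eye(m)[:, 0]                       # hat-e_0
e_last = np.eye(m)[:, m - 1]                    # hat-e_{m-1}
ones_vec = np.ones(m)                           # hat-1

assert np.allclose(E_hat + E_hat @ L_hat, L_hat)         # first identity in (7.23)
assert np.allclose(e_last + e_last @ L_hat, ones_vec)    # second identity (row form)
assert np.allclose(e_first + L_hat @ e_first, ones_vec)  # third identity (column form)
```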

With the help of (7.21), substituting (7.22c) and (7.22d) into (7.18) yields the final simplified expression

$$\begin{aligned} \pi _{\kappa ,j}(m;\vartheta ) =\frac{1}{m} \Big (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }\Big ) \underline{\varvec{K}}(m)^\kappa \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}(\vartheta )\Big ) \underline{\varvec{K}}(m)^j \Big (\varvec{1}\otimes \varvec{q}\Big ). \end{aligned}$$
(7.25)

If needed, we can use (7.22b) to further expand \(\underline{\varvec{K}}(m)\).

7.1.3 Proof of Lemma 7.1

To end this subsection, we prove the previously skipped Lemma 7.1. Since the related manipulations do not depend on the spatial discretization, the results given in [26, Lemma 3.1] still hold. Hence, for \(0\le j'\le \zeta \) and \(j'< i'\le ms\) we have

$$\begin{aligned} a_{i'j'}^{(j')}(m)= \sum _{0\le \kappa \le j'} (-1)^\kappa \alpha _{i'+\kappa }(m) \alpha _{j'-\kappa }(m), \end{aligned}$$
(7.26a)

and for \(1\le i'\le \zeta \) we have

$$\begin{aligned} a_{i'i'}^{(i')}(m)= \sum _{-i'\le \kappa \le i'} (-1)^\kappa \alpha _{i'+\kappa }(m)\alpha _{i'-\kappa }(m). \end{aligned}$$
(7.26b)

Based on the formulas in (7.26), we can prove this lemma by a simple case discussion on \(\ell \).

If \(\ell >i\), since \(\mathbb {B}^{\star }(m)\) is symmetric, it follows from (7.5) that

$$\begin{aligned} b_{i\ell }^{\star }(m) =b_{\ell i}^{\star }(m) =2a_{\ell +1,i}^{(i)}(m). \end{aligned}$$

This proves (7.8) by using (7.26a) with \(i'=\ell +1\) and \(j'=i\).

Otherwise, if \(\ell \le i\), we similarly have from (7.26a) that

$$\begin{aligned} b_{i\ell }^{\star }(m)=2a_{i+1,\ell }^{(\ell )}(m) = 2\sum _{0\le \kappa \le \ell } (-1)^\kappa \alpha _{i+1+\kappa }(m)\alpha _{\ell -\kappa }(m). \end{aligned}$$

To show that this can be written in the form (7.8), we just need to show \(\varUpsilon =0\), where

$$\begin{aligned} \begin{aligned} \varUpsilon \overset{{\tiny \text{ def }}}{=}&\; \sum _{0\le \kappa \le \ell } (-1)^\kappa \alpha _{i+1+\kappa }(m)\alpha _{\ell -\kappa }(m) - \sum _{0\le \kappa \le i} (-1)^\kappa \alpha _{i-\kappa }(m)\alpha _{\ell +1+\kappa }(m)\\ =&\; \sum _{0\le \kappa \le \ell +i+1} (-1)^{\ell -\kappa } \alpha _{\kappa }(m)\alpha _{\ell +i+1-\kappa }(m). \end{aligned} \end{aligned}$$

Here we have used the index substitutions \(\kappa '=\ell -\kappa \) and \(\kappa '=\ell +1+\kappa \), respectively, in the two summations of the first equality. The verification is straightforward, as follows.

  • If \(\ell +i+1\) is odd, the replacement \(\kappa '=i+\ell +1-\kappa \) implies \(\varUpsilon =(-1)^{i+\ell +1}\varUpsilon \) and hence \(\varUpsilon =0\).

  • Otherwise, if \(\ell +i+1\) is even, denoted by 2L, a simple substitution of the summation index again gives

    $$\begin{aligned} (-1)^{\ell -L}\varUpsilon = \sum _{-L\le \kappa \le L} (-1)^\kappa \alpha _{L+\kappa }(m)\alpha _{L-\kappa }(m) =a_{L,L}^{(L)}(m), \end{aligned}$$

    where the last step uses (7.26b). Since \(L<\zeta \), the definition of \(\zeta \) gives \(a_{L,L}^{(L)}(m)=0\). This implies \(\varUpsilon =0\) as well.

Summing up the above conclusions completes the proof of this lemma.

7.1.4 Verifications of (7.22)

To verify the first identity (7.22a), we start from the definition of \(\underline{\varvec{S}}(m)\). Substituting the identities (7.19), (7.21) and (7.20), we have

$$\begin{aligned} \underline{\varvec{S}}(m) = \underline{\varvec{I}}(m) - \underline{\varvec{C}}(m)\underline{\varvec{E}}(m) = \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}- (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{C}}) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}+\hat{\underline{\varvec{E}}}\otimes \varvec{e}_0\varvec{e}_{s-1}^\top ). \end{aligned}$$

Expanding the right-hand side and using the definition of \(\underline{\varvec{S}}\), after some manipulations we have

$$\begin{aligned} \begin{aligned} \underline{\varvec{S}}(m) =&\; \hat{\underline{\varvec{I}}}\otimes (\underline{\varvec{I}}-\underline{\varvec{C}}\,\underline{\varvec{E}}) - \hat{\underline{\varvec{E}}}\otimes \underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \\ =&\; \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}- \hat{\underline{\varvec{E}}}\otimes \underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top = ( \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}- \hat{\underline{\varvec{E}}}\otimes \underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} ) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}). \end{aligned} \end{aligned}$$

Since \((\hat{\underline{\varvec{E}}})^m\) is the zero matrix, the inverse of the first factor is given by

$$\begin{aligned} \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}+ \sum _{1\le i\le m-1} (\hat{\underline{\varvec{E}}})^i \otimes (\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1})^i. \end{aligned}$$

Using (7.24), we have for any \(i\ge 1\) that

$$\begin{aligned} (\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1})^i = \underline{\varvec{C}}\varvec{e}_0 ( \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} \underline{\varvec{C}}\varvec{e}_0 )^{i-1} \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} =\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}. \end{aligned}$$

Summing up the above identities, we have

$$\begin{aligned} \begin{aligned} \underline{\varvec{S}}(m)^{-1} =&\; (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1}) \Big [ \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}+ \sum _{1\le i\le m-1} (\hat{\underline{\varvec{E}}})^i \otimes \underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} \Big ]\\ =&\; (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1}) \Big ( \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{I}}+\hat{\underline{\varvec{L}}} \otimes \underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} \Big )\\ =&\; \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1} +\hat{\underline{\varvec{L}}} \otimes \underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}, \end{aligned} \end{aligned}$$

where we have used the definition (7.1) of \(\hat{\underline{\varvec{L}}}\) at the second step. This completes the verification of (7.22a).
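
The identity (7.22a) can be checked numerically as well. In the sketch below (ours), \(\underline{\varvec{C}}\) is a random non-negative lower triangular stand-in, shaped as in the displayed form of \(\underline{\varvec{S}}\); since \(\underline{\varvec{S}}=\underline{\varvec{I}}-\underline{\varvec{C}}\,\underline{\varvec{E}}\) does not involve the first column of \(\underline{\varvec{C}}\), that column can be rescaled afterwards so that the normalization (7.24) holds.

```python
import numpy as np

rng = np.random.default_rng(1)

def shift(n):
    return np.diag(np.ones(n - 1), k=-1)

m, s = 3, 4
E, E_hat = shift(s), shift(m)
L_hat = np.tril(np.ones((m, m)), k=-1)
e0 = np.eye(s)[:, [0]]
e_last = np.eye(s)[:, [s - 1]]

C = np.tril(rng.random((s, s)))                  # hypothetical coefficients
S = np.eye(s) - C @ E                            # unaffected by column 0 of C
C[:, [0]] /= e_last.T @ np.linalg.inv(S) @ C[:, [0]]   # enforce (7.24)
S_inv = np.linalg.inv(S)

# block versions built from (7.19)-(7.21)
C_big = np.kron(np.eye(m), C)
E_big = np.kron(np.eye(m), E) + np.kron(E_hat, e0 @ e_last.T)
S_big = np.eye(m * s) - C_big @ E_big

# right-hand side of (7.22a)
rhs = (np.kron(L_hat, S_inv @ C @ e0 @ e_last.T @ S_inv)
       + np.kron(np.eye(m), S_inv))
assert np.allclose(np.linalg.inv(S_big), rhs)
```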

We start the verification of (7.22b) from the definition (7.13b) of \(\underline{\varvec{K}}(m)\). Substituting the identities (7.20), (7.22a) and (7.21), we have

$$\begin{aligned} \begin{aligned}&\; m\underline{\varvec{K}}(m) = m\underline{\varvec{E}}(m)\underline{\varvec{S}}(m)^{-1}\underline{\varvec{D}}(m)\\&\quad =\;(\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}+\hat{\underline{\varvec{E}}}\otimes \varvec{e}_0\varvec{e}_{s-1}^\top ) ( \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1} + \hat{\underline{\varvec{L}}}\otimes \underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} ) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}). \end{aligned} \end{aligned}$$

Expanding the right-hand side, using (7.24) and the first identity in (7.23), we achieve

$$\begin{aligned} \begin{aligned} m\underline{\varvec{K}}(m) =&\; \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\underline{\varvec{L}}}\otimes \underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\underline{\varvec{L}}}\otimes \varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}\\ =&\; \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\underline{\varvec{L}}}\otimes (\underline{\varvec{I}}+\underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{C}}) \varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}\\ =&\; \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\,\underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\underline{\varvec{L}}}\otimes \varvec{q}\varvec{p}^\top , \end{aligned} \end{aligned}$$

where at the last step we have used the definitions of \(\varvec{q}\) and \(\varvec{p}^\top \) in (7.13a) and (7.14b). This completes the verification of (7.22b).

The third identity (7.22c) is verified along the same lines. Starting from the definition of \(\varvec{p}(m)^{\top }\) in (7.14b), and substituting the identities (7.19), (7.22a) and (7.21), we have

$$\begin{aligned} \begin{aligned} m\varvec{p}(m)^{\top } =&\; m\varvec{e}_{ms-1}(m)^{\top }\underline{\varvec{S}}(m)^{-1}\underline{\varvec{D}}(m)\\ =&\; (\hat{\varvec{e}}_{m-1}^\top \otimes \varvec{e}_{s-1}^\top ) ( \hat{\underline{\varvec{I}}}\otimes \underline{\varvec{S}}^{-1} + \hat{\underline{\varvec{L}}}\otimes \underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0\varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1} ) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}). \end{aligned} \end{aligned}$$

Expanding the above expression and using (7.24), we have

$$\begin{aligned} \begin{aligned} m\varvec{p}(m)^{\top } =&\; \hat{\varvec{e}}_{m-1}^\top \otimes \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\varvec{e}}_{m-1}^\top \hat{\underline{\varvec{L}}} \otimes \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{C}}\varvec{e}_0 \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}\\ =&\; \hat{\varvec{e}}_{m-1}^\top \otimes \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \hat{\varvec{e}}_{m-1}^\top \hat{\underline{\varvec{L}}} \otimes \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}\\ =&\; (\hat{\varvec{e}}_{m-1}^\top +\hat{\varvec{e}}_{m-1}^\top \hat{\underline{\varvec{L}}}) \otimes \varvec{e}_{s-1}^\top \underline{\varvec{S}}^{-1}\underline{\varvec{D}}= \hat{\varvec{1}}^\top \otimes \varvec{p}^\top , \end{aligned} \end{aligned}$$

where at the last step we have used the second identity in (7.23) and the definition of \(\varvec{p}^\top \) in (7.14b). This proves (7.22c).

The fourth identity (7.22d) is verified similarly; to save space, we omit the detailed procedure.

7.2 Some Proofs

In this subsection we prove Lemmas 3.6 and 3.7, as well as Propositions 3.1 and 3.2.

7.2.1 Proof of Lemma 3.6

Recalling the definition of \(\pi _{\kappa ,j}(m;\vartheta )\), given in (7.9), it follows from (3.16) and (3.27) that \(\varTheta (m)=\vartheta +\pi _{00}(m;\vartheta )\). Substituting (7.25) implies that

$$\begin{aligned} \varTheta (m) = \vartheta + \frac{1}{m} \Big (\hat{\varvec{1}}^\top \otimes \varvec{p}^{\top }\Big ) \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}(\vartheta )\Big ) \Big (\hat{\varvec{1}}\otimes \varvec{q}\Big ) = \vartheta + \varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}(\vartheta )\varvec{q}, \end{aligned}$$
(7.27)

where the simple fact \(\hat{\varvec{1}}^\top \hat{\underline{\varvec{I}}}\hat{\varvec{1}}=m\) is used. This completes the proof of Lemma 3.6.
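
The collapse of the Kronecker factors here is just the mixed-product rule \((\varvec{a}^{\top }\otimes \varvec{b}^{\top })(\underline{\varvec{M}}\otimes \underline{\varvec{N}})(\varvec{c}\otimes \varvec{d})=(\varvec{a}^{\top }\underline{\varvec{M}}\varvec{c})(\varvec{b}^{\top }\underline{\varvec{N}}\varvec{d})\); here is a tiny check (ours, with random stand-ins for \(\varvec{p}\), \(\varvec{q}\) and a matrix playing the role of \(\underline{\varvec{D}}^{-1}\underline{\varvec{W}}(\vartheta )\)).

```python
import numpy as np

rng = np.random.default_rng(2)
m, s = 4, 3
ones_vec = np.ones((m, 1))               # hat-1 as a column
p = rng.random((s, 1))                   # stand-in for p
q = rng.random((s, 1))                   # stand-in for q
M = rng.random((s, s))                   # stand-in for D^{-1} W(vartheta)

lhs = np.kron(ones_vec.T, p.T) @ np.kron(np.eye(m), M) @ np.kron(ones_vec, q)
rhs = m * (p.T @ M @ q)                  # the factor m cancels the 1/m in (7.27)
assert np.allclose(lhs, rhs)
```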

Remark 7.1

Taking \(m=1\) and \(\vartheta =\varTheta \) in (7.27), we use Lemma 3.6 to get

$$\begin{aligned} \varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}(\varTheta )\varvec{q}=0. \end{aligned}$$
(7.28)

This is just the conclusion of Lemma 3.5 with \(m=1\). As an essential property of the averaged numerical flux parameter, it plays an important role in the proof of Lemma 3.7.

7.2.2 Proof of Lemma 3.7

For notational convenience, in what follows we use the generic notation C to denote a positive constant independent of m. Recalling the proof of [26, Proposition 3.3], we have for \(0\le i,j\le \zeta -1\) that

$$\begin{aligned} \left| b_{ij}^{\star }(m)-\frac{2}{i!j!(i+j+1)} \right| \le \frac{C}{m}, \end{aligned}$$
(7.29)

where \(\{\frac{2}{i!j!(i+j+1)}\}_{0\le i,j\le \zeta -1}\) forms a symmetric positive definite matrix congruent to a Hilbert matrix. Since \(\varTheta >1/2\), it follows from (7.3) and (7.6) with \(\vartheta =\varTheta \) that we can prove this lemma by showing that the \(z_{ij}(m;\varTheta )\), for \(0\le i,j\le \zeta -1\), all tend to zero as m goes to infinity. By (7.9), it is sufficient to prove

$$\begin{aligned} \vert \pi _{\kappa ,j}(m;\varTheta )\vert \le \frac{C}{m}, \quad 0\le \kappa , j\le \zeta -1, \end{aligned}$$
(7.30)

since [26, inequality (3.16)] shows that \(\alpha _{i-\kappa }(m)\) is bounded independently of m.

Denote \(\pi _{\kappa ,j}=\pi _{\kappa ,j}(m;\varTheta )\) and \(\underline{\varvec{W}}=\underline{\varvec{W}}(\varTheta )\) for simplicity. Below we prove (7.30) for the different cases of \(\kappa \) and j, where (7.28) plays an important role in controlling the accumulation and growth as m goes to infinity.

  • If \(\kappa =j=0\), we have \( \pi _{0,0}= (\hat{\varvec{1}}^{\top }\hat{\underline{\varvec{I}}}\hat{\varvec{1}})\otimes (\varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}\varvec{q})=0\), due to (7.28).

  • If \(\kappa >0\) and \(j>0\), we have

    $$\begin{aligned} \pi _{\kappa ,j} = \frac{1}{m} \Big (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }\Big ) [\underline{\varvec{K}}(m)]^{\kappa -1} \varvec{\varPi }_{\kappa ,j}(m) [\underline{\varvec{K}}(m)]^{j-1} \Big (\hat{\varvec{1}}\otimes \varvec{q}\Big ), \end{aligned}$$
    (7.31)

    where \( \varvec{\varPi }_{\kappa ,j}(m) = \underline{\varvec{K}}(m) \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\Big ) \underline{\varvec{K}}(m)\). Substituting (7.22b) into this formula and using (7.28) to eliminate the term involving \(\hat{\underline{\varvec{L}}}^2\), after some manipulations we obtain

    $$\begin{aligned} \begin{aligned} \varvec{\varPi }_{\kappa ,j}(m) =&\; \frac{1}{m^2}\hat{\underline{\varvec{L}}}\otimes [ \varvec{q}\varvec{p}^{\top }\underline{\varvec{D}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}+ \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\varvec{q}\varvec{p}^{\top } ]\\&\; +\frac{1}{m^2}\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}. \end{aligned} \end{aligned}$$

    The row norms of all the matrices involved (including the row and column vectors) do not depend on m, except that \(\Vert \hat{\underline{\varvec{L}}}\Vert _\infty =m-1\); see the numerical check after this proof. Hence we have

    $$\begin{aligned} \Vert \varvec{\varPi }_{\kappa ,j}(m)\Vert _{\infty }\le \frac{C}{m}. \end{aligned}$$

    Noticing \(\Vert \frac{1}{m} (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top })\Vert _{\infty }\le C\) and \(\Vert \underline{\varvec{K}}(m)\Vert _{\infty }\le C\), we get from (7.31) what we want to prove.

  • If \(\kappa =0\) and \(j>0\), we have \( \pi _{0,j}=\frac{1}{m} \varvec{\varPi }_{0,j}(m)[\underline{\varvec{K}}(m)]^{j-1}(\hat{\varvec{1}}\otimes \varvec{q})\) with

    $$\begin{aligned} \varvec{\varPi }_{0,j}(m)= \Big (\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }\Big ) \Big (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\Big )\underline{\varvec{K}}(m) = \frac{1}{m}\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top } \underline{\varvec{D}}^{-1}\underline{\varvec{W}}\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{D}}, \end{aligned}$$

    by some manipulations with the help of (7.22b) and (7.28). The remaining proof follows the same lines as above and hence is omitted.

  • If \(\kappa >0\) and \(j=0\), we have \(\pi _{\kappa ,0}= \frac{1}{m}(\hat{\varvec{1}}^{\top }\otimes \varvec{p}^{\top }) [\underline{\varvec{K}}(m)]^{\kappa -1}\varvec{\varPi }_{\kappa ,0}(m)\), where

    $$\begin{aligned} \varvec{\varPi }_{\kappa ,0}(m) = \underline{\varvec{K}}(m) (\hat{\underline{\varvec{I}}}\otimes \underline{\varvec{D}}^{-1}\underline{\varvec{W}}) (\hat{\varvec{1}}\otimes \varvec{q}) = \hat{\varvec{1}}\otimes \underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{W}}\varvec{q}, \end{aligned}$$

    with the help of (7.22b) and (7.28). Then we can prove (7.30) as above.

Summing up the above conclusions, we verify (7.30) and then prove this lemma.
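
The two norm facts invoked in the proof can be confirmed directly: the maximum row-sum norm is multiplicative over Kronecker products, and \(\Vert \hat{\underline{\varvec{L}}}\Vert _\infty =m-1\) is the only m-dependent factor. A short check (ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, s = 6, 3

def row_norm(A):
    """Infinity norm: the maximum absolute row sum."""
    return np.abs(A).sum(axis=1).max()

L_hat = np.tril(np.ones((m, m)), k=-1)            # hat-L
assert np.isclose(row_norm(L_hat), m - 1)         # grows linearly in m

A, B = rng.random((m, m)), rng.random((s, s))
assert np.isclose(row_norm(np.kron(A, B)), row_norm(A) * row_norm(B))
```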

7.2.3 Proof of Propositions 3.1 and 3.2

Taking \(\vartheta =0\) in (7.27) and substituting the definitions of \(\varvec{p}^{\top }\) and \(\varvec{q}\), we have

$$\begin{aligned} \varTheta = \varvec{e}_{s-1}^{\top }\underline{\varvec{S}}^{-1}\underline{\varvec{W}}(0) (\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{C}})\varvec{e}_0. \end{aligned}$$
(7.32)

This identity will be used to prove these propositions.

Since we have assumed \(c_{\ell \kappa }\ge 0\) for all \(\ell \) and \(\kappa \) in this paper, all entries of \(\underline{\varvec{S}}^{-1}\) are non-negative, due to the simple fact

$$\begin{aligned} \underline{\varvec{S}}^{-1}=(\underline{\varvec{I}}-\underline{\varvec{C}}\,\underline{\varvec{E}})^{-1} =\underline{\varvec{I}}+\sum _{1\le i\le s-1}(\underline{\varvec{C}}\,\underline{\varvec{E}})^i. \end{aligned}$$

Hence we can conclude from (7.32) that \(\varTheta \) is a non-negative linear combination of the entries of \(\underline{\varvec{W}}(0)=\{d_{\ell \kappa }\theta _{\ell \kappa }\}_{0\le \ell ,\kappa \le s-1}\). Moreover, in the special case where all numerical flux parameters take a common value, \(\varTheta \) trivially equals that value, so the combination coefficients sum to one; hence \(\varTheta \) is a weighted average of the \(\theta _{\ell \kappa }\). This proves Proposition 3.1; a numerical illustration is given below.
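
To illustrate the proof, the sketch below (ours) uses random non-negative stand-in coefficients; they do not form an actual consistent scheme, so the weights need not sum to one here, but the non-negativity established above is visible: every \(\theta _{\ell \kappa }\) enters (7.32) with the weight \(r_{\ell }\,d_{\ell \kappa }\,v_{\kappa }\ge 0\), where \(\varvec{r}^{\top }=\varvec{e}_{s-1}^{\top }\underline{\varvec{S}}^{-1}\) and \(\varvec{v}=(\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{C}})\varvec{e}_0\).

```python
import numpy as np

rng = np.random.default_rng(4)
s = 4
E = np.diag(np.ones(s - 1), k=-1)
C = np.tril(rng.random((s, s)))          # non-negative stand-in coefficients
D = np.tril(rng.random((s, s)))
S_inv = np.linalg.inv(np.eye(s) - C @ E) # non-negative since C, E are

r = S_inv[s - 1, :]                      # e_{s-1}^T S^{-1}
v = (np.eye(s) + E @ S_inv @ C)[:, 0]    # (I + E S^{-1} C) e_0
weights = np.outer(r, v) * D             # weight of each theta_{lk}
assert (weights >= 0).all()              # the non-negativity in Proposition 3.1

theta = rng.random((s, s))               # stage-dependent flux parameters
Theta = r @ (D * theta) @ v              # (7.32) with W(0) = {d_{lk} theta_{lk}}
assert np.isclose(Theta, (weights * theta).sum())
```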

Remark 7.2

This is the only place in this paper where the condition \(c_{\ell \kappa }\ge 0\) is used.

For the LWDG method with the time marching coefficients (2.10), we have \(\underline{\varvec{S}}=\underline{\varvec{I}}\) and then get from (7.32) that

$$\begin{aligned} \varTheta =\varvec{e}_{s-1}^{\top }\underline{\varvec{W}}(0)\varvec{e}_0 =d_{s-1,0}\theta _{s-1,0}=\theta _{s-1,0}, \end{aligned}$$

since \(\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{S}}^{-1}\underline{\varvec{C}}=\underline{\varvec{I}}+\underline{\varvec{E}}\underline{\varvec{C}}=\underline{\varvec{I}}\). This completes the proof of Proposition 3.2.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xu, Y., Shu, CW. & Zhang, Q. Stability Analysis and Error Estimate of the Explicit Single-Step Time-Marching Discontinuous Galerkin Methods with Stage-Dependent Numerical Flux Parameters for a Linear Hyperbolic Equation in One Dimension. J Sci Comput 100, 64 (2024). https://doi.org/10.1007/s10915-024-02621-2
