Finite Dimensional Approximation of the Monodromy Operator of a Periodic Delay Differential Equation with Piecewise Constant Orthonormal Functions

Applied Mathematics
Vol.09 No.11(2018), Article ID:88878,23 pages
10.4236/am.2018.911086


Automatic Control Department, CINVESTAV-IPN, Mexico City, Mexico    Received: October 20, 2018; Accepted: November 27, 2018; Published: November 30, 2018

ABSTRACT

Using piecewise constant orthonormal functions, an approximation of the monodromy operator of a Linear Periodic Delay Differential Equation (PDDE) is obtained by approximating the integral equation corresponding to the PDDE as a linear operator over the space of initial conditions. This approximation allows us to consider the state space as finite dimensional resulting in a finite matrix approximation whose spectrum converges to the spectrum of the monodromy operator.

Keywords:

Monodromy Operator, Periodic Delay Differential Equations, Walsh Functions, Block Pulse Functions, Finite Rank Approximation

1. Introduction

Linear Periodic Delay Differential Equations (PDDEs) have been important in the study of problems of vibration, mechanics, astronomy, electric circuits and biology, among others; in  several examples of delay effects on mechanical systems are given, and in  and  effects of the delay in physical and biological processes are considered. Neglecting the fact that interaction between particles does not occur instantaneously is sometimes no longer possible or practical; these finite velocity interactions bring new behaviors that modify the behavior of the system significantly, see for example  and  . In the study of these Delay Differential Equations many problems arise, mainly due to the infinite dimensional nature of the system; in the case of linear PDDEs, stability depends on the spectrum of the monodromy operator, which in the non delayed case corresponds to the monodromy matrix.

Approximation methods for the monodromy operator have been proposed a number of times:  and  make use of pseudospectral collocation methods to approximate the monodromy operator. The well known method of semidiscretization  has also been used to determine the stability of a PDDE. In  , using a Walsh approximation method from  , a set of approximated solutions of a PDDE was used to construct an approximation of the monodromy operator by numerical integration.

In this work, the main contribution is an approximation of the monodromy operator of the PDDE by a linear Equation (35) of the form $x={\mathcal{U}}_{k}{x}_{0}$, where the directly obtained matrix ${\mathcal{U}}_{k}$ corresponds to the approximated monodromy operator, with no need of approximating solutions or of numerical integration. Stability of the PDDE can then be determined from the spectrum of ${\mathcal{U}}_{k}$ without solving any equation. Convergence of ${\mathcal{U}}_{k}$ and of its spectrum is stated in Theorems 10, 14 and 15. This approximation is made by projecting the integral equation corresponding to the PDDE onto a finite dimensional subspace spanned by finitely many piecewise constant functions. The functions used must belong to a complete set of piecewise constant orthonormal functions with discontinuities only at dyadic rationals, such as the Haar, Walsh or Block Pulse Functions (BPF). The theoretical framework of this paper is based on Walsh functions, since most results are stated for these functions. Once obtained, the finite dimensional approximation of the monodromy operator is restated in terms of BPF to reduce the computational cost of the stability analysis.

The main goal of this paper is to provide a computationally light method, with straightforward implementation, to approximate the monodromy operator of a delay differential equation, in order to facilitate the computation of stability diagrams used to study the behavior of the equation with respect to changes in its parameters.

2. Linear Periodic Delay Differential Equations

Consider the linear PDDE:

$\stackrel{˙}{x}\left(t\right)=A\left(t\right)x\left(t\right)+B\left(t\right)x\left(t-\tau \right),$ (1)

where $x\left(t\right)\in {ℝ}^{n}$, $t,\tau \in {ℝ}_{+}$, $0<\tau \le \omega$, $A\left(t\right),B\left(t\right)$ are $n×n$ matrices of ω-periodic functions continuous on $\left[0,1\right]$. Denote as $\mathcal{C}$ the space of continuous functions from $\left[-\tau ,0\right]$ into ${ℝ}^{n}$; this space is a Banach space with the norm $‖\phi ‖=\underset{-\tau \le \theta \le 0}{max}‖\phi \left(\theta \right)‖$  .

A solution of (1) with initial condition $\phi \in \mathcal{C}$ at ${t}_{0}$ is understood as a mapping $\xi :\left[-\tau ,0\right]×ℝ×\mathcal{C}\to {ℝ}^{n}$, such that $\xi \left(\theta ,{t}_{0},\phi \right)=\phi \left(\theta \right)$ for $-\tau \le \theta \le 0$ and $\xi \left(\theta ,t,\phi \right)=x\left(t+\theta \right)$ for $-\tau \le \theta \le 0$ and $t+\theta \ge {t}_{0}$, where $x\left(t\right)$ satisfies (1) for $t\ge {t}_{0}$ with $x\left(\theta \right)=\phi \left(\theta \right)$ for $\theta \in \left[-\tau ,0\right]$. At time t a solution will be an element of the space $\mathcal{C}$. Explicitly we will have:

$\xi \left(\theta ,t,\xi \left(\theta ,{t}_{0},\phi \right)\right)=T\left({t}_{0},{t}_{0}+t\right)\xi \left(\theta ,{t}_{0},\phi \right),$ (2)

with $\theta \in \left[-\tau ,0\right]$. The operator T is called the solution mapping and is analogous to the state transition matrix of the undelayed case.

2.1. Monodromy Operator

Taking into account the periodic nature of (1) it is relevant to consider (2) when $t={t}_{0}+\omega$, in this case we will denote the solution mapping as $U\left({t}_{0}\right)\triangleq T\left({t}_{0},{t}_{0}+\omega \right)$.

Next, some properties that are a consequence of $U\left(\cdot \right)$ being completely continuous are listed  :

• The spectrum $\sigma \left(U\left({t}_{0}\right)\right)$ is a countable compact set in the complex plane.

• $0\in \sigma \left(U\left({t}_{0}\right)\right)$, and if $\lambda \in \sigma \left(U\left({t}_{0}\right)\right)$, $\lambda \ne 0$, then $U\left({t}_{0}\right)\phi =\lambda \phi$ has a nonzero solution $\phi \in \mathcal{C}$.

• If the cardinality of the set $\sigma \left(U\left({t}_{0}\right)\right)$ is infinite then the only limit point of $\sigma \left(U\left({t}_{0}\right)\right)$ is 0.

• The cardinality of $\sigma \left(U\left({t}_{0}\right)\right)$ outside any disc ${R}_{r}$ of radius r, ${R}_{r}=\left\{z\in ℂ,|z|\le r\right\}$ is finite.

• $\sigma \left(U\left({t}_{0}\right)\right)=\sigma \left(U\left({t}_{1}\right)\right)$ $\forall {t}_{0},{t}_{1}\in ℝ$.

$U\left(0\right)$, henceforth denoted simply as U, will be called the monodromy operator of (1), and the elements $\lambda \in \sigma \left(U\right)$, $\lambda \ne 0$, will be called characteristic multipliers.

2.2. Stability

It was shown  that Floquet theory is valid for PDDEs in a certain sense, and that stability of (1) will be related to the spectrum of U. The following theorem from  establishes the conditions for the stability of (1):

Theorem 1. If the characteristic multipliers are situated inside the unit circle $\left\{\lambda ,|\lambda |<1\right\}$, then the zero solution of the system is uniformly asymptotically stable. If the multipliers of the system are inside the closed unit circle, and if the multipliers situated on the unit circumference correspond to simple elementary divisors, then the zero solution is uniformly stable.

Remark 1. If $\tau >\omega$ all the above statements are valid for ${U}^{r}$, where $r\omega \ge \tau$ and $r\in ℤ$  .

Remark 2. Note that Theorem 1 extends the properties of an undelayed case   .

3. Walsh Functions

3.1. Definition

Walsh functions are a set of piecewise constant complete orthonormal functions introduced in  and defined on the interval $\left[0,1\right)$, although they are easily translated to any other interval. The Walsh functions may be formally defined in many ways  ; with the use of Rademacher functions, the l-th Walsh function may be defined as  :

${w}_{\mathcal{l}}\left(t\right)=\underset{i=0}{\overset{k}{\prod }}{\left[\text{sign}\left(\mathrm{sin}{2}^{i+1}\text{π}t\right)\right]}^{{\epsilon }_{i}},$ (3)

where the ${\epsilon }_{i}$ are the coefficients of the unique binary expansion of $\mathcal{l}$, namely

$\mathcal{l}=\underset{i=0}{\overset{k}{\sum }}\text{ }\text{ }{\epsilon }_{i}{2}^{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{with}\text{\hspace{0.17em}}{\epsilon }_{i}\in \left\{0,1\right\}$ (4)

There are as well many ways of ordering the Walsh functions, in this paper we will use the so called Dyadic ordering  . Figure 1 shows the first eight Walsh functions in this ordering.

To a function $f\in {L}_{\left[0,1\right]}^{2}$ will correspond the Walsh approximation:

$f\left(t\right)~\underset{i=0}{\overset{n}{\sum }}\text{ }{h}_{i}{w}_{i}\left(t\right),$ (5)

where ${h}_{i}={\int }_{0}^{1}\text{ }f\left(t\right){w}_{i}\left(t\right)\text{d}t.$ The right side of (5) will converge in norm to f for any $f\in {L}_{\left[0,1\right]}^{2}$ and, if $n={2}^{k}-1$, it will converge uniformly for any continuous function that has, at most, jump discontinuities at dyadic rationals (numbers of the form $\frac{a}{{2}^{b}},a\in ℕ,b\in ℕ$ )  . When $n=\infty$, (5) will be called a Walsh expansion.
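As a numerical illustration of (5), the sketch below builds the first ${2}^{k}$ Walsh functions in dyadic ordering as a matrix of block values (bit-reversed Sylvester Hadamard matrix) and recovers the Walsh coefficients of $f\left(t\right)=t$; the helper names `walsh_matrix` and `walsh_coeffs` are ours, not part of the original formulation:

```python
import numpy as np

def walsh_matrix(k):
    # Sylvester-ordered Hadamard matrix of order 2**k, built by Kronecker doubling
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    # dyadic (Paley) ordering: permute rows by bit-reversing their indices
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

def walsh_coeffs(f, k):
    # h_i = integral of f * w_i over [0,1): midpoint rule on the 2**k dyadic
    # blocks (exact whenever f is linear on each block, as for f(t) = t)
    p = 2 ** k
    mid = (np.arange(p) + 0.5) / p
    return walsh_matrix(k) @ f(mid) / p
```

For $f\left(t\right)=t$ and $k=2$ this yields the coefficient vector $\left[1/2,-1/4,-1/8,0\right]$.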

3.2. Properties of Walsh Functions

We will define a Walsh matrix $W\left[k\right]$ as the ${2}^{k}×{2}^{k}$ matrix consisting of the discretization values of the Walsh functions over the interval $\left[0,1\right)$.

For example, for $k=2$ we will have:

$W\left[4\right]=\left[\begin{array}{rrrr}\hfill 1& \hfill 1& \hfill 1& \hfill 1\\ \hfill 1& \hfill 1& \hfill -1& \hfill -1\\ \hfill 1& \hfill -1& \hfill 1& \hfill -1\\ \hfill 1& \hfill -1& \hfill -1& \hfill 1\end{array}\right].$

Figure 1. Walsh functions in dyadic ordering.

We can also define the Delayed Walsh Matrix $W\left[-m,k\right]$ as a Walsh matrix of order ${2}^{k}$ shifted m columns to the right, with the first m columns being zeros. In the same manner we define a Forwarded Walsh Matrix $W\left[+m,k\right]$, but shifted to the left:

$W\left[-2,4\right]=\left[\begin{array}{rrrr}\hfill 0& \hfill 0& \hfill 1& \hfill 1\\ \hfill 0& \hfill 0& \hfill 1& \hfill 1\\ \hfill 0& \hfill 0& \hfill 1& \hfill -1\\ \hfill 0& \hfill 0& \hfill 1& \hfill -1\end{array}\right];\text{ }W\left[+2,4\right]=\left[\begin{array}{rrrr}\hfill 1& \hfill 1& \hfill 0& \hfill 0\\ \hfill -1& \hfill -1& \hfill 0& \hfill 0\\ \hfill 1& \hfill -1& \hfill 0& \hfill 0\\ \hfill -1& \hfill 1& \hfill 0& \hfill 0\end{array}\right]$ (6)
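The shifted matrices in (6) are obtained from the Walsh matrix by moving columns and zero-filling; a short sketch (illustrative helper names; in the notation of (6) the argument 4 denotes the order ${2}^{k}$ with $k=2$):

```python
import numpy as np

def walsh_matrix(k):
    # Walsh matrix of order 2**k in dyadic ordering (bit-reversed Sylvester)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

def delayed_walsh(m, k):
    # W[-m,k]: columns of W[k] shifted m places to the right, first m columns zero
    W, p = walsh_matrix(k), 2 ** k
    Wd = np.zeros((p, p))
    Wd[:, m:] = W[:, :p - m]
    return Wd

def forwarded_walsh(m, k):
    # W[+m,k]: columns of W[k] shifted m places to the left, last m columns zero
    W, p = walsh_matrix(k), 2 ** k
    Wf = np.zeros((p, p))
    Wf[:, :p - m] = W[:, m:]
    return Wf
```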

We define the vector consisting on the first ${2}^{k}$ Walsh functions as:

${\stackrel{¯}{w}}_{k}\left(t\right)={\left[{w}_{0}\left(t\right)\text{\hspace{0.17em}}{w}_{1}\left(t\right)\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}{w}_{{2}^{k}-1}\left(t\right)\right]}^{\text{T}}.$ (7)

Since the order of the approximation will be clear from context, we write simply $\stackrel{¯}{w}\left(t\right)$. We define as well the dyadic sum $\oplus$ between two nonnegative integers as:

$q\oplus r=\underset{i=0}{\overset{k}{\sum }}|{q}_{i}-{r}_{i}|{2}^{i},$ (8)

where ${q}_{i}$ and ${r}_{i}$ are respectively the coefficients of the binary expansions of q and r as in (4). To a vector $\sigma ={\left[{\sigma }_{0}\text{\hspace{0.17em}}{\sigma }_{1}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}{\sigma }_{{2}^{k}-1}\right]}^{\text{T}}$ we associate a ${2}^{k}×{2}^{k}$ symmetric matrix $\Lambda \left(\sigma \right)$ whose i, jth entry will be given by ${\sigma }_{\left(i-1\right)\oplus \left(j-1\right)}$. If for example $\sigma ={\left[{\sigma }_{0}\text{\hspace{0.17em}}{\sigma }_{1}\text{\hspace{0.17em}}{\sigma }_{2}\text{\hspace{0.17em}}{\sigma }_{3}\right]}^{\text{T}}$, then:

$\Lambda \left(\sigma \right)=\left[\begin{array}{cccc}{\sigma }_{0}& {\sigma }_{1}& {\sigma }_{2}& {\sigma }_{3}\\ {\sigma }_{1}& {\sigma }_{0}& {\sigma }_{3}& {\sigma }_{2}\\ {\sigma }_{2}& {\sigma }_{3}& {\sigma }_{0}& {\sigma }_{1}\\ {\sigma }_{3}& {\sigma }_{2}& {\sigma }_{1}& {\sigma }_{0}\end{array}\right].$
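Since the coefficients in (4) are binary digits, the dyadic sum (8) is exactly the bitwise XOR of the two integers, so $\Lambda \left(\sigma \right)$ can be assembled directly (a sketch; `dyadic_matrix` is an illustrative name):

```python
import numpy as np

def dyadic_matrix(sigma):
    # Lambda(sigma): entry (i, j) with 0-based indices is sigma[i XOR j];
    # with 1-based indices this is sigma_{(i-1)(+)(j-1)} as defined above
    n = len(sigma)
    return np.array([[sigma[i ^ j] for j in range(n)] for i in range(n)])
```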

We make use of the following Lemma:

Lemma 2.  Let $\sigma \in {ℝ}^{{2}^{k}}$, then:

$\stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right)\sigma =\Lambda \left(\sigma \right)\stackrel{¯}{w}\left(t\right).$ (9)

The integral of a Walsh vector can be approximated by a Walsh expansion of the form:

${\int }_{0}^{t}\text{ }\stackrel{¯}{w}\left(s\right)\text{d}s=P\stackrel{¯}{w}\left(t\right),$ (10)

where P is given by  :

$P={P}_{{2}^{k}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{{2}^{j}}=\left[\begin{array}{cc}{P}_{{2}^{j-1}}& -\frac{1}{{2}^{j+1}}{I}_{{2}^{j-1}}\\ \frac{1}{{2}^{j+1}}{I}_{{2}^{j-1}}& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{P}_{1}=\left[\frac{1}{2}\right].$ (11)

The approximated integral converges to the exact integral uniformly  .

To a function matrix $A\left(t\right)$ we associate the Walsh approximation $\stackrel{^}{A}\left(t\right)$ :

$\stackrel{^}{A}\left(t\right)=\alpha \Omega \left(t\right),$ (12)

where

$\alpha =\left[\begin{array}{cccc}{\stackrel{¯}{\alpha }}_{11}^{\text{T}}& {\stackrel{¯}{\alpha }}_{12}^{\text{T}}& \cdots & {\stackrel{¯}{\alpha }}_{1n}^{\text{T}}\\ {\stackrel{¯}{\alpha }}_{21}^{\text{T}}& {\stackrel{¯}{\alpha }}_{22}^{\text{T}}& \cdots & {\stackrel{¯}{\alpha }}_{2n}^{\text{T}}\\ ⋮& ⋮& \ddots & ⋮\\ {\stackrel{¯}{\alpha }}_{n1}^{\text{T}}& {\stackrel{¯}{\alpha }}_{n2}^{\text{T}}& \cdots & {\stackrel{¯}{\alpha }}_{nn}^{\text{T}}\end{array}\right],\text{\hspace{0.17em}}\Omega \left(t\right)=\left[\begin{array}{cccc}\stackrel{¯}{w}\left(t\right)& 0& \cdots & 0\\ 0& \stackrel{¯}{w}\left(t\right)& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & \stackrel{¯}{w}\left(t\right)\end{array}\right],$ (13)

and each ${\stackrel{¯}{\alpha }}_{i,j}$ is the vector of coefficients of the Walsh approximation of each entry of $A\left(t\right)$.

Let $A\left(t\right)$ be an $n×n$ function matrix with Walsh approximation $\stackrel{^}{A}\left(t\right)$ and $f\left(t\right)$ a function vector with Walsh approximation $\stackrel{^}{f}\left(t\right)$, whose elements are of the form ${f}_{i}\left(t\right)={h}_{i}^{\text{T}}\stackrel{¯}{w}\left(t\right)$, $i=1,\cdots ,n$, where ${h}_{i}\in {ℝ}^{{2}^{k}}$ is the vector of Walsh coefficients of the approximation of each element of f. Define $H={\left[\begin{array}{ccc}{h}_{1}^{\text{T}}& \cdots & {h}_{n}^{\text{T}}\end{array}\right]}^{\text{T}}$, then from (9) we have:

$\begin{array}{c}\stackrel{^}{A}\left(t\right)\stackrel{^}{f}\left(t\right)=\alpha \Omega \left(t\right)\left[\begin{array}{c}{\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{1}\\ {\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{2}\\ ⋮\\ {\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{n}\end{array}\right]=\alpha \left[\begin{array}{c}\stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{1}\\ \stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{2}\\ ⋮\\ \stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){h}_{n}\end{array}\right]\\ =\left[\begin{array}{c}\underset{r=1}{\overset{n}{\sum }}{\left({h}_{r}^{\text{T}}\stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){\stackrel{¯}{\alpha }}_{1,r}\right)}^{\text{T}}\\ \underset{r=1}{\overset{n}{\sum }}{\left({h}_{r}^{\text{T}}\stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){\stackrel{¯}{\alpha }}_{2,r}\right)}^{\text{T}}\\ ⋮\\ \underset{r=1}{\overset{n}{\sum }}{\left({h}_{r}^{\text{T}}\stackrel{¯}{w}\left(t\right){\stackrel{¯}{w}}^{\text{T}}\left(t\right){\stackrel{¯}{\alpha }}_{n,r}\right)}^{\text{T}}\end{array}\right]=\left[\begin{array}{c}\underset{r=1}{\overset{n}{\sum }}\text{ }{\stackrel{¯}{w}}^{\text{T}}\left(t\right)\Lambda \left({\stackrel{¯}{\alpha }}_{1,r}\right){h}_{r}\\ \underset{r=1}{\overset{n}{\sum }}\text{ }{\stackrel{¯}{w}}^{\text{T}}\left(t\right)\Lambda \left({\stackrel{¯}{\alpha }}_{2,r}\right){h}_{r}\\ ⋮\\ \underset{r=1}{\overset{n}{\sum }}\text{ }{\stackrel{¯}{w}}^{\text{T}}\left(t\right)\Lambda \left({\stackrel{¯}{\alpha }}_{n,r}\right){h}_{r}\end{array}\right]\\ ={\Omega }^{\text{T}}\left(t\right)\stackrel{¯}{\Lambda }\left(\alpha \right)H,\end{array}$ (14)

where:

$\stackrel{¯}{\Lambda }\left(\alpha \right)=\left[\begin{array}{cccc}\Lambda \left({\stackrel{¯}{\alpha }}_{11}\right)& \Lambda \left({\stackrel{¯}{\alpha }}_{12}\right)& \cdots & \Lambda \left({\stackrel{¯}{\alpha }}_{1n}\right)\\ \Lambda \left({\stackrel{¯}{\alpha }}_{21}\right)& \Lambda \left({\stackrel{¯}{\alpha }}_{22}\right)& \cdots & \Lambda \left({\stackrel{¯}{\alpha }}_{2n}\right)\\ ⋮& ⋮& \ddots & ⋮\\ \Lambda \left({\stackrel{¯}{\alpha }}_{n1}\right)& \Lambda \left({\stackrel{¯}{\alpha }}_{n2}\right)& \cdots & \Lambda \left({\stackrel{¯}{\alpha }}_{nn}\right)\end{array}\right].$ (15)

3.3. Approximation Error

Regarding the error of Walsh approximation we have:

Lemma 3.  If f satisfies a Lipschitz condition, then:

${E}_{k}={‖\underset{i=0}{\overset{{2}^{k}-1}{\sum }}{c}_{i}{w}_{i}\left(t\right)-f\left(t\right)‖}_{\infty }\le C{2}^{-k},$ (16)

for some constant C.

3.4. Block Pulse Functions

The set of Block Pulse Functions of order p is defined on the interval $\left[0,1\right)$ as the set $\left\{{\psi }_{0}\left(t\right),\cdots ,{\psi }_{p-1}\left(t\right)\right\}$, where  :

${\psi }_{i}\left(t\right)=\left\{\begin{array}{l}1,\text{ }t\in \left[\frac{i}{p},\frac{i+1}{p}\right)\\ 0,\text{ }\text{otherwise}\end{array}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,\cdots ,p-1.$ (17)

Block Pulse Functions, shown in Figure 2, are orthogonal, easily normalizable  and when $p\to \infty$ they form a complete set  .

Figure 2. Block pulse functions.

We restrict ourselves to the case $p={2}^{k}$ for k integer. Walsh functions and Block Pulse Functions are related by a one-to-one correspondence, meaning that there exists a unique bijective linear transformation that maps the first ${2}^{k}$ Walsh functions onto the set of Block Pulse Functions of order ${2}^{k}$  . The existence of this transformation and the completeness of the Block Pulse Functions ensure that the properties of Walsh functions are inherited by Block Pulse Functions. In particular we have the following:
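The bijection can be made concrete: since ${w}_{i}\left(t\right)={\sum }_{j}{W}_{ij}{\psi }_{j}\left(t\right)$, a Walsh coefficient vector σ corresponds to the BPF coefficient vector ${W}^{\text{T}}\sigma$, which is simply the vector of block values of the approximation. A sketch (illustrative helper name; the Walsh coefficients of $f\left(t\right)=t$ for $k=2$ are taken from (5)):

```python
import numpy as np

def walsh_matrix(k):
    # Walsh matrix of order 2**k in dyadic ordering (bit-reversed Sylvester)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

W = walsh_matrix(2)
sigma = np.array([0.5, -0.25, -0.125, 0.0])  # Walsh coefficients of f(t) = t
bpf = W.T @ sigma                            # BPF coefficients: the block values
```

The resulting BPF coefficients are the midpoint values of $f\left(t\right)=t$ on the four dyadic blocks.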

Lemma 4.  Matrix P in (10) is similar to the upper triangular matrix ${P}_{*}\triangleq W\left[k\right]PW\left[k\right]=\frac{1}{2}{I}_{{2}^{k}}+Q+{Q}^{2}+\cdots +{Q}^{{2}^{k}-1}$, where Q is the nilpotent matrix with ones on the first superdiagonal and zeros everywhere else:

${P}_{*}=\left[\begin{array}{cccc}\frac{1}{2}& 1& \cdots & 1\\ 0& \frac{1}{2}& \cdots & 1\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & \frac{1}{2}\end{array}\right].$ (18)
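Lemma 4 is easy to check numerically. The sketch below builds P from the recursion in (11) and verifies that $W\left[k\right]PW\left[k\right]$ is the triangular matrix (18) (illustrative helper names):

```python
import numpy as np

def walsh_matrix(k):
    # Walsh matrix of order 2**k in dyadic ordering (bit-reversed Sylvester)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

def walsh_P(k):
    # operational matrix of integration for 2**k Walsh functions, built
    # recursively: P(2m) = [[P(m), -I/(4m)], [I/(4m), 0]], P(1) = [1/2]
    P = np.array([[0.5]])
    for j in range(k):
        m = 2 ** j
        P = np.block([[P, -np.eye(m) / (4 * m)],
                      [np.eye(m) / (4 * m), np.zeros((m, m))]])
    return P

k = 3
p = 2 ** k
W = walsh_matrix(k)
P_star = W @ walsh_P(k) @ W
expected = 0.5 * np.eye(p) + np.triu(np.ones((p, p)), 1)  # matrix (18)
```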

Lemma 5.  Let $\stackrel{^}{f}\left(t\right)={\sigma }^{\text{T}}\stackrel{¯}{w}\left(t\right)$ be the Walsh approximation of the function $f\left(t\right)$, with $\sigma \in {ℝ}^{{2}^{k}}$ being the vector of coefficients of the Walsh approximation. Then $\Lambda \left(\sigma \right)$ is similar to a diagonal matrix:

$\begin{array}{c}{\Lambda }_{*}\left(\sigma \right)={W}^{-1}\left[k\right]\Lambda \left(\sigma \right)W\left[k\right]\\ =\left[\begin{array}{cccc}\stackrel{^}{f}\left(\frac{0}{{2}^{k}}\right)& 0& \cdots & 0\\ 0& \stackrel{^}{f}\left(\frac{1}{{2}^{k}}\right)& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & \stackrel{^}{f}\left(\frac{{2}^{k}-1}{{2}^{k}}\right)\end{array}\right].\end{array}$ (19)
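The similarity in (19) can also be checked numerically: conjugating $\Lambda \left(\sigma \right)$ by the Walsh matrix diagonalizes it into the block values of $\stackrel{^}{f}$. A sketch with the Walsh coefficients of $f\left(t\right)=t$ for $k=2$ (illustrative helper names):

```python
import numpy as np

def walsh_matrix(k):
    # Walsh matrix of order 2**k in dyadic ordering (bit-reversed Sylvester)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

def dyadic_matrix(sigma):
    # Lambda(sigma) with entry (i, j) = sigma[i XOR j] (0-based indices)
    n = len(sigma)
    return np.array([[sigma[i ^ j] for j in range(n)] for i in range(n)])

W = walsh_matrix(2)
sigma = np.array([0.5, -0.25, -0.125, 0.0])  # Walsh coefficients of f(t) = t
Lam_star = np.linalg.inv(W) @ dyadic_matrix(sigma) @ W
# the diagonal holds the values of f_hat on the 2**k dyadic subintervals
```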

4. Approximation of the Monodromy Operator

Without loss of generality we assume $\omega =1$. We approximate the monodromy operator by projecting (1) onto a finite dimensional subspace of ${L}^{2}\left[0,1\right)$ formed by the span of ${2}^{k}$ piecewise constant orthonormal functions. We will assume the orthonormal functions to be Walsh functions; the analysis can be carried over to the case of Block Pulse Functions or Haar functions by means of similarity transformations. We also restrict ourselves to the case of commensurable delays, that is, $m\tau =\omega$ for some $m\in ℕ$.

Integrating (1) from 0 to t we will have:

$x\left(t\right)-x\left(0\right)={\int }_{0}^{t}A\left(s\right)x\left(s\right)\text{d}s+{\int }_{0}^{t}B\left(s\right)x\left(s-\tau \right)\text{d}s,$ (20)

with $x\left(\theta \right)=\phi \left(\theta \right)$ for $\theta \in \left[-\tau ,0\right]$. A solution of (20) will correspond to a solution of (1)  . Let ${\pi }_{k}$ denote the projection mapping that takes the Walsh expansion $f\left(t\right)={\sum }_{\mathcal{l}=0}^{\infty }\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$ to the Walsh approximation $\stackrel{^}{f}\left(t\right)={\sum }_{\mathcal{l}=0}^{{2}^{k}-1}\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$. Along with (20) we introduce its projection onto the space $M=span{\left\{{w}_{0},{w}_{1},\cdots ,{w}_{{2}^{k}-1}\right\}}^{n}$ :

$\stackrel{^}{x}\left(t\right)-x\left(0\right)={\pi }_{k}{\int }_{0}^{t}\stackrel{^}{A}\left(s\right)\stackrel{^}{x}\left(s\right)\text{d}s+{\pi }_{k}{\int }_{0}^{t}\stackrel{^}{B}\left(s\right)\stackrel{^}{x}\left(s-\tau \right)\text{d}s,$ (21)

where $\stackrel{^}{x}\left(t\right)\in M$ and $\phi \left(0\right)\in M$ since it is a constant. $\stackrel{^}{A}\left(t\right)$ and $\stackrel{^}{B}\left(t\right)$ correspond to ${\pi }_{k}A\left(t\right)$ and ${\pi }_{k}B\left(t\right)$, that is, the approximations of $A\left(t\right)$ and $B\left(t\right)$, respectively, as in (12). $\stackrel{^}{x}\left(\theta \right)$ is still not defined for $\theta \in \left[-\tau ,0\right)$; we cannot yet define the projection of the initial condition, since it is defined on a different domain than the domain of definition of the Walsh functions. For this we have:

Proposition 6. The value of the Walsh approximation of order ${2}^{k}$ at an interval $\left[\frac{i}{{2}^{k}},\frac{i+1}{{2}^{k}}\right)$, $i=0,\cdots ,{2}^{k}-1$, depends only on the value of the function at that same interval.

Proof. It follows immediately from Theorem 2.1.3 in  .

Thus we define the projection ${\pi }_{k}\phi \left(\theta \right)={\sum }_{\mathcal{l}=0}^{{2}^{k}-1}{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(\theta +\omega \right)\triangleq \stackrel{^}{\phi }\left(\theta \right)$ as the Walsh approximation of order ${2}^{k}$ of an integrable function ${\phi }^{*}\left(\theta \right)$ defined on $\left[-\omega ,0\right)$ that is equal to $\phi \left(\theta \right)$ on $\left[-\tau ,0\right)$ and 0 everywhere else, and we set $\stackrel{^}{x}\left(\theta \right)=\stackrel{^}{\phi }\left(\theta \right)$ for $\theta \in \left[-\tau ,0\right]$. Since the Walsh functions were not defined at $t=1$, we simply set $\stackrel{^}{\phi }\left(0\right)=\stackrel{^}{\phi }\left(-\frac{1}{{2}^{k}}\right)$.

We split the second integral in (20) as:

$\begin{array}{l}{\int }_{0}^{t}B\left(s\right)x\left(s-\tau \right)\text{d}s\\ ={\int }_{0}^{\tau }B\left(s\right)x\left(s-\tau \right)\text{d}s+{\int }_{\tau }^{t}B\left(s\right)x\left(s-\tau \right)\text{d}s\\ ={\int }_{0}^{\tau }B\left(s\right)\phi \left(s-\tau \right)\text{d}s+{\int }_{\tau }^{t}B\left(s\right)x\left(s-\tau \right)\text{d}s\\ ={\int }_{0}^{t}\left(1-S\left(s-\tau \right)\right)B\left(s\right)\phi \left(s-\tau \right)\text{d}s+{\int }_{0}^{t}S\left(s-\tau \right)B\left(s\right)x\left(s-\tau \right)\text{d}s,\end{array}$ (22)

where $S\left(t\right)$ is the unit step function.

We make use of the following Lemma:

Lemma 7.  Any function $f\left(t\right)$ constant on the intervals of the form $\left[\frac{i}{{2}^{k}},\frac{i+1}{{2}^{k}}\right)$, $0\le i\le {2}^{k}-1$ can be represented in the form:

$f\left(t\right)=\underset{\mathcal{l}=0}{\overset{{2}^{k}-1}{\sum }}{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right),$ (23)

i.e. the non zero coefficients of the Walsh expansion of $f\left(t\right)$ have indices no greater than ${2}^{k}-1$. Moreover, this representation is unique.

From the linearity of ${\pi }_{k}$ and from Lemma 7 we have that the second integral in (21) can be split as in (22), with this being consistent with the projection:

$\begin{array}{l}{\pi }_{k}{\int }_{0}^{t}\text{ }\stackrel{^}{B}\left(s\right)\stackrel{^}{x}\left(s-\tau \right)\text{d}s\\ ={\pi }_{k}{\int }_{0}^{t}\left(1-S\left(s-\tau \right)\right)\stackrel{^}{B}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)\text{d}s+{\pi }_{k}{\int }_{0}^{t}\text{ }S\left(s-\tau \right)\stackrel{^}{B}\left(s\right)\stackrel{^}{x}\left(s-\tau \right)\text{d}s,\end{array}$ (24)

Since $\stackrel{^}{x}\left(t\right)\in M$ we have that $\stackrel{^}{x}\left(t\right)={\left[{h}_{1}^{\text{T}}\stackrel{¯}{w}\left(t\right),\cdots ,{h}_{n}^{\text{T}}\stackrel{¯}{w}\left(t\right)\right]}^{\text{T}}$; we also have that $\stackrel{^}{\phi }\left(\theta \right)={\left[{c}_{1}^{\text{T}}\stackrel{¯}{w}\left(\theta +\omega \right),\cdots ,{c}_{n}^{\text{T}}\stackrel{¯}{w}\left(\theta +\omega \right)\right]}^{\text{T}}$, with each ${h}_{i}$ and ${c}_{i}$ being vectors of Walsh coefficients. Defining $H={\left[{h}_{1}^{\text{T}},\cdots ,{h}_{n}^{\text{T}}\right]}^{\text{T}}$, $\Phi ={\left[{c}_{1}^{\text{T}},\cdots ,{c}_{n}^{\text{T}}\right]}^{\text{T}}$, and recalling (7) and (12), we can write (21) as:

$\begin{array}{l}{\Omega }^{\text{T}}\left(t\right)H-\stackrel{^}{x}\left(0\right)\\ ={\pi }_{k}{\int }_{0}^{t}\text{ }\alpha \text{ }\Omega \left(s\right){\Omega }^{\text{T}}\left(s\right)H\text{d}s+{\pi }_{k}{\int }_{0}^{t}\text{ }\beta \text{ }\Omega \left(s\right)S\left(s-\tau \right){\Omega }^{\text{T}}\left(s-\tau \right)H\text{d}s\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+{\pi }_{k}{\int }_{0}^{t}\text{ }\beta \text{ }\Omega \left(s\right)\left(1-S\left(s-\tau \right)\right){\Omega }^{\text{T}}\left(s-\tau +\omega \right)\Phi \text{d}s,\end{array}$ (25)

From Lemma 7 we have that both $S\left(t-\tau \right){\Omega }^{\text{T}}\left(t-\tau \right)$ and $\left(1-S\left(t-\tau \right)\right){\Omega }^{\text{T}}\left(t-\tau +\omega \right)$ can be uniquely represented in terms of $\Omega \left(t\right)$ :

$S\left(t-\tau \right){\Omega }^{\text{T}}\left(t-\tau \right)={\Omega }^{\text{T}}\left(t\right){\stackrel{¯}{W}}_{D},$

$\left(1-S\left(t-\tau \right)\right){\Omega }^{\text{T}}\left(t-\tau +\omega \right)={\Omega }^{\text{T}}\left(t\right){\stackrel{¯}{W}}_{F},$ (26)

${\stackrel{¯}{W}}_{F}$ and ${\stackrel{¯}{W}}_{D}$ are ${2}^{k}n×{2}^{k}n$ block diagonal matrices, whose diagonal entries are given respectively by ${W}_{F}^{\text{T}}$ and ${W}_{D}^{\text{T}}$, which satisfy ${W}_{F}\stackrel{¯}{w}\left(t\right)=\left(1-S\left(t-\tau \right)\right)\stackrel{¯}{w}\left(t-\tau +\omega \right)$ and ${W}_{D}\stackrel{¯}{w}\left(t\right)=S\left(t-\tau \right)\stackrel{¯}{w}\left(t-\tau \right)$. When approximating by Walsh functions, matrix ${W}_{D}$ is given by ${W}_{D}=\frac{1}{{2}^{k}}\left(W\left[-m,k\right]W\left[k\right]\right)$ and it is called the Walsh Shift Operator  . ${W}_{F}$ is given in an analogous way as ${W}_{F}=\frac{1}{{2}^{k}}\left(W\left[+\left({2}^{k}-m\right),k\right]W\left[k\right]\right)$. The term $\stackrel{^}{x}\left(0\right)$ will also have a unique representation:
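The action of ${W}_{D}$ can be checked numerically: in the coefficient-vector convention (26), ${W}_{D}^{\text{T}}$ must map the Walsh coefficients of a function $f\left(t\right)$ to those of $S\left(t-\tau \right)f\left(t-\tau \right)$. A sketch with $f\left(t\right)=t$, $k=2$ and a delay of $m=2$ subintervals, i.e. $\tau =1/2$ (illustrative helper names):

```python
import numpy as np

def walsh_matrix(k):
    # Walsh matrix of order 2**k in dyadic ordering (bit-reversed Sylvester)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    rev = [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(2 ** k)]
    return H[rev]

k, m = 2, 2                                # 2**k blocks, delay of m blocks
p = 2 ** k
W = walsh_matrix(k)
W_delayed = np.zeros((p, p))
W_delayed[:, m:] = W[:, :p - m]            # W[-m,k]
W_D = W_delayed @ W / p                    # Walsh Shift Operator (1/2^k) W[-m,k] W[k]
sigma = np.array([0.5, -0.25, -0.125, 0.0])  # Walsh coefficients of f(t) = t
shifted_vals = W.T @ (W_D.T @ sigma)         # block values of S(t-tau) f(t-tau)
```

The resulting block values vanish on $\left[0,1/2\right)$ and reproduce the midpoint values of $t-1/2$ afterwards.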

$\stackrel{^}{x}\left(0\right)={\Omega }^{\text{T}}\left(t\right){\stackrel{¯}{W}}_{C}\Phi ,$ (27)

since $\stackrel{^}{x}\left(0\right)=\stackrel{^}{\phi }\left(0\right)$. ${\stackrel{¯}{W}}_{C}$ will again be a ${2}^{k}n×{2}^{k}n$ block diagonal matrix, whose diagonal entries are given by ${W}_{C}^{\text{T}}$. In the case of Walsh functions, matrix ${W}_{C}$ evaluates $\stackrel{^}{\phi }\left(t\right)$ at time $\frac{{2}^{k}-1}{{2}^{k}}$ and assigns its value to the coefficient of the constant Walsh function ${w}_{0}\left(t\right)$, hence ${W}_{C}^{\text{T}}={\left[\stackrel{¯}{w}\left(\frac{{2}^{k}-1}{{2}^{k}}\right),0,\cdots ,0\right]}^{\text{T}}$.

From (14), (27) and (26) we have that we can write (25) as:

$\begin{array}{l}{\Omega }^{\text{T}}\left(t\right)H-{\Omega }^{\text{T}}\left(t\right){\stackrel{¯}{W}}_{C}\Phi \\ ={\pi }_{k}{\int }_{0}^{t}\text{ }{\Omega }^{\text{T}}\left(s\right)\text{d}s\stackrel{¯}{\Lambda }\left(\alpha \right)H+{\pi }_{k}{\int }_{0}^{t}\text{ }{\Omega }^{\text{T}}\left(s\right)\text{d}s\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{F}\Phi \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+{\pi }_{k}{\int }_{0}^{t}\text{ }{\Omega }^{\text{T}}\left(s\right)\text{d}s\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{D}H.\end{array}$ (28)

From (10) we have that:

${\pi }_{k}{\int }_{0}^{t}\text{ }{\Omega }^{\text{T}}\left(s\right)\text{d}s={\Omega }^{\text{T}}\left(t\right)\stackrel{¯}{P},$ (29)

where $\stackrel{¯}{P}$ is a ${2}^{k}n×{2}^{k}n$ matrix given by:

$\stackrel{¯}{P}=\left[\begin{array}{cccc}{P}^{\text{T}}& 0& \cdots & 0\\ 0& {P}^{\text{T}}& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & {P}^{\text{T}}\end{array}\right],$ (30)

from which we arrive at:

$\begin{array}{l}{\Omega }^{\text{T}}\left(t\right)H-{\Omega }^{\text{T}}\left(t\right){\stackrel{¯}{W}}_{C}\Phi \\ ={\Omega }^{\text{T}}\left(t\right)\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\alpha \right)H+{\Omega }^{\text{T}}\left(t\right)\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{F}\Phi +{\Omega }^{\text{T}}\left(t\right)\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{D}H.\end{array}$ (31)

Since $\Omega \left(t\right)$ is nonsingular we have:

$H={\left[I-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\alpha \right)-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{D}\right]}^{-1}\left[{\stackrel{¯}{W}}_{C}+\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{F}\right]\Phi .$ (32)

From (32) we can define a mapping ${\mathcal{T}}_{k}:M\to M$ :

${\mathcal{T}}_{k}={\left[I-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\alpha \right)-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{D}\right]}^{-1}\left[{\stackrel{¯}{W}}_{C}+\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{F}\right]$ (33)

However, by construction we can see that the domain of ${\mathcal{T}}_{k}$ is in fact the subspace of M consisting of all functions generated by linear combinations of the first ${2}^{k}$ Walsh functions that are equal to 0 for $t\in \left[0,\omega -\tau \right)$; we denote this subspace as ${M}^{\prime }$. Likewise, we can extend the domain of definition of the solution map of (1) from the space ${\mathcal{C}}_{\left[-\tau ,0\right]}$ to the subspace of ${L}_{\left[-\omega ,0\right]}^{2}$ consisting of all functions that are continuous on $\left[-\tau ,0\right]$ and equal to 0 for $t\in \left[-\omega ,-\tau \right)$; this space, denoted ${\mathcal{C}}^{\prime }$, is isomorphic to the space ${\mathcal{C}}_{\left[-\tau ,0\right]}$ and its projection on the space M corresponds to ${M}^{\prime }$. We can now say that ${\mathcal{T}}_{k}$ is an approximation of the solution mapping T of (1): we obtain an approximated solution $\stackrel{^}{x}\left(t\right)={\Omega }^{\text{T}}\left(t\right)H$ which satisfies (21), and thus we have an approximation of the solution map of (1).

In order to study the monodromy operator of (1), we must study the state at $t=\omega$; that is, we must know the solution of (1) for $t\in \left[\omega -\tau ,\omega \right]$ corresponding to an initial condition. Since ${M}^{\prime }$ is the projection onto the space M of ${\mathcal{C}}^{\prime }$, and since ${\mathcal{C}}^{\prime }$ is isomorphic to ${\mathcal{C}}_{\left[\omega -\tau ,\omega \right]}$, the approximation of the state at $t=\omega$ will be given by the projection of the approximated solution onto ${M}^{\prime }$. Hence, by Lemma 7 we will have:

${H}^{\prime }={\stackrel{¯}{W}}_{P}H.$ (34)

where ${\stackrel{¯}{W}}_{P}$ is a ${2}^{k}n×{2}^{k}n$ block diagonal matrix which projects the approximated solution ${\Omega }^{\text{T}}\left(t\right)H$ onto the subspace ${M}^{\prime }$, with diagonal entries ${W}_{P}^{\text{T}}$. In the case of Walsh functions, we will have

${W}_{P}^{\text{T}}=\frac{1}{{2}^{k}}{W}_{S\left(t-\left(1-m\right)\right)}\left[k\right]W\left[k\right]$

where ${W}_{S\left(t-\left(1-m\right)\right)}\left[k\right]$ is a Walsh matrix of order ${2}^{k}$ whose first ${2}^{k}-m$ columns are replaced by zeros.

From (34) we obtain our approximated monodromy operator, given by:

${\mathcal{U}}_{k}={\stackrel{¯}{W}}_{P}{\left[I-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\alpha \right)-\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{D}\right]}^{-1}\left[{\stackrel{¯}{W}}_{C}+\stackrel{¯}{P}\stackrel{¯}{\Lambda }\left(\beta \right){\stackrel{¯}{W}}_{F}\right].$ (35)

Approximation by Block Pulse Functions

Equation (35) for the approximation of ${\mathcal{U}}_{k}$ is valid for any set of orthonormal functions that can be obtained by linear combinations of Walsh functions. For the numerical calculation of the approximation of the monodromy operator, Block Pulse Functions are the most advantageous set, since their simplicity entails a lighter computational load. In particular, for the Block Pulse Function approximation we will have:

${\stackrel{¯}{W}}_{P}=diag\left\{{W}_{P}^{\text{T}},\cdots ,{W}_{P}^{\text{T}}\right\},$ (36)

${W}_{P}^{\text{T}}=\left[\begin{array}{cc}{0}_{{2}^{k}-m×{2}^{k}-m}& {0}_{{2}^{k}-m×m}\\ {0}_{m×{2}^{k}-m}& {I}_{m×m}\end{array}\right],$

${\stackrel{¯}{W}}_{C}=diag\left\{{W}_{C}^{\text{T}},\cdots ,{W}_{C}^{\text{T}}\right\},$ (37)

${W}_{C}^{\text{T}}=\left[\begin{array}{ccccc}0& 0& \cdots & 0& 1\\ 0& 0& \cdots & 0& 1\\ ⋮& ⋮& \ddots & ⋮& ⋮\\ 0& 0& \cdots & 0& 1\end{array}\right],$

$\stackrel{¯}{P}=diag\left\{{P}^{\text{T}},\cdots ,{P}^{\text{T}}\right\}$ (38)

$P=\frac{1}{{2}^{k}}\left[\begin{array}{cccc}\frac{1}{2}& 1& \cdots & 1\\ 0& \frac{1}{2}& \cdots & 1\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & \frac{1}{2}\end{array}\right].$

If we define:

$\begin{array}{cc}{Q}_{-}=\left[\begin{array}{ccccc}0& \cdots & 0& 0& 0\\ 1& \cdots & 0& 0& 0\\ ⋮& \ddots & ⋮& ⋮& ⋮\\ 0& \cdots & 1& 0& 0\\ 0& \cdots & 0& 1& 0\end{array}\right],& {Q}_{+}=\left[\begin{array}{ccccc}0& 1& 0& \cdots & 0\\ 0& 0& 1& \cdots & 0\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& 0& \cdots & 0\\ 0& 0& 0& \cdots & 0\end{array}\right],\end{array}$ (39)

we will have:

${\stackrel{¯}{W}}_{D}=diag\left\{{W}_{D}^{\text{T}},\cdots ,{W}_{D}^{\text{T}}\right\}$ (40)

${W}_{D}^{\text{T}}={Q}_{-}^{m},$

and

${\stackrel{¯}{W}}_{F}=diag\left\{{W}_{F}^{\text{T}},\cdots ,{W}_{F}^{\text{T}}\right\}$ (41)

${W}_{F}^{\text{T}}={Q}_{+}^{{2}^{k}-m}.$
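The constant matrices (36)-(41) are straightforward to assemble. The following numpy sketch does so for given $k$ and $m$; the helper name `bpf_matrices` and its interface are ours, not the paper's:

```python
import numpy as np

def bpf_matrices(k, m):
    """Constant matrices of Section 4.1 for N = 2**k Block Pulse
    Functions and a delay spanning m dyadic subintervals."""
    N = 2 ** k
    WP = np.zeros((N, N))                   # W_P^T: keeps the last m coefficients
    WP[N - m:, N - m:] = np.eye(m)
    WC = np.zeros((N, N))                   # W_C^T: every row picks the last coefficient, phi(0)
    WC[:, -1] = 1.0
    # P: operational matrix of integration (38), upper triangular with 1/2 on the diagonal
    P = (np.eye(N) / 2 + np.triu(np.ones((N, N)), 1)) / N
    WD = np.linalg.matrix_power(np.eye(N, k=-1), m)       # W_D^T = Q_-^m
    WF = np.linalg.matrix_power(np.eye(N, k=1), N - m)    # W_F^T = Q_+^(2^k - m)
    return WP, WC, P, WD, WF
```

Since $Q_-$ and $Q_+$ are nilpotent shift matrices, ${W}_{D}^{\text{T}}$ delays a coefficient vector by m subintervals, while ${W}_{F}^{\text{T}}$ moves the initial-function coefficients from the last m slots to the first m.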

Finally, if in (15) we have $A\left(t\right)=\left[\begin{array}{cccc}{a}_{11}\left(t\right)& {a}_{12}\left(t\right)& \cdots & {a}_{1n}\left(t\right)\\ {a}_{21}\left(t\right)& {a}_{22}\left(t\right)& \cdots & {a}_{2n}\left(t\right)\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{n1}\left(t\right)& {a}_{n2}\left(t\right)& \cdots & {a}_{nn}\left(t\right)\end{array}\right]$, then $\stackrel{¯}{\Lambda }\left(\alpha \right)$ will be given as in (15) with:

$\Lambda \left({\stackrel{¯}{\alpha }}_{ij}\right)=\left[\begin{array}{cccc}{\stackrel{^}{a}}_{ij}\left(\frac{0}{{2}^{k}}\right)& 0& \cdots & 0\\ 0& {\stackrel{^}{a}}_{ij}\left(\frac{1}{{2}^{k}}\right)& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & {\stackrel{^}{a}}_{ij}\left(\frac{{2}^{k}-1}{{2}^{k}}\right)\end{array}\right],$ (42)

where

${\stackrel{^}{a}}_{ij}\left(\frac{q}{{2}^{k}}\right)={\int }_{\frac{q}{{2}^{k}}}^{\frac{q+1}{{2}^{k}}}{a}_{ij}\left(t\right)\text{d}t,\text{ }q=0,1,\cdots ,{2}^{k}-1,$ (43)

with $\stackrel{¯}{\Lambda }\left(\beta \right)$ obtained in the same way from $B\left(t\right)$.
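The diagonal entries of each block (42) can be computed by simple quadrature. In the sketch below each entry is taken as the average of ${a}_{ij}$ over the subinterval, i.e. its Block Pulse coefficient; whether a factor of ${2}^{k}$ relative to the bare integral in (43) is absorbed here or elsewhere depends on the normalization fixed in (15), which lies outside this section, so treat the scaling as an assumption (the function name `lambda_block` is also ours):

```python
import numpy as np

def lambda_block(a, k):
    """Diagonal block of (42) for one entry a(t) of A(t): one value per
    dyadic subinterval, computed by midpoint quadrature. The callable a
    must accept numpy arrays."""
    N = 2 ** k
    h = 1.0 / N
    entries = []
    for q in range(N):
        s = q * h + h * (np.arange(200) + 0.5) / 200   # midpoint samples of the subinterval
        entries.append(float(a(s).mean()))             # average of a over [q/N, (q+1)/N)
    return np.diag(entries)
```

Midpoint quadrature is exact for affine coefficients and second-order accurate otherwise, which is below the $O\left(1/{2}^{k}\right)$ error of the method itself.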

5. Convergence of ${\mathcal{U}}_{k}$

We denote by ${\mathcal{C}}_{\Delta }$ the space of functions from $\left[0,1\right]$ to ${ℝ}^{n}$ whose limit from the left exists at every point of $\left(0,1\right]$, whose limit from the right exists at every point of $\left[0,1\right)$, and which are continuous at every point that is not a dyadic rational; that is, ${\mathcal{C}}_{\Delta }$ is the space of continuous functions on $\left[0,1\right]$ that may have jump discontinuities at the dyadic rationals.

Proposition 8. The space ${\mathcal{C}}_{\Delta }$ is a Banach space with the sup norm.

Proof. Let $\left\{{x}_{n}\right\}$ be a Cauchy sequence in ${\mathcal{C}}_{\Delta }$, then $\forall \epsilon >0$, $\exists N$ such that $n,m\ge N$ implies:

$\underset{t\in \left[0,1\right]}{\mathrm{sup}}‖{x}_{n}\left(t\right)-{x}_{m}\left(t\right)‖<\epsilon .$

Therefore for every ${t}_{0}\in \left[0,1\right]$ we have $‖{x}_{n}\left({t}_{0}\right)-{x}_{m}\left({t}_{0}\right)‖<\epsilon$ for $n,m\ge N$. Thus $\left\{{x}_{n}\left({t}_{0}\right)\right\}$ is a Cauchy sequence in ${ℝ}^{n}$, and therefore convergent; we denote its limit by ${x}^{*}\left({t}_{0}\right)$. Then for every $\epsilon$ there exists N such that for $n\ge N$ :

$‖{x}_{n}\left({t}_{0}\right)-{x}^{*}\left({t}_{0}\right)‖\le \epsilon ,\text{ }\forall {t}_{0}\in \left[0,1\right],$

that is, $\left\{{x}_{n}\right\}$ converges to ${x}^{*}$ uniformly.

Now let ${t}_{0}\in \left[0,1\right)$; then for every ${x}_{n}$ the limit from the right at ${t}_{0}$ exists, that is, for every ${x}_{n}\in \left\{{x}_{n}\right\}$ there exists a unique ${L}_{n}^{+}$ such that for every $\epsilon >0$ there exists ${\delta }_{n}>0$ such that:

$0<t-{t}_{0}<{\delta }_{n}\Rightarrow ‖{x}_{n}\left(t\right)-{L}_{n}^{+}‖<\epsilon .$ (44)

The sequence $\left\{{L}_{n}^{+}\right\}$ lies in ${ℝ}^{n}$, and is therefore convergent if and only if it is Cauchy. Assume by contradiction that $\left\{{L}_{n}^{+}\right\}$ is not Cauchy; then there exists ${\epsilon }_{1}$ such that for every ${N}_{1}$ there exist ${n}_{1},{m}_{1}\ge {N}_{1}$ such that:

$‖{L}_{{m}_{1}}^{+}-{L}_{{n}_{1}}^{+}‖\ge {\epsilon }_{1}.$ (45)

Since $\left\{{x}_{n}\right\}$ is Cauchy we have that there exists ${N}_{2}$ such that $n,m\ge {N}_{2}$ implies:

$\underset{t\in \left[0,1\right]}{\mathrm{sup}}‖{x}_{n}\left(t\right)-{x}_{m}\left(t\right)‖<\frac{{\epsilon }_{1}}{3}.$

Let ${n}_{1},{m}_{1}\ge {N}_{2}$ be such that (45) holds, and let ${\delta }_{1}$ be such that for $0<t-{t}_{0}<{\delta }_{1}$ we have:

$‖{x}_{{n}_{1}}\left(t\right)-{L}_{{n}_{1}}‖<\frac{{\epsilon }_{1}}{3},$

similarly, let ${\delta }_{2}$ be such that for $0<t-{t}_{0}<{\delta }_{2}$ we have:

$‖{x}_{{m}_{1}}\left(t\right)-{L}_{{m}_{1}}‖<\frac{{\epsilon }_{1}}{3}.$

Take $\delta =\mathrm{min}\left\{{\delta }_{1},{\delta }_{2}\right\}$; then for $0<t-{t}_{0}<\delta$ we have:

$‖{x}_{{n}_{1}}\left(t\right)-{x}_{{m}_{1}}\left(t\right)‖<\frac{{\epsilon }_{1}}{3}$

$‖\left({x}_{{n}_{1}}\left(t\right)-{L}_{{n}_{1}}^{+}-{x}_{{m}_{1}}\left(t\right)+{L}_{{m}_{1}}^{+}\right)-\left({L}_{{m}_{1}}^{+}-{L}_{{n}_{1}}^{+}\right)‖<\frac{{\epsilon }_{1}}{3}$

$|‖{x}_{{n}_{1}}\left(t\right)-{L}_{{n}_{1}}^{+}-{x}_{{m}_{1}}\left(t\right)+{L}_{{m}_{1}}^{+}‖-‖{L}_{{m}_{1}}^{+}-{L}_{{n}_{1}}^{+}‖|<\frac{{\epsilon }_{1}}{3},$

but $‖{L}_{{m}_{1}}^{+}-{L}_{{n}_{1}}^{+}‖\ge {\epsilon }_{1}$ and $‖{x}_{{n}_{1}}\left(t\right)-{L}_{{n}_{1}}^{+}-{x}_{{m}_{1}}\left(t\right)+{L}_{{m}_{1}}^{+}‖<\frac{2{\epsilon }_{1}}{3}$, therefore:

${\epsilon }_{1}\le ‖{L}_{{m}_{1}}^{+}-{L}_{{n}_{1}}^{+}‖<\frac{{\epsilon }_{1}}{3}+‖{x}_{{n}_{1}}\left(t\right)-{L}_{{n}_{1}}^{+}-{x}_{{m}_{1}}\left(t\right)+{L}_{{m}_{1}}^{+}‖<\frac{{\epsilon }_{1}}{3}+\frac{2{\epsilon }_{1}}{3}={\epsilon }_{1},$

which is a contradiction, therefore the sequence $\left\{{L}_{n}^{+}\right\}$ converges to a limit ${L}^{+}$.

Now let $\epsilon >0$, and let $N\in ℕ$ and $\delta$ be such that for $0<t-{t}_{0}<\delta$ :

$‖{x}_{N}\left(t\right)-{x}^{*}\left(t\right)‖<\frac{\epsilon }{3}$

$‖{x}_{N}\left(t\right)-{L}_{N}^{+}‖<\frac{\epsilon }{3}$

$‖{L}_{N}^{+}-{L}^{+}‖<\frac{\epsilon }{3},$

then:

$‖{x}^{*}\left(t\right)-{L}^{+}‖\le ‖{x}_{N}\left(t\right)-{x}^{*}\left(t\right)‖+‖{x}_{N}\left(t\right)-{L}_{N}^{+}‖+‖{L}_{N}^{+}-{L}^{+}‖<\frac{\epsilon }{3}+\frac{\epsilon }{3}+\frac{\epsilon }{3}=\epsilon .$

Therefore ${L}^{+}$ is the limit from the right of ${x}^{*}$ at $t={t}_{0}$; hence the right limits exist. Repeating the argument for the left limits, we conclude that the left limits of ${x}^{*}$ exist where required.

Now let ${t}_{0}\in \left[0,1\right]$ with ${t}_{0}$ not a dyadic rational; then each ${x}_{n}$ is continuous at ${t}_{0}$, and since the convergence is uniform, ${x}^{*}=\underset{n\to \infty }{\mathrm{lim}}{x}_{n}$ is continuous at ${t}_{0}$. Then ${x}^{*}\in {\mathcal{C}}_{\Delta }$, which concludes the proof.

We make use of the following:

Lemma 9.  Let the dyadic intervals ${\Delta }_{k}\left(\mathcal{l}\right)=\left[\frac{\mathcal{l}-1}{{2}^{k}},\frac{\mathcal{l}}{{2}^{k}}\right)$, $\mathcal{l}\in {\Gamma }_{k}$ with

${\Gamma }_{k}=\left\{1,\cdots ,{2}^{k}\right\}$ and let ${\chi }_{{\Delta }_{k}\left(\mathcal{l}\right)}\left(t\right)$ be the characteristic function of the interval ${\Delta }_{k}\left(\mathcal{l}\right)$, then:

${\pi }_{k}{\int }_{0}^{t}\text{ }{\chi }_{{\Delta }_{k}\left(\mathcal{l}\right)}\left(s\right)\text{d}s=\left\{\begin{array}{ll}0,\hfill & 0\le t<\frac{\mathcal{l}-1}{{2}^{k}}\hfill \\ \frac{1}{{2}^{k+1}},\hfill & \frac{\mathcal{l}-1}{{2}^{k}}\le t<\frac{\mathcal{l}}{{2}^{k}}\hfill \\ \frac{1}{{2}^{k}},\hfill & \frac{\mathcal{l}}{{2}^{k}}\le t\le 1\hfill \end{array}$ (46)
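Lemma 9 can be checked numerically: project the running integral of a characteristic function onto the first ${2}^{k}$ Block Pulse Functions (interval averages) and compare with (46). A small sketch, with a helper name of our own:

```python
import numpy as np

def pi_k_integral_of_chi(k, l):
    """pi_k applied to t -> integral_0^t chi_{Delta_k(l)}(s) ds,
    returned as the 2**k Block Pulse coefficients (interval means)."""
    N = 2 ** k
    h = 1.0 / N
    coeffs = np.zeros(N)
    for j in range(N):
        # sample the ramp integral at midpoints of interval j, then average
        t = (j + (np.arange(1000) + 0.5) / 1000) * h
        vals = np.clip(t - (l - 1) * h, 0.0, h)   # integral of chi up to time t
        coeffs[j] = vals.mean()
    return coeffs
```

The coefficient on the interval ${\Delta }_{k}\left(\mathcal{l}\right)$ itself is $\frac{1}{{2}^{k+1}}$, the average of the ramp, exactly as in (46).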

Now we state:

Theorem 10. In the Banach space ${\mathcal{C}}_{\Delta }$, the approximated monodromy operator ${\mathcal{U}}_{k}$ converges in the strong operator sense to the monodromy operator U.

Proof. Let ${\pi }_{k}$ denote the projection mapping that takes the Walsh expansion $f\left(t\right)={\sum }_{\mathcal{l}=0}^{\infty }\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$ to the Walsh approximation $\stackrel{^}{f}\left(t\right)={\sum }_{\mathcal{l}=0}^{{2}^{k}-1}\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$. Without risk of confusion we will write ${\mathcal{U}}_{k}\phi \left(\theta \right)$ for the operator ${\mathcal{U}}_{k}{\pi }_{k}\phi \left(\theta \right)$ and ${\mathcal{T}}_{k}\phi \left(\theta \right)$ for the operator ${\mathcal{T}}_{k}{\pi }_{k}\phi \left(\theta \right)$.

Denote by $x\left(t\right)=T\phi \left(\theta \right)$ the solution of (20) corresponding to an initial condition $\phi \left(\theta \right)$ for $\theta \in \left[-\tau ,0\right]$. Likewise denote by $\stackrel{^}{x}\left(t\right)={\mathcal{T}}_{k}\stackrel{^}{\phi }\left(\theta \right)$ the solution of (21). Clearly $x\left(t\right)=U\phi \left(\theta \right)$ and $\stackrel{^}{x}\left(t\right)={\mathcal{U}}_{k}\phi \left(\theta \right)$ for $t\in \left[\omega -\tau ,\omega \right]$.

Let $\stackrel{^}{A}\left(t\right)={\pi }_{k}A\left(t\right)$, $\stackrel{^}{B}\left(t\right)={\pi }_{k}B\left(t\right)$ and $\stackrel{^}{\phi }\left(t\right)={\pi }_{k}\phi \left(t\right)$. Then:

$‖\stackrel{^}{x}\left(t\right)-x\left(t\right)‖\le ‖{\pi }_{k}x\left(t\right)-x\left(t\right)‖+‖\stackrel{^}{x}\left(t\right)-{\pi }_{k}x\left(t\right)‖.$ (47)

Since the solution $x\left(t\right)$ is continuous we have that the first term on the right converges to 0 as $k\to \infty$. We now observe that $\stackrel{^}{x}\left(t\right)$ and $x\left(t\right)$ satisfy:

$\begin{array}{c}\stackrel{^}{x}\left(t\right)=\phi \left(0\right)+{\pi }_{k}{\int }_{0}^{t}\text{ }\stackrel{^}{A}\left(s\right)\stackrel{^}{x}\left(s\right)\text{d}s+{\pi }_{k}{\int }_{0}^{t}\text{ }S\left(t-\tau \right)\stackrel{^}{B}\left(s\right)\stackrel{^}{x}\left(s-\tau \right)\text{d}s\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\pi }_{k}{\int }_{0}^{t}\left(1-S\left(t-\tau \right)\right)\stackrel{^}{B}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)\text{d}s\end{array}$ (48)

and

$\begin{array}{c}x\left(t\right)=\phi \left(0\right)+{\int }_{0}^{t}A\left(s\right)x\left(s\right)\text{d}s+{\int }_{0}^{t}\text{ }S\left(t-\tau \right)B\left(s\right)x\left(s-\tau \right)\text{d}s\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{0}^{t}\left(1-S\left(t-\tau \right)\right)B\left(s\right)\phi \left(s-\tau \right)\text{d}s,\end{array}$ (49)

respectively. Set $\mathcal{A}\left(t\right)=A\left(t\right)+S\left(t-\tau \right)B\left(t\right)$, $\mathcal{B}\left(t\right)=\left(1-S\left(t-\tau \right)\right)B\left(t\right)$, the same way set $\stackrel{^}{\mathcal{A}}\left(t\right)$ and $\stackrel{^}{\mathcal{B}}\left(t\right)$ as their respective approximations. Define ${y}_{k}\left(t\right)=\stackrel{^}{x}\left(t\right)-{\pi }_{k}x\left(t\right)$. We have:

$\begin{array}{c}{y}_{k}\left(t\right)={\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right)\stackrel{^}{x}\left(s\right)\text{d}s-{\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right){\pi }_{k}x\left(s\right)\text{d}s+{\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right){\pi }_{k}x\left(s\right)\text{d}s-{\pi }_{k}{\int }_{0}^{t}\text{ }\mathcal{A}\left(s\right)x\left(s\right)\text{d}s\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{B}}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)-\mathcal{B}\left(s\right)\phi \left(s-\tau \right)\right]\text{d}s,\end{array}$ (50)

and therefore:

$\begin{array}{c}{y}_{k}\left(t\right)={\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right){y}_{k}\left(s\right)\text{d}s+{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{A}}\left(s\right){\pi }_{k}x\left(s\right)-\mathcal{A}\left(s\right)x\left(s\right)\right]\text{d}s\text{ }\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{B}}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)-\mathcal{B}\left(s\right)\phi \left(s-\tau \right)\right]\text{d}s.\end{array}$ (51)

We have that the right side of (51) is piecewise constant, so if we define the dyadic intervals ${\Delta }_{k}\left(\mathcal{l}\right)=\left[\frac{\mathcal{l}-1}{{2}^{k}},\frac{\mathcal{l}}{{2}^{k}}\right)$, for $\mathcal{l}\in \Gamma$ with $\Gamma =\left\{1,\cdots ,{2}^{k}\right\}$, we can write:

${\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right){y}_{k}\left(s\right)\text{d}s={\pi }_{k}{\int }_{0}^{t}\text{ }\underset{q=1}{\overset{{2}^{k}}{\sum }}\text{ }\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right){y}_{k}\left(\frac{q-1}{{2}^{k}}\right){\chi }_{{\Delta }_{k}\left(q\right)}\left(s\right)\text{d}s,$ (52)

where ${\chi }_{{\Delta }_{k}\left(q\right)}\left(t\right)$ is the characteristic function of the interval ${\Delta }_{k}\left(q\right)$. From Lemma 9, we will have for $t\in {\Delta }_{k}\left(\mathcal{l}\right)$ :

${\pi }_{k}{\int }_{0}^{t}\stackrel{^}{\mathcal{A}}\left(s\right){y}_{k}\left(s\right)\text{d}s=\frac{1}{{2}^{k}}\underset{q=1}{\overset{\mathcal{l}-1}{\sum }}\text{ }\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right){y}_{k}\left(\frac{q-1}{{2}^{k}}\right)+\frac{1}{{2}^{k+1}}\stackrel{^}{\mathcal{A}}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right){y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right).$ (53)

Since ${y}_{k}\left(t\right)$ is constant on the dyadic intervals then ${y}_{k}\left(t\right)={y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)$ for $t\in {\Delta }_{k}\left(\mathcal{l}\right)$. Therefore:

$\begin{array}{c}‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\le \frac{1}{{2}^{k}}‖\underset{q=1}{\overset{\mathcal{l}-1}{\sum }}\text{ }\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right){y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖+\frac{1}{{2}^{k+1}}‖\stackrel{^}{\mathcal{A}}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{0\le t\le 1}{\mathrm{max}}‖{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{B}}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)-\mathcal{B}\left(s\right)\phi \left(s-\tau \right)\right]\text{d}s‖\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{0\le t\le 1}{\mathrm{max}}‖{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{A}}\left(s\right){\pi }_{k}x\left(s\right)-\mathcal{A}\left(s\right)x\left(s\right)\right]\text{d}s‖.\end{array}$ (54)

Define ${C}_{k}^{*}=1-\frac{1}{{2}^{k+1}}\underset{q\in \Gamma }{\mathrm{max}}‖\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right)‖$, and assume ${C}_{k}^{*}>0$. This restriction is not significant, since $A\left(t\right)$ and $B\left(t\right)$ are bounded, therefore $\stackrel{^}{A}\left(t\right)$ and $\stackrel{^}{B}\left(t\right)$ are bounded, and $\frac{1}{{2}^{k+1}}\to 0$ as $k\to \infty$.

We have:

$‖\underset{q=1}{\overset{\mathcal{l}-1}{\sum }}\text{ }\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right){y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖\le \underset{q\in \Gamma }{\mathrm{max}}‖\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right)‖‖\underset{q=1}{\overset{\mathcal{l}-1}{\sum }}\text{ }{y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖.$ (55)

From all of the above and defining ${D}_{k}^{*}\triangleq \frac{1}{{2}^{k}}\frac{{A}_{k}^{*}}{{C}_{k}^{*}}$, with ${A}_{k}^{*}=\underset{q\in \Gamma }{\mathrm{max}}‖\stackrel{^}{\mathcal{A}}\left(\frac{q-1}{{2}^{k}}\right)‖$ and denoting:

$\begin{array}{l}{B}_{k}^{*}=\frac{1}{{C}_{k}^{*}}\left[\underset{0\le t\le 1}{\mathrm{max}}‖{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{A}}\left(s\right){\pi }_{k}x\left(s\right)-\mathcal{A}\left(s\right)x\left(s\right)\right]\text{d}s‖\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{0\le t\le 1}{\mathrm{max}}‖{\pi }_{k}{\int }_{0}^{t}\left[\stackrel{^}{\mathcal{B}}\left(s\right)\stackrel{^}{\phi }\left(s-\tau \right)-\mathcal{B}\left(s\right)\phi \left(s-\tau \right)\right]\text{d}s‖\right],\end{array}$

we arrive at:

$‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\le {B}_{k}^{*}+{D}_{k}^{*}\underset{q=1}{\overset{\mathcal{l}-1}{\sum }}‖{y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖.$ (56)

This recursion yields:

$‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\le {B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-1}.$ (57)

Indeed, for $\mathcal{l}=1$, $‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\le {B}_{k}^{*}$, and assuming ${B}_{k}^{*}+{D}_{k}^{*}\underset{q=1}{\overset{\mathcal{l}-2}{\sum }}‖{y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖\le {B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-2}$ we have:

$\begin{array}{c}‖{y}_{k}\left(\frac{\mathcal{l}-1}{{2}^{k}}\right)‖\le {B}_{k}^{*}+{D}_{k}^{*}\underset{q=1}{\overset{\mathcal{l}-2}{\sum }}‖{y}_{k}\left(\frac{q-1}{{2}^{k}}\right)‖+{D}_{k}^{*}‖{y}_{k}\left(\frac{\mathcal{l}-2}{{2}^{k}}\right)‖\\ \le {B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-2}+{D}_{k}^{*}{B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-2}\\ \le {B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-1}.\end{array}$ (58)
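The passage from (56) to (57) is a discrete Grönwall-type argument. Iterating (56) with equality (the worst case) confirms the closed-form bound numerically, for arbitrary illustrative constants:

```python
# Iterate (56) with equality (the worst case) and check the bound (57).
B_star, D_star = 0.3, 0.05        # arbitrary positive constants playing B_k*, D_k*
y = []
for l in range(1, 41):            # l = 1, ..., 40
    y.append(B_star + D_star * sum(y))
    # bound (57), with a small tolerance for floating-point accumulation
    assert y[-1] <= B_star * (1 + D_star) ** (l - 1) + 1e-12
```

In fact the worst case attains the bound: with equality in (56), one gets ${y}_{\mathcal{l}}={B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{\mathcal{l}-1}$ exactly.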

For $t\in {\Delta }_{k}\left(\mathcal{l}\right)$ we have $\frac{\mathcal{l}-1}{{2}^{k}}\le t$, then $\mathcal{l}-1\le {2}^{k}t$, therefore:

$‖{y}_{k}\left(t\right)‖\le {B}_{k}^{*}{\left(1+{D}_{k}^{*}\right)}^{{2}^{k}t}.$ (59)

Taking into account the definition of ${D}_{k}^{*}$ we have:

${\left(1+{D}_{k}^{*}\right)}^{{2}^{k}t}\le {\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{{2}^{k}t}\le {\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{{2}^{k}t-\frac{{A}_{k}^{*}}{2}t}{\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{\frac{{A}_{k}^{*}}{2}t}.$ (60)

Since $t\in \left[0,1\right]$ and we assumed $1-\frac{1}{{2}^{k+1}}{A}_{k}^{*}>0$, then:

${\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{{2}^{k}t-\frac{{A}_{k}^{*}}{2}t}\le {\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}+\frac{1}{2!}{\left(\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{2}+\cdots \right)}^{{2}^{k}t-\frac{{A}_{k}^{*}}{2}t}\le {\text{e}}^{{A}_{k}^{*}t}\le {\text{e}}^{{A}_{k}^{*}}$ (61)

and

${\left(1+\frac{{A}_{k}^{*}}{{2}^{k}-\frac{{A}_{k}^{*}}{2}}\right)}^{\frac{{A}_{k}^{*}}{2}t}={\left(1+{D}_{k}^{*}\right)}^{\frac{{A}_{k}^{*}}{2}t}\le {\left(1+{D}_{k}^{*}\right)}^{\frac{{A}_{k}^{*}}{2}}.$ (62)

From (61), (62) and (59) we arrive at:

$‖{y}_{k}\left(t\right)‖\le {B}_{k}^{*}{\text{e}}^{{A}_{k}^{*}}{\left(1+{D}_{k}^{*}\right)}^{\frac{{A}_{k}^{*}}{2}},\text{ }\forall t\in \left[0,1\right].$ (63)

The expression ${\text{e}}^{{A}_{k}^{*}}{\left(1+{D}_{k}^{*}\right)}^{\frac{{A}_{k}^{*}}{2}}$ is bounded, and clearly $x\in {\mathcal{C}}_{\Delta }$; since $\phi \left(t-\omega \right)\in {\mathcal{C}}_{\Delta }$, we have ${B}_{k}^{*}\to 0$ as $k\to \infty$, and therefore $‖{y}_{k}‖\to 0$ as $k\to \infty$, that is:

$‖{\mathcal{T}}_{k}\phi -T\phi ‖\underset{k\to \infty }{\to }0,\text{ }\forall \phi \in {\mathcal{C}}_{\Delta },$ (64)

from where it follows immediately that:

$‖{\mathcal{U}}_{k}\phi -U\phi ‖\underset{k\to \infty }{\to }0,\text{ }\forall \phi \in {\mathcal{C}}_{\Delta },$ (65)

which concludes the proof.

Corollary 1. If $\phi$ is Lipschitz, then the error of the approximation ${\mathcal{U}}_{k}$, satisfies $‖{\mathcal{U}}_{k}\phi -U\phi ‖\in O\left(\frac{1}{{2}^{k}}\right)$.

Proof. From (59), we have that since $\phi$ is Lipschitz and $x\left(t\right)$ is differentiable, the result follows immediately from Lemma 3.
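The $O\left(\frac{1}{{2}^{k}}\right)$ rate can be observed on the simplest sanity check: the undelayed scalar equation $\stackrel{˙}{x}=x$ with $\omega =1$, for which the only nonzero eigenvalue of the monodromy operator is e. With $\stackrel{¯}{\Lambda }\left(\beta \right)=0$ and $\stackrel{¯}{\Lambda }\left(\alpha \right)=aI$, (35) reduces to ${W}_{P}^{\text{T}}{\left(I-a{P}^{\text{T}}\right)}^{-1}{W}_{C}^{\text{T}}$; the sketch below is under these simplifying assumptions and is not the paper's example:

```python
import numpy as np

def uk_scalar_no_delay(a, k, m):
    """U_k of (35) for the scalar undelayed equation x' = a x on [0, 1]
    (so Lambda(beta) = 0 and Lambda(alpha) = a*I); a sanity-check sketch."""
    N = 2 ** k
    P = (np.eye(N) / 2 + np.triu(np.ones((N, N)), 1)) / N
    WC = np.zeros((N, N)); WC[:, -1] = 1.0
    WP = np.zeros((N, N)); WP[N - m:, N - m:] = np.eye(m)
    return WP @ np.linalg.solve(np.eye(N) - a * P.T, WC)

# The nonzero eigenvalue should approach e as k grows, with error O(1/2^k):
# each refinement roughly halves the error, consistent with Corollary 1.
errors = []
for k in range(4, 8):
    lam = max(abs(np.linalg.eigvals(uk_scalar_no_delay(1.0, k, m=2))))
    errors.append(abs(lam - np.e))
```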

We now prove that the approximated solution is indeed equal to the Walsh approximation of the exact solution. First we state:

Lemma 11.  Let ${\sum }_{\mathcal{l}=0}^{\infty }\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$ be the Walsh expansion of a function $f\left(t\right)$, if ${\sum }_{\mathcal{l}=0}^{\infty }\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$ converges to zero everywhere, then ${c}_{i}=0$, $i\ge 0$.

Now we are ready to prove:

Theorem 12. Let $x\left(t\right)=T\phi \left(\theta \right)$ be the solution of (20) corresponding to an initial condition $\phi \left(\theta \right)$ for $\theta \in \left[-\tau ,0\right]$, and let $\stackrel{^}{x}\left(t\right)={\mathcal{T}}_{k}\stackrel{^}{\phi }\left(\theta \right)$ be the solution of (21). Then $\stackrel{^}{x}\left(t\right)={\pi }_{k}x\left(t\right)$.

Proof. We have $\stackrel{^}{x}\left(t\right)={\sum }_{\mathcal{l}=0}^{{2}^{k}-1}\text{ }{c}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$ and ${\pi }_{k}x\left(t\right)={\sum }_{\mathcal{l}=0}^{{2}^{k}-1}\text{ }{d}_{\mathcal{l}}{w}_{\mathcal{l}}\left(t\right)$, for some coefficients ${c}_{\mathcal{l}}$ and ${d}_{\mathcal{l}}$; then:

$‖{\pi }_{k}x\left(t\right)-\stackrel{^}{x}\left(t\right)‖\le ‖x\left(t\right)-\stackrel{^}{x}\left(t\right)‖+‖{\pi }_{k}x\left(t\right)-x\left(t\right)‖.$ (66)

Since the solution $x\left(t\right)$ is continuous, from (64) the two terms on the right converge to zero; therefore ${\sum }_{\mathcal{l}=0}^{{2}^{k}-1}\left({c}_{\mathcal{l}}-{d}_{\mathcal{l}}\right){w}_{\mathcal{l}}\left(t\right)$ converges to zero uniformly, hence everywhere, and from Lemma 11 we have ${c}_{\mathcal{l}}={d}_{\mathcal{l}}$ for $\mathcal{l}=0,\cdots ,{2}^{k}-1$; that is, $\stackrel{^}{x}\left(t\right)={\pi }_{k}x\left(t\right)$.

Corollary 2. ${\mathcal{U}}_{k}\phi \left(t\right)={\pi }_{k}U\phi \left(t\right)$.

Lemma 13. Let $X\subset {\mathcal{C}}_{\Delta }$ be compact. Then for every $\epsilon >0$ there exists $k\in ℕ$ such that:

$‖{\pi }_{k}x-x‖<\epsilon ,\text{ }\forall x\in X.$ (67)

Proof. Let $\epsilon >0$. Since X is compact, there exist finitely many open balls of radius $\frac{\epsilon }{3}$, centered at points ${x}_{1},\cdots ,{x}_{N}\in X$, that cover X; then for every $x\in X$, $‖x-{x}_{i}‖<\frac{\epsilon }{3}$ for some $i\in \left\{1,\cdots ,N\right\}$. Let k be such that $‖{\pi }_{k}{x}_{i}-{x}_{i}‖<\frac{\epsilon }{3}$ for every $i\in \left\{1,\cdots ,N\right\}$. Let $x\in X$; then for some $i\in \left\{1,\cdots ,N\right\}$ :

$\begin{array}{c}‖{\pi }_{k}x-x‖\le ‖{\pi }_{k}x-{\pi }_{k}{x}_{i}‖+‖x-{x}_{i}‖+‖{\pi }_{k}{x}_{i}-{x}_{i}‖\\ \le ‖{\pi }_{k}‖‖x-{x}_{i}‖+‖x-{x}_{i}‖+‖{\pi }_{k}{x}_{i}-{x}_{i}‖<\epsilon .\end{array}$

Since there are finitely many ${x}_{i}$, the desired result follows immediately.

The solution of (1) will be given by:

$x\left(t\right)=X\left(t,0\right)\phi \left(0\right)+{\int }_{-\tau }^{0}X\left(t,s+\tau \right)B\left(s+\tau \right)\phi \left(s\right)\text{d}s,$ (68)

where X is a solution matrix such that $X\left(0,0\right)=I$ and $X\left(t,0\right)=0$ for $t<0$. If $\phi \in {\mathcal{C}}_{\Delta }$, then the solution $x\left(t\right)$ is continuous and the solution matrix X is the same as in $\mathcal{C}$. Furthermore, we will have a bounded operator:

$‖x\left(t\right)‖=‖T\left(\phi \left(\theta \right)\right)‖\le ‖T‖‖\phi \left(\theta \right)‖,$ (69)

and, as in $\mathcal{C}$, T maps arbitrary bounded sequences into equicontinuous sequences; then, by the Arzelá-Ascoli Theorem, T is compact in ${\mathcal{C}}_{\Delta }$.

We now have:

Theorem 14. The approximated monodromy operator ${\mathcal{U}}_{k}$ converges uniformly to the monodromy operator U.

Proof. Let $\stackrel{¯}{B}$ denote the closed unit ball in ${\mathcal{C}}_{\Delta }$. Since T is compact, the image $T\stackrel{¯}{B}$ is relatively compact, and by Lemma 13 applied to its closure we will have that for any $\epsilon >0$ there exists k such that:

$‖{\mathcal{T}}_{k}-T‖=\underset{‖x‖=1}{\mathrm{sup}}‖{\pi }_{k}Tx-Tx‖<\epsilon .$ (70)

The fact that $‖{\mathcal{U}}_{k}-U‖\to 0$ as $k\to \infty$ follows immediately.

With this we can establish convergence of the spectrum:

Theorem 15. The spectrum of the approximated monodromy operator ${\mathcal{U}}_{k}$ converges to the spectrum of the monodromy operator U. More precisely, for any open set $\mathcal{V}$ such that $\sigma \left(U\right)\subset \mathcal{V}$, there exists K such that $\sigma \left({\mathcal{U}}_{k}\right)\subset \mathcal{V}$ for any $k\ge K$. Furthermore, let ${\lambda }_{0}\in \sigma \left(U\right)$ and let $\Gamma$ be a small circle centered at ${\lambda }_{0}$ such that every other eigenvalue of U is outside $\Gamma$; then the sum of the multiplicities of the eigenvalues of ${\mathcal{U}}_{k}$ within $\Gamma$ will be equal to the multiplicity of ${\lambda }_{0}$.

Proof. Let $\Gamma$ be a small circle with center at ${\lambda }_{0}\in \sigma \left(U\right)$, such that any other eigenvalue of U is outside this circle. Then the spectral projection of ${\lambda }_{0}$ is given by:

$P=\frac{1}{2\text{π}i}{\oint }_{\Gamma }{\left(\lambda I-U\right)}^{-1}\text{d}\lambda .$ (71)

The spectral projection of the spectrum of ${\mathcal{U}}_{k}$ inside $\Gamma$ will be given by:

${P}_{k}=\frac{1}{2\text{π}i}{\oint }_{\Gamma }{\left(\lambda I-{\mathcal{U}}_{k}\right)}^{-1}\text{d}\lambda .$ (72)

Since $‖U-{\mathcal{U}}_{k}‖\to 0$, then ${P}_{k}$ converges uniformly to P, therefore, there must exist k such that part of the spectrum of ${\mathcal{U}}_{k}$ is in $\Gamma$.

Since P and ${P}_{k}$ are finite dimensional operators and since ${P}_{k}$ converges uniformly to P, we will have that for sufficiently large k:

$dim\left(P\right)=dim\left({P}_{k}\right),$ (73)

that is, the algebraic multiplicity of ${\lambda }_{0}$ will be the same as the algebraic multiplicity of the spectrum of ${\mathcal{U}}_{k}$ contained in $\Gamma$.

6. Approximation of the Solution of a Delayed Mathieu Equation

If one considers a generalization of the Mathieu differential equation to a functional differential equation, different cases arise depending on where the parametric excitation is placed  . We consider the scalar equation:

$\stackrel{¨}{x}+\left(\alpha +\beta \mathrm{cos}t\right)x=\left(\gamma \mathrm{cos}t\right)x\left(t-\tau \right).$ (74)

The operator ${\mathcal{T}}_{k}$ in (33) provides a natural way to approximate the solution of a periodic delay differential equation. Consider Equation (74) with $\alpha =5.35$, $\beta =2.5$, $\gamma =0.5$, and $\tau =\frac{\text{2π}}{128}$. Figure 3 and Figure 4 show the comparison between the solution obtained by simulation and the approximation of the solution obtained from ${\mathcal{T}}_{k}$, for $k=7$ and $k=10$ respectively, for $t\in \left[0,2\text{π}\right]$.

Figure 5 and Figure 6 show the magnitude of the error for the approximation of the solution obtained with the approximated operator ${\mathcal{T}}_{k}$, for $k=7$ and $k=10$, respectively.
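The comparison of Figures 3-6 can be reproduced in spirit with a lighter, self-contained test case: the scalar equation $\stackrel{˙}{x}\left(t\right)=ax\left(t\right)+bx\left(t-\tau \right)$ with constant coefficients and period normalized to 1. The sketch below builds the coefficients of the ${\mathcal{T}}_{k}$-approximated solution from the Section 4.1 matrices and compares them against a direct Euler simulation; the scalar constant-coefficient setting and all helper names are our assumptions, since the paper's example (74) is second order:

```python
import numpy as np

def bpf_solution(a, b, k, m, phi):
    """Block Pulse coefficients of the T_k-approximated solution of
    x'(t) = a x(t) + b x(t - tau), tau = m / 2**k, on [0, 1],
    for an initial function phi on [-tau, 0] (scalar sketch)."""
    N = 2 ** k
    h = 1.0 / N
    P = (np.eye(N) / 2 + np.triu(np.ones((N, N)), 1)) / N
    WC = np.zeros((N, N)); WC[:, -1] = 1.0
    WD = np.linalg.matrix_power(np.eye(N, k=-1), m)     # Q_-^m
    WF = np.linalg.matrix_power(np.eye(N, k=1), N - m)  # Q_+^(N-m)
    Hphi = np.zeros(N)               # phi as an element of M': last m coefficients
    for j in range(m):
        Hphi[N - m + j] = phi(-(m - j - 0.5) * h)       # midpoint samples of phi
    M = np.eye(N) - a * P.T - b * P.T @ WD
    return np.linalg.solve(M, WC @ Hphi + b * P.T @ (WF @ Hphi))

def euler_dde(a, b, tau, phi, dt=1e-4):
    """Direct Euler integration of the same equation on [0, 1]."""
    n, d = int(round(1 / dt)), int(round(tau / dt))
    x = np.zeros(n + 1)
    x[0] = phi(0.0)
    for i in range(n):
        lag = phi(i * dt - tau) if i < d else x[i - d]  # delayed value
        x[i + 1] = x[i] + dt * (a * x[i] + b * lag)
    return x
```

On a smooth problem, the coefficient on each dyadic interval tracks the simulated solution near the interval midpoint.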

Figure 3. Solution of (74) for $t\in \left[0,2\text{π}\right]$. The solid line corresponds to the simulated solution and the dashed line corresponds to the approximated solution for $k=7$.

Figure 4. Solution of (74) for $t\in \left[0,2\text{π}\right]$. The solid line corresponds to the simulated solution and the dashed line corresponds to the approximated solution for $k=10$.

Figure 5. Magnitude of error of the approximated solution corresponding to $k=7$.

Figure 6. Magnitude of error of the approximate solution corresponding to $k=10$.

7. Stability Chart of the Delayed Mathieu Equation

We will use the approximated monodromy operator to determine the stability chart of a particular case of the delayed Mathieu equation:

$\stackrel{¨}{x}\left(t\right)+\left(\alpha +\beta \mathrm{cos}\left(t\right)\right)x\left(t\right)=\gamma \mathrm{cos}\left(t\right)x\left(t-\tau \right).$ (75)

Stability charts are used to determine the stability of periodic differential equations with respect to some parameters. Figure 7 shows the stability diagram of (75) for the plane $\alpha \beta$ with $\tau =\frac{\text{2π}}{32}$ and $\gamma =1.5$. For this equation the stability zones are disconnected and there is no symmetry with respect to the horizontal axis, contrary to the case of the undelayed Mathieu equation.

Now consider the equation:

$\stackrel{¨}{x}\left(t\right)+\left[\alpha +\beta \mathrm{cos}\left(2\text{π}t\right)\right]x\left(t\right)=\delta x\left(t-\tau \right).$ (76)

This equation has been studied in  . Figure 8 shows the stability diagram for the parametric plane $\alpha \beta$ with $\delta =0.15$ and $\tau =\frac{\text{2π}}{16}$.
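A stability chart like Figure 7 can be sketched by assembling (35) for the first-order form of (75) and scanning the spectral radius over the parameter plane. In the sketch below, time is rescaled so that the period $2\text{π}$ maps to $\left[0,1\right]$ (the delay must then span an integer number m of dyadic subintervals), and the diagonal entries of $\stackrel{¯}{\Lambda }$ are taken at subinterval midpoints rather than as the exact integrals in (43); these choices and all helper names are assumptions:

```python
import numpy as np

def monodromy_bpf(Afun, Bfun, omega, tau, k):
    """Sketch of U_k from (35) for x'(t) = A(t)x(t) + B(t)x(t - tau)
    with period omega, assembled from the Block Pulse matrices of
    Section 4.1 after rescaling time to [0, 1]."""
    N = 2 ** k
    m = int(round(tau / omega * N))       # delay spans m dyadic subintervals
    n = Afun(0.0).shape[0]
    P = (np.eye(N) / 2 + np.triu(np.ones((N, N)), 1)) / N
    WC = np.zeros((N, N)); WC[:, -1] = 1.0
    WP = np.zeros((N, N)); WP[N - m:, N - m:] = np.eye(m)
    WD = np.linalg.matrix_power(np.eye(N, k=-1), m)     # Q_-^m
    WF = np.linalg.matrix_power(np.eye(N, k=1), N - m)  # Q_+^(N-m)
    mids = (np.arange(N) + 0.5) / N
    def lam(F):  # block matrix of diagonal blocks; omega is the rescaling Jacobian
        return np.block([[np.diag([omega * F(omega * s)[i, j] for s in mids])
                          for j in range(n)] for i in range(n)])
    In, Pb = np.eye(n), np.kron(np.eye(n), P.T)
    LA, LB = lam(Afun), lam(Bfun)
    M = np.eye(n * N) - Pb @ LA - Pb @ LB @ np.kron(In, WD)
    R = np.kron(In, WC) + Pb @ LB @ np.kron(In, WF)
    return np.kron(In, WP) @ np.linalg.solve(M, R)

def rho(alpha, beta, gamma, tau=2 * np.pi / 32, k=7):
    """Spectral radius of U_k for the delayed Mathieu Equation (75)."""
    A = lambda t: np.array([[0.0, 1.0], [-(alpha + beta * np.cos(t)), 0.0]])
    B = lambda t: np.array([[0.0, 0.0], [gamma * np.cos(t), 0.0]])
    return max(abs(np.linalg.eigvals(monodromy_bpf(A, B, 2 * np.pi, tau, k))))
```

Shading the points of an $\left(\alpha ,\beta \right)$ grid where `rho(alpha, beta, 1.5) <= 1` produces a diagram in the spirit of Figure 7.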

Figure 7. Stability diagram for the parametric plane αβ of Equation (75). Asymptotically stable zones (grey) and unstable zones (white) are shown.

Figure 8. Stability diagram for the parametric plane αβ of Equation (76). Asymptotically stable zones (grey) and unstable zones (white) are shown.

8. Conclusion

The use of Walsh functions provides the finite dimensional approximation (35) of the monodromy operator. This approximation, and the analysis leading to it, are virtually the same for any piecewise constant orthonormal basis that can be formed by linear combinations of Walsh functions, such as Block Pulse Functions, and the rate of decay of the error is maintained regardless of the orthonormal set used. In particular, Block Pulse Functions yield a computationally inexpensive method that is useful for obtaining stability diagrams, owing to the simplicity of the functions and the sparse structure of the matrices involved. Implementation of the algorithm is straightforward: it is only necessary to compute the matrices (42) corresponding to the matrices $A\left(t\right)$ and $B\left(t\right)$ in (1), and to substitute the remaining matrices of Section 4.1. Furthermore, the approximated monodromy operator might be used to provide insight into the nature of the PDDE, especially for a second order equation whose solution space is confined to a two dimensional vector space, since a similar Block Pulse Function approximation has been used to analytically prove properties of periodic ordinary differential equations. A downside of the proposed method, however, is its rate of convergence, which in certain cases is slower than that of Fourier-based approximations, and is certainly slower than that of approximations with, for example, Chebyshev polynomials.

Fund

This work was supported by CONACyT, Mexico, CVU 418725.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Vazquez, E.A. and Collado, J. (2018) Finite Dimensional Approximation of the Monodromy Operator of a Periodic Delay Differential Equation with Piecewise Constant Orthonormal Functions. Applied Mathematics, 9, 1315-1337. https://doi.org/10.4236/am.2018.911086

References

1. Kolmanovskii, V. and Myshkis, A. (2013) Introduction to the Theory and Applications of Functional Differential Equations. Springer Science & Business Media, Vol. 463.

2. Cui, H.-Y., Han, Z.-J. and Xu, G.-Q. (2016) Stabilization for Schrödinger Equation with a Time Delay in the Boundary Input. Applicable Analysis, 95, 963-977. https://doi.org/10.1080/00036811.2015.1047830

3. Srividhya, J. and Gopinathan, M. (2006) A Simple Time Delay Model for Eukaryotic Cell Cycle. Journal of Theoretical Biology, 241, 617-627. https://doi.org/10.1016/j.jtbi.2005.12.020

4. Datko, R., Lagnese, J. and Polis, M. (1986) An Example on the Effect of Time Delays in Boundary Feedback Stabilization of Wave Equations. SIAM Journal on Control and Optimization, 24, 152-156. https://doi.org/10.1137/0324007

5. Datko, R. and You, Y. (1991) Some Second-Order Vibrating Systems Cannot Tolerate Small Time Delays in Their Damping. Journal of Optimization Theory and Applications, 70, 521-537. https://doi.org/10.1007/BF00941300

6. Gilsinn, D.E., Potra, F.A., et al. (2006) Integral Operators and Delay Differential Equations. Journal of Integral Equations and Applications, 18, 297-336. https://doi.org/10.1216/jiea/1181075393

7. Breda, D., Maset, S. and Vermiglio, R. (2014) Pseudospectral Methods for Stability Analysis of Delayed Dynamical Systems. International Journal of Dynamics and Control, 2, 143-153. https://doi.org/10.1007/s40435-013-0041-x

8. Insperger, T. and Stepan, G. (2002) Semi-Discretization Method for Delayed Systems. International Journal for Numerical Methods in Engineering, 55, 503-518. https://doi.org/10.1002/nme.505

9. Vazquez, E.A. and Collado, J. (2016) Monodromy Operator Approximation of Periodic Delay Differential Equations by Walsh Functions. 13th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), IEEE.

10. Chen, W.-L. and Shih, Y.-P. (1978) Shift Walsh Matrix and Delay-Differential Equations. IEEE Transactions on Automatic Control, 23, 1023-1028. https://doi.org/10.1109/TAC.1978.1101888

11. Kreyszig, E. (1978) Introductory Functional Analysis with Applications, Ser. Wiley Classics Library. John Wiley & Sons.

12. Stokes, A. (1962) A Floquet Theory for Functional Differential Equations. Proceedings of the National Academy of Sciences of the United States of America, 48, 1330. https://doi.org/10.1073/pnas.48.8.1330

13. Halanay, A. (1966) Differential Equations: Stability, Oscillations, Time Lags. Academic Press, 23.

14. Yakubovich, V.A. and Starzhinski, V.M. (1975) Linear Differential Equations with Periodic Coefficients. Vol. 1, John Wiley, New York.

15. Magnus, W. and Winkler, S. (2004) Hill’s Equation, Ser. Dover Books on Mathematics Series. Dover Publications.

16. Walsh, J.L. (1923) A Closed Set of Normal Orthogonal Functions. American Journal of Mathematics, 45, 5-24. https://doi.org/10.2307/2387224

17. Golubov, B., Efimov, A. and Skvortsov, V. (2012) Walsh Series and Transforms: Theory and Applications. Springer Science & Business Media, Berlin, Vol. 64.

18. Ahmed, N. and Rao, K.R. (2012) Orthogonal Transforms for Digital Signal Processing. Springer Science & Business Media, Berlin.

19. Karanam, V., Frick, P. and Mohler, R. (1978) Bilinear System Identification by Walsh Functions. IEEE Transactions on Automatic Control, 23, 709-713. https://doi.org/10.1109/TAC.1978.1101806

20. Gát, G. and Toledo, R. (2015) Estimating the Error of the Numerical Solution of Linear Differential Equations with Constant Coefficients via Walsh Polynomials. Acta Mathematica Academiae Paedagogicae Nyíregyháziensis, 31, 309-330.

21. Fine, N.J. (1949) On the Walsh Functions. Transactions of the American Mathematical Society, 65, 372-414. https://doi.org/10.1090/S0002-9947-1949-0032833-2

22. Kwong, C. and Chen, C. (1981) The Convergence Properties of Block-Pulse Series. International Journal of Systems Science, 12, 745-751. https://doi.org/10.1080/00207728108963780

23. Rao, G. and Srinivasan, T. (1978) Remarks on “Author’s Reply” to “Comments on Design of Piecewise Constant Gains for Optimal Control via Walsh Functions”. IEEE Transactions on Automatic Control, 23, 762-763. https://doi.org/10.1109/TAC.1978.1101782

24. Cheng, C., Tsay, Y. and Wu, T. (1977) Walsh Operational Matrices for Fractional Calculus and Their Application to Distributed Systems. Journal of the Franklin Institute, 303, 267-284. https://doi.org/10.1016/0016-0032(77)90029-1

25. Gulamhusein, M. (1973) Simple Matrix-Theory Proof of the Discrete Dyadic Convolution Theorem. Electronics Letters, 9, 238-239. https://doi.org/10.1049/el:19730172

26. Hale, J.K. and Lunel, S.M.V. (2013) Introduction to Functional Differential Equations. Springer Science & Business Media, Berlin, Vol. 99.

27. Deb, A., Sarkar, G. and Sen, S.K. (1994) Block Pulse Functions, the Most Fundamental of All Piecewise Constant Basis Functions. International Journal of Systems Science, 25, 351-363. https://doi.org/10.1080/00207729408928964

28. Norkin, S. (1973) Introduction to the Theory and Application of Differential Equations with Deviating Arguments. Academic Press, Cambridge, Vol. 105. https://doi.org/10.1016/S0076-5392(08)62969-0

29. Franco, C. and Collado, J. (2017) A Novel Discriminant Approximation of Periodic Differential Equations.