1

 Hilbert Space
 H_2 and H_∞ Functions
 State-Space Computation of H_2 and H_∞ Norms

2

 Inner Product: Let V be a vector space over C. An inner product on V is a complex-valued function ⟨·,·⟩ : V × V → C such that for any x, y, z ∈ V and α, β ∈ C:
 (i) ⟨x, αy + βz⟩ = α⟨x, y⟩ + β⟨x, z⟩
 (ii) ⟨x, y⟩ = ⟨y, x⟩* (complex conjugate)
 (iii) ⟨x, x⟩ > 0 if x ≠ 0.
 Inner product on C^n: ⟨x, y⟩ := x*y = Σ_{i=1}^n x̄_i y_i.
 x and y are orthogonal if ∠(x, y) = π/2.

3

 A vector space V with an inner product is called an inner product space.
 Inner product induced norm: ‖x‖ := √⟨x, x⟩.
 Distance between vectors x and y: d(x, y) = ‖x − y‖.
 Two vectors x and y are orthogonal if ⟨x, y⟩ = 0, denoted x ⊥ y.
 Properties of Inner Product:
 |⟨x, y⟩| ≤ ‖x‖ ‖y‖ (Cauchy-Schwarz inequality). Equality holds iff x = αy for some constant α or y = 0.
 ‖x + y‖^2 + ‖x − y‖^2 = 2‖x‖^2 + 2‖y‖^2 (parallelogram law).
 ‖x + y‖^2 = ‖x‖^2 + ‖y‖^2 if x ⊥ y.
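The properties above can be checked numerically. A small sketch using NumPy's `vdot` (which conjugates its first argument, matching axiom (ii)); the vectors are arbitrary test data:

```python
# Numeric check of Cauchy-Schwarz, the parallelogram law, and the
# Pythagorean identity for the standard inner product <x, y> = x*y on C^n.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

norm = lambda v: np.sqrt(np.vdot(v, v).real)  # ||v|| = sqrt(<v, v>)

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
cs_holds = abs(np.vdot(x, y)) <= norm(x) * norm(y)

# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
lhs = norm(x + y) ** 2 + norm(x - y) ** 2
rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2

# Pythagoras: project out x from y to get y_perp with <x, y_perp> = 0,
# then ||x + y_perp||^2 = ||x||^2 + ||y_perp||^2
y_perp = y - (np.vdot(x, y) / np.vdot(x, x)) * x
pyth_gap = abs(norm(x + y_perp) ** 2 - (norm(x) ** 2 + norm(y_perp) ** 2))
```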

4

 Hilbert Space: a complete inner product space. (We shall not discuss the completeness here.)
 Examples:
 C^n with the usual inner product.
 C^{n×m} with the inner product ⟨A, B⟩ := Trace(A*B), ∀ A, B ∈ C^{n×m}.
 L_2[a, b]: all square-integrable, Lebesgue-measurable functions defined on an interval [a, b] with the inner product ⟨f, g⟩ := ∫_a^b f(t)*g(t) dt; matrix form: ⟨f, g⟩ := ∫_a^b Trace[f(t)*g(t)] dt.
 L_2 = L_2(−∞, ∞): ⟨f, g⟩ := ∫_{−∞}^{∞} Trace[f(t)*g(t)] dt.
 L_{2+} = L_2[0, ∞): subspace of L_2(−∞, ∞).
 L_{2−} = L_2(−∞, 0]: subspace of L_2(−∞, ∞).

5

 Let S ⊂ C be an open set, and let f(s) be a complex-valued function defined on S, f(s) : S → C. Then f(s) is analytic at a point z_0 in S if it is differentiable at z_0 and also at each point in some neighborhood of z_0.
 It is a fact that if f(s) is analytic at z_0, then f has continuous derivatives of all orders at z_0. Hence, it has a power series representation at z_0.
 A function f(s) is said to be analytic in S if it is analytic at each point of S.
 Maximum Modulus Theorem: If f(s) is defined and continuous on a closed and bounded set S and analytic on the interior of S, then
 max_{s∈S} |f(s)| = max_{s∈∂S} |f(s)|,
 where ∂S denotes the boundary of S.
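The theorem can be illustrated numerically. A sketch for f(s) = 1/(s + 2), which is analytic on the closed unit disk (the function and sampling grids are assumptions for illustration):

```python
# Maximum modulus illustration: sample |f| on the interior and on the
# boundary of the unit disk; the maximum is attained on the boundary,
# at s = -1 where |f| = 1/|(-1) + 2| = 1.
import numpy as np

f = lambda s: 1.0 / (s + 2.0)

r = np.linspace(0.0, 0.95, 80)          # interior radii
phi = np.linspace(0.0, 2 * np.pi, 400)  # angles
R, PHI = np.meshgrid(r, phi)
interior = R * np.exp(1j * PHI)
boundary = np.exp(1j * phi)

max_interior = np.abs(f(interior)).max()
max_boundary = np.abs(f(boundary)).max()
```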

6

 L_2(jR) Space: all complex matrix functions F such that the integral below is bounded:
 ∫_{−∞}^{∞} Trace[F*(jω)F(jω)] dω < ∞,
 with the inner product
 ⟨F, G⟩ := (1/2π) ∫_{−∞}^{∞} Trace[F*(jω)G(jω)] dω,
 and the inner product induced norm is given by ‖F‖_2 := √⟨F, F⟩.
 RL_2(jR) or simply RL_2: all real rational strictly proper transfer matrices with no poles on the imaginary axis.

7

 H_2 Space: a (closed) subspace of L_2(jR) with functions F(s) analytic in Re(s) > 0.
 RH_2 (real rational subspace of H_2): all strictly proper and real rational stable transfer matrices.
 H_2^⊥ Space: the orthogonal complement of H_2 in L_2, i.e., the (closed) subspace of functions in L_2 that are analytic in Re(s) < 0.
 RH_2^⊥ (real rational subspace of H_2^⊥): all strictly proper rational antistable transfer matrices.
 Parseval's relations (between time domain and frequency domain): ‖G‖_2 = ‖g‖_2 and ⟨F, G⟩ = ⟨f, g⟩, where g(t) = L^{−1}[G(s)] and f(t) = L^{−1}[F(s)].
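Parseval's relation can be checked numerically on a scalar example, say g(t) = e^{−t} (t ≥ 0) with G(s) = 1/(s + 1); a sketch assuming SciPy is available:

```python
# Both squared 2-norms equal 1/2:
#   time domain:      int_0^inf e^{-2t} dt = 1/2
#   frequency domain: (1/2pi) int |1/(1+jw)|^2 dw = (1/2pi) int dw/(1+w^2)
import numpy as np
from scipy.integrate import quad

time_sq, _ = quad(lambda t: np.exp(-2 * t), 0, np.inf)

freq_sq, _ = quad(lambda w: 1.0 / (1.0 + w**2), -np.inf, np.inf)
freq_sq /= 2 * np.pi
```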

8

 L_∞(jR) Space: L_∞(jR) or simply L_∞ is a Banach space of matrix-valued (or scalar-valued) functions that are (essentially) bounded on jR, with norm
 ‖F‖_∞ := ess sup_{ω∈R} σ̄[F(jω)].
 RL_∞(jR) or simply RL_∞: all proper and real rational transfer matrices with no poles on the imaginary axis.
 H_∞ Space: H_∞ is a (closed) subspace of L_∞ with functions that are analytic and bounded in the open right-half plane. The H_∞ norm is defined as
 ‖F‖_∞ := sup_{Re(s)>0} σ̄[F(s)] = sup_{ω∈R} σ̄[F(jω)].
 The second equality can be regarded as a generalization of the maximum modulus theorem for matrix functions. See Boyd and Desoer [1985] for a proof.
 RH_∞: all proper and real rational stable transfer matrices.

9

 H_∞^− Space: H_∞^− is a (closed) subspace of L_∞ with functions that are analytic and bounded in the open left-half plane. The H_∞^− norm is defined as
 ‖F‖_∞ := sup_{Re(s)<0} σ̄[F(s)] = sup_{ω∈R} σ̄[F(jω)].
 RH_∞^−: all proper, real rational antistable transfer matrices.
 Examples (h > 0):
 H_2 functions: 1/(s+1), e^{−hs}/(s+2), …
 H_∞ functions: 5, 1/(s+1), (5s+1)/(s+2), e^{−hs}/(s+2), 1/(s+1) + 0.1e^{−hs}, …
 L_∞ functions: 5, 1/(s+1), 1/[(s+1)(s−2)], 1/(s−1) + 0.1e^{−hs}, …

10

 Let G(s) be a p×q transfer matrix. Then a multiplication operator is defined as M_G : L_2 → L_2, M_G f = Gf.
 Then ‖M_G‖ = ‖G‖_∞.
 Proof: It is clear that ‖G‖_∞ is an upper bound:
 ‖Gf‖_2^2 = (1/2π) ∫_{−∞}^{∞} ‖G(jω)f(jω)‖^2 dω ≤ ‖G‖_∞^2 ‖f‖_2^2.
 To show that ‖G‖_∞ is the least upper bound, first choose a frequency ω_0 where σ̄[G(jω)] is maximum, i.e.,
 σ̄[G(jω_0)] = ‖G‖_∞,

11

 and denote the singular value decomposition of G(jω_0) by
 G(jω_0) = Σ_{i=1}^r σ_i u_i v_i*,  σ_1 = σ̄[G(jω_0)] ≥ σ_2 ≥ … ≥ σ_r,
 where r is the rank of G(jω_0) and u_i, v_i have unit length.
 If ω_0 < ∞, write v_1(jω_0) as
 v_1 = [α_1 e^{jθ_1}, α_2 e^{jθ_2}, …, α_q e^{jθ_q}]^T,
 where α_i ∈ R is such that θ_i ∈ (−π, 0]. Now let 0 ≤ β_i ≤ ∞ be such that
 θ_i = ∠[(β_i − jω_0)/(β_i + jω_0)]
 (with β_i = ∞ if θ_i = 0), and let f be given by

12

 f(s) = [α_1 (β_1 − s)/(β_1 + s), …, α_q (β_q − s)/(β_q + s)]^T f̂(s)
 (with 1 replacing (β_i − s)/(β_i + s) if θ_i = 0), where the scalar function f̂ is chosen so that
 |f̂(jω)| = c if |ω − ω_0| < ε or |ω + ω_0| < ε, and f̂(jω) = 0 otherwise,
 where ε is a small positive number and c is chosen so that f̂ has unit 2-norm, i.e., c = √(π/(2ε)). This in turn implies that f has unit 2-norm.
 Similarly, if ω_0 = ∞, the conclusion follows by letting ω_0 → ∞ in the above.

13

 Let G(s) ∈ L_2 and g(t) = L^{−1}[G(s)]. Then ‖G‖_2 = ‖g‖_2.
 Consider G(s) = C(sI − A)^{−1}B ∈ RH_2. Then we have
 ‖G(s)‖_2^2 = trace(B*L_o B) = trace(C L_c C*),
 where L_o and L_c are the observability and controllability Gramians:
 A L_c + L_c A* + BB* = 0,  A* L_o + L_o A + C*C = 0.
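The two Gramian formulas can be sketched with SciPy's Lyapunov solver; the state-space data below are assumed examples (for the scalar case G(s) = 1/(s+1) the exact answer is ‖G‖_2^2 = 1/2, and for G(s) = 1/((s+1)(s+2)) it is 1/12):

```python
# H_2 norm of G(s) = C(sI - A)^{-1} B via the controllability and
# observability Gramians. solve_continuous_lyapunov(a, q) solves
# a X + X a^H = q, so the Gramian equations use q = -BB* and q = -C*C.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, B, C):
    Lc = solve_continuous_lyapunov(A, -B @ B.conj().T)          # A Lc + Lc A* + BB* = 0
    Lo = solve_continuous_lyapunov(A.conj().T, -C.conj().T @ C) # A* Lo + Lo A + C*C = 0
    via_Lo = np.trace(B.conj().T @ Lo @ B).real
    via_Lc = np.trace(C @ Lc @ C.conj().T).real
    return via_Lo, via_Lc

# Scalar sanity check: G(s) = 1/(s+1), ||G||_2^2 = 1/2
s1, s2 = h2_norm_sq(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))

# Two-state example: G(s) = 1/((s+1)(s+2)), ||G||_2^2 = 1/12
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n1, n2 = h2_norm_sq(A, B, C)
```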

14

 Proof: Note that g(t) = L^{−1}[G(s)] = Ce^{At}B for t ≥ 0, and g(t) = 0 for t < 0. Then
 ‖G‖_2^2 = ‖g‖_2^2 = ∫_0^∞ trace[B*e^{A*t}C*Ce^{At}B] dt = trace[B*(∫_0^∞ e^{A*t}C*Ce^{At} dt)B] = trace(B*L_o B).

15

 Hypothetical input-output experiments:
 Apply the impulsive input δ(t)e_i (δ(t) is the unit impulse and e_i is the i-th standard basis vector) and denote the output by z_i(t) (= g(t)e_i). Then z_i ∈ L_{2+} (assuming D = 0) and
 ‖G‖_2^2 = Σ_{i=1}^q ‖z_i‖_2^2.
 This procedure can also be used for nonlinear time-varying systems.
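The impulse experiment can be simulated directly: apply an impulse per input channel, giving z_i(t) = Ce^{At}Be_i, and integrate Σ_i ‖z_i‖_2^2 numerically. The system below is an assumed example, G(s) = 1/((s+1)(s+2)), whose exact squared H_2 norm is 1/12:

```python
# Simulated impulse experiment: ||G||_2^2 ~ sum_i int_0^T |z_i(t)|^2 dt,
# with z_i(t) = C e^{At} B e_i evaluated via the eigendecomposition of A.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

evals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

t = np.linspace(0.0, 20.0, 20001)
dt = t[1] - t[0]

total = 0.0
for i in range(B.shape[1]):  # one impulse per input channel
    zi = np.array([(C @ V @ np.diag(np.exp(evals * tk)) @ Vinv @ B[:, i]).real
                   for tk in t]).ravel()
    total += np.sum((zi[:-1] ** 2 + zi[1:] ** 2) / 2) * dt  # trapezoid rule
```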

16

 Example: Consider a transfer matrix G with stable part G_s and antistable part G_u, i.e., G = G_s + G_u.
 Then the command h2norm(G_s) gives ‖G_s‖_2 = 0.6055 and h2norm(cjt(G_u)) gives ‖G_u‖_2 = 3.182. Hence, by the orthogonality of H_2 and H_2^⊥,
 ‖G‖_2 = √(‖G_s‖_2^2 + ‖G_u‖_2^2) ≈ 3.239.
 >> P = gram(A,B); Q = gram(A',C'); or P = lyap(A,B*B');
 >> [Gs,Gu] = sdecomp(G); % decompose into stable and antistable parts.

17

 Rational Functions: Let G(s) ∈ RL_∞. Then ‖G‖_∞ = sup_ω σ̄[G(jω)]:
 the farthest distance the Nyquist plot of G is from the origin;
 the peak on the Bode magnitude plot.
 Estimation: set up a fine grid of frequency points {ω_1, …, ω_N}; then ‖G‖_∞ ≈ max_{1≤k≤N} σ̄[G(jω_k)].
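A minimal sketch of the gridding estimate, assuming a simple 1×2 example transfer matrix whose supremum is reached at ω = 0 (where σ̄ = √(1^2 + 0.25^2)):

```python
# Gridding estimate of ||G||_inf: evaluate sigma_max[G(jw)] on a
# frequency grid and take the maximum. For a 1x2 row, sigma_max is the
# Euclidean norm of the row, which here decreases monotonically in w.
import numpy as np

def G(w):
    s = 1j * w
    return np.array([[1.0 / (s + 1.0), 0.5 / (s + 2.0)]])

grid = np.concatenate(([0.0], np.logspace(-2, 2, 400)))
est = max(np.linalg.svd(G(w), compute_uv=False)[0] for w in grid)
```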

18

 Characterization: Let γ > 0 and G(s) = C(sI − A)^{−1}B + D ∈ RL_∞, where A has no imaginary-axis eigenvalues. Then ‖G‖_∞ < γ if and only if σ̄(D) < γ and the Hamiltonian matrix H has no eigenvalues on the imaginary axis, where
 H := [ A + BR^{−1}D*C        BR^{−1}B*
        −C*(I + DR^{−1}D*)C   −(A + BR^{−1}D*C)* ]
 and R = γ^2 I − D*D.
 Proof: Let F(s) = γ^2 I − G~(s)G(s).
 Then ‖G‖_∞ < γ ⇔ F(jω) > 0, ∀ ω ∈ R ∪ {∞} ⇔ det F(jω) ≠ 0, ∀ ω ∈ R, since F(∞) = R > 0 and F(jω) is continuous ⇔ F(s) has no imaginary-axis zero ⇔ F^{−1}(s) has no imaginary-axis pole
 ⇔ H has no jω-axis eigenvalues, provided the realization of F^{−1}(s) has neither uncontrollable modes nor unobservable modes on the imaginary axis.

19

 We now show that the above realization for F^{−1}(s) indeed has neither uncontrollable modes nor unobservable modes on the imaginary axis.
 Assume that jω_0 is an eigenvalue of H but not a pole of F^{−1}(s). Then jω_0 must be either an unobservable mode of ([R^{−1}D*C  R^{−1}B*], H) or an uncontrollable mode of (H, [BR^{−1}; −C*DR^{−1}]).
 Suppose jω_0 is an unobservable mode of ([R^{−1}D*C  R^{−1}B*], H). Then there exists an x_0 = [x_1; x_2] ≠ 0 such that
 Hx_0 = jω_0 x_0,  [R^{−1}D*C  R^{−1}B*]x_0 = 0,
 which is equivalent to
 (jω_0 I − A)x_1 = 0,  (jω_0 I + A*)x_2 = −C*Cx_1,  D*Cx_1 + B*x_2 = 0.
 Since A has no imaginary-axis eigenvalues, we have x_1 = 0 and x_2 = 0. Contradiction!
 Similarly, a contradiction is also arrived at if jω_0 is assumed to be an uncontrollable mode of (H, [BR^{−1}; −C*DR^{−1}]).

20

 Bisection algorithm:
 (a) Select an upper bound γ_u and a lower bound γ_l such that γ_l ≤ ‖G‖_∞ ≤ γ_u.
 (b) If (γ_u − γ_l)/γ_l ≤ specified level, stop; ‖G‖_∞ ≈ (γ_u + γ_l)/2. Otherwise go to the next step.
 (c) Set γ = (γ_l + γ_u)/2.
 (d) Test whether ‖G‖_∞ < γ by calculating the eigenvalues of H for this γ.
 (e) If H has an eigenvalue on jR, set γ_l = γ; otherwise set γ_u = γ. Go back to step (b).
 In all subsequent discussions, WLOG we can assume γ = 1 by a suitable scaling, since ‖G‖_∞ < γ ⇔ ‖γ^{−1}G‖_∞ < 1.
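The bisection steps above can be sketched for the special case D = 0 (so R = γ^2 I and H simplifies); the tolerance, the imaginary-axis test threshold, and the scalar test system G(s) = 1/(s+1) (whose H_∞ norm is 1, attained at ω = 0) are assumptions of this sketch:

```python
# Bisection on gamma: ||G||_inf < gamma iff the Hamiltonian
# H = [A, BB*/gamma^2; -C*C, -A*] has no imaginary-axis eigenvalues.
import numpy as np

def hinf_bisect(A, B, C, gl, gu, rel_tol=1e-6):
    # assumes gl <= ||G||_inf <= gu on entry
    while (gu - gl) / gl > rel_tol:
        g = 0.5 * (gl + gu)
        H = np.block([[A, (B @ B.conj().T) / g**2],
                      [-C.conj().T @ C, -A.conj().T]])
        eigs = np.linalg.eigvals(H)
        scale = max(1.0, np.abs(eigs).max())
        if np.any(np.abs(eigs.real) < 1e-8 * scale):  # eigenvalue on jR
            gl = g        # gamma <= ||G||_inf
        else:
            gu = g        # ||G||_inf < gamma
    return 0.5 * (gl + gu)

norm_est = hinf_bisect(np.array([[-1.0]]), np.array([[1.0]]),
                       np.array([[1.0]]), gl=0.5, gu=2.0)
```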

21

 Estimating the H_∞ norm experimentally: the H_∞ norm is the maximum magnitude of the steady-state response to all possible unit-amplitude sinusoidal input signals.
 Scalar case: for u(t) = sin(ωt), the steady-state response is z(t) = |G(jω)| sin(ωt + ∠G(jω)).
 In general, let the sinusoidal input be u(t) = [u_1 sin(ωt + φ_1), …, u_q sin(ωt + φ_q)]^T. Then the steady-state response of the system can be written as
 z(t) = [z_1 sin(ωt + ψ_1), …, z_p sin(ωt + ψ_p)]^T
 for some z_i, ψ_i, i = 1, 2, …, p, and furthermore,
 ‖G‖_∞ = sup_{ω, u_i, φ_i} ‖[z_1, …, z_p]‖ / ‖[u_1, …, u_q]‖,
 where ‖·‖ is the Euclidean norm.
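The scalar experiment can be simulated. A sketch using SciPy's `lsim` for the assumed example G(s) = 1/(s+1) driven by sin(ωt) with ω = 1, where the steady-state amplitude should be |G(j1)| = 1/√2:

```python
# Sinusoidal steady-state experiment: simulate long enough for the
# transient to die out, then read the amplitude over the last period.
import numpy as np
from scipy.signal import lti, lsim

w = 1.0
sys = lti([1.0], [1.0, 1.0])            # G(s) = 1/(s+1)
t = np.linspace(0.0, 40.0, 40001)
u = np.sin(w * t)
_, z, _ = lsim(sys, U=u, T=t)

period = 2 * np.pi / w
amp = np.abs(z[t >= t[-1] - period]).max()
expected = 1.0 / np.sqrt(1.0 + w**2)    # |G(jw)| at w = 1
```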

22

 Consider a mass/spring/damper system as shown in Figure 4.2.
 The dynamical system can be described by the following differential
equations:

23

 Suppose that G(s) is the transfer matrix from (F_1, F_2) to (x_1, x_2), and suppose k_1 = 1, k_2 = 4, b_1 = 0.2, b_2 = 0.1, m_1 = 1, and m_2 = 2 with appropriate units.
 >> G=pck(A,B,C,D);
 >> hinfnorm(G,0.0001) or linfnorm(G,0.0001) % relative error <= 0.0001
 >> w=logspace(-1,1,200); % 200 points between 0.1=10^{-1} and 10=10^{1}
 >> Gf=frsp(G,w); % compute the frequency response
 >> [u,s,v]=vsvd(Gf); % SVD at each frequency
 >> vplot('liv,lm',s), grid % plot all singular values versus frequency
 ‖G‖_∞ = 11.47 = the peak of the largest singular value Bode plot in Figure 4.3.

24


25

 Since the peak is achieved at ω_max = 0.8483, exciting the system with a unit sinusoidal input at this frequency (aligned with the corresponding input singular direction) gives a steady-state response whose amplitude is ‖G‖_∞ times that of the input.
 This shows that the system response will be amplified 11.47 times for an input signal at the frequency ω_max, which could be undesirable if F_1 and F_2 are disturbance forces and x_1 and x_2 are positions to be kept steady.

26

 Example 2: Consider a two-by-two transfer matrix
 G(s) = [ 10(s+1)/(s^2+0.2s+100)   1/(s+1)
          (s+2)/(s^2+0.1s+10)      5(s+1)/((s+2)(s+3)) ]
 A state-space realization of G can be obtained by using the following MATLAB commands:
 >> G11=nd2sys([10,10],[1,0.2,100]);
 >> G12=nd2sys(1,[1,1]);
 >> G21=nd2sys([1,2],[1,0.1,10]);
 >> G22=nd2sys([5,5],[1,5,6]);
 >> G=sbs(abv(G11,G21),abv(G12,G22));
 Next, we set up a frequency grid to compute the frequency response of G and the singular values of G(jω) over a suitable range of frequencies.
 >> w=logspace(0,2,200); % 200 points between 1=10^0 and 100=10^2
 >> Gf=frsp(G,w); % compute the frequency response

27

 >> [u,s,v]=vsvd(Gf); % SVD at each frequency
 >> vplot('liv,lm',s), grid % plot all singular values versus frequency
 >> pkvnorm(s) % find the norm from the frequency response of the singular values
 The singular values of G(jω) are plotted in Figure 4.4, which gives an estimate of ‖G‖_∞ ≈ 32.861. The state-space bisection algorithm described previously leads to ‖G‖_∞ = 50.25 ± 0.01, and the corresponding MATLAB command is
 >> hinfnorm(G,0.0001) or linfnorm(G,0.0001) % relative error <= 0.0001
 The preceding computational results show clearly that the graphical method can lead to a wrong answer for a lightly damped system if the frequency grid is not sufficiently dense. Indeed, we would get ‖G‖_∞ ≈ 43.525, 48.286, and 49.737 from the graphical method if 400, 800, and 1600 frequency points are used, respectively.
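The gridding pitfall can be reproduced with plain NumPy, evaluating the same 2×2 G(s) directly (the dense-grid window around the lightly damped resonance near ω = 10 is an assumption of this sketch):

```python
# Coarse 200-point grid logspace(0, 2, 200) straddles the resonance and
# underestimates the peak (~32.9), while a dense grid around w = 10
# recovers a value close to the bisection answer 50.25.
import numpy as np

def G(w):
    s = 1j * w
    return np.array([
        [(10 * s + 10) / (s**2 + 0.2 * s + 100), 1.0 / (s + 1)],
        [(s + 2) / (s**2 + 0.1 * s + 10), (5 * s + 5) / (s**2 + 5 * s + 6)],
    ])

sv_max = lambda w: np.linalg.svd(G(w), compute_uv=False)[0]

coarse = max(sv_max(w) for w in np.logspace(0, 2, 200))
dense = max(sv_max(w) for w in np.linspace(9.9, 10.1, 2001))
```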

28

