EEC 643/743/ESC794 Homework 5 Solved


1. Suppose there exists a limit cycle in the following closed-loop system. The reference input is r = 0. The nonlinearity is represented by f(u), where f(·) is a sign function (f(u) = 1 for u > 0 and f(u) = −1 for u < 0). The transfer function of the linear part is
$$G(s) = \frac{K}{s(s+1)(s+2)}.$$
Determine the frequency, magnitude, and stability of the self-sustained oscillation caused by the limit cycle. Use Simulink to plot the phase portrait of the system and the time response y.
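A quick numerical check of the describing-function prediction can complement the Simulink study. The sketch below is a minimal calculation, assuming a generic gain K in the numerator of G(s); it uses the relay describing function N(A) = 4/(πA) and the harmonic-balance condition 1 + N(A)G(jω) = 0.

```python
import numpy as np
from scipy.optimize import brentq

K = 1.0                                    # placeholder numerator gain of G(s)

def G(w):
    s = 1j * w
    return K / (s * (s + 1) * (s + 2))

# phase crossover: Im G(jw) = 0 with Re G < 0, i.e. angle(G) = -180 degrees
w_osc = brentq(lambda w: np.imag(G(w)), 0.5, 5.0)     # = sqrt(2) rad/s
A_osc = 4 * abs(G(w_osc)) / np.pi                     # from N(A)|G(jw)| = 1 with N(A) = 4/(pi*A)
print(w_osc, A_osc)                                   # A = 2K/(3*pi); the DF criterion predicts a stable limit cycle
```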
2. Check the controllability of the system below. If it is controllable, design a feedback controller of the form u(t) = −KX(t) to place the system eigenvalues at −1, −2.
$$\dot{X} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} X + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u$$
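For a cross-check of the hand design, the controllability test and the pole placement can be scripted, for example with scipy (a sketch, not required by the assignment):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])

ctrb = np.hstack([B, A @ B])               # controllability matrix [B  AB]
print(np.linalg.matrix_rank(ctrb))         # 2 -> the pair (A, B) is controllable

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(K)                                   # u = -K X assigns the eigenvalues
print(np.linalg.eigvals(A - B @ K))        # check: -1, -2
```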
3. Find a linear state transformation to put the following system in canonical form, and find the transformed system.
$$\dot{X} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} X + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 5 & 1 \end{bmatrix} X + 0.1\,u$$
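A similarity transformation to the controllable canonical form can be computed from the two controllability matrices; the sketch below assumes that form is the intended canonical form.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[5.0, 1.0]])
D = 0.1

a = np.poly(A)                             # characteristic polynomial s^2 - 3s + 1
Ac = np.array([[0.0, 1.0], [-a[2], -a[1]]])
Bc = np.array([[0.0], [1.0]])

P = np.hstack([B, A @ B]) @ np.linalg.inv(np.hstack([Bc, Ac @ Bc]))   # X = P Z
print(np.linalg.inv(P) @ A @ P)            # equals Ac
print(np.linalg.inv(P) @ B)                # equals Bc
print(C @ P, D)                            # transformed output equation y = (C P) Z + D u
```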
4. Simulate the nonlinear design in Example 2 in Lecture 14 for a spherical tank of radius 0.5 m and outlet opening radius 0.1 m. Let the initial height be 0.1 m and the desired final height be 0.6 m. The tank dynamics are
$$A(h)\,\dot{h} = u - a\sqrt{2gh},$$
where h is the water level height, A(h) is the cross-section area of the tank at height h, a is the cross-section area of the outlet pipe, g is the gravitational acceleration, and u is the control input,
$$u = a\sqrt{2gh} - A(h)\,\beta\,\tilde{h}, \qquad \tilde{h} = h - h_d,$$
where β is a positive constant.
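A simulation sketch of the closed loop is given below; the cross-section of a sphere of radius R at water level h is A(h) = π(2Rh − h²), and the value β = 0.2 is an assumed design constant (the problem leaves it free).

```python
import math
from scipy.integrate import solve_ivp

R, r_out, g = 0.5, 0.1, 9.81
a = math.pi * r_out**2                     # outlet cross-section area
h0, hd, beta = 0.1, 0.6, 0.2               # initial level, desired level, assumed design gain

def A_of_h(h):
    return math.pi * (2 * R * h - h * h)   # spherical-tank cross-section at height h (0 < h < 2R)

def tank(t, y):
    h = y[0]
    u = a * math.sqrt(2 * g * h) - A_of_h(h) * beta * (h - hd)      # the given nonlinear control law
    return [(u - a * math.sqrt(2 * g * h)) / A_of_h(h)]             # A(h) h' = u - a*sqrt(2 g h)

sol = solve_ivp(tank, [0.0, 40.0], [h0], max_step=0.05)
print(sol.y[0, -1])                        # approaches hd; the closed loop reduces to h' = -beta*(h - hd)
```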
5. (For EEC743 and ESC794 students only) Read and summarize pages 7-10 of P. Kokotović, "The Joy of Feedback: Nonlinear and Adaptive," Control Systems Magazine, vol. 12, no. 3, pp. 7-17, June 1992. What are the implications of peaking for practical nonlinear control designs?
1991 Bode Prize Lecture
The Joy of Feedback:
Nonlinear and Adaptive
Petar V. Kokotović
Feedback Everywhere
It is a joy to feel this feedback from so many of you here today.
You just heard from Alan Laub, our Society’s president, about my
quarter of a century in Urbana, Illinois, the birthplace of Hendrik
Bode. Indeed, much of what I know about systems, control and
feedback I learned from my colleagues and students at the
University of Illinois, the conferrer of one of Bode’s honorary
doctorates. Alan hinted that there may be a parallel between this
lecture and the two well-known "joy of" books. Well, yes, insofar
as they suggest that continuous experimentation with recipes and
styles leads to joys which grow with age. My own joy of feedback
has been growing for thirty plus years, ever since a Bode formula
led me to my first little discovery of sensitivity points. This joy is
continuing to grow and I believe that after age infinity there will
be even more joy of feedback.¹

¹ "What happens after age infinity," asked Len Shaw in a banquet anecdote at the 1989 CDC. His answer, much simpler than mine, was: "See you in eltwo."
Publicize or Perish
In this lecture I will try to tell you why I am so optimistic about
the future of feedback. However, before I do this, allow me to
echo Roger Brockett’s speech last night, about the need to
publicize our contributions. Our profession has contributed not
only to technology, but also to other scientific disciplines. We
don’t pay much attention to this fact, because we are all too busy
discovering new system properties and design techniques. We
rejoice when they lead to safer aircraft, more efficient cars, cheaper CD players, etc., but we seldom make them media successes. Other professions win front pages with press releases that sometimes are applications or modifications of our results. We don't even bother to claim credit for feedback and its many uses!

The author is with the Electrical and Computer Engineering Department, University of California, Santa Barbara, CA 93106. The preparation of this lecture and editing of its text was supported in part by the National Science Foundation under grant ECS-9196178, by the Air Force Office of Scientific Research under grant F49620-92-J-0004, and by the Ford Motor Company.
Feedback is one of the deepest and most inspiring concepts that
our profession has contributed to modern civilization. It has
permeated, at least in vague forms, many scientific disciplines.
There are psychologists who draw feedback diagrams for their
counseling sessions and publish a control theory journal. There are
“biofeedbacks” and similar phenomena in biological journals. In
the current issue of American Scientist, the body weight regulation
is described as a feedback system. Economists and marketing
experts employ feedback. According to a recent report in Mosaic,
published by NSF, the quantification of cloud feedback is keeping meteorologists from several countries in suspense. No less than
fourteen models of global warming are competing to reveal
whether the cloud feedback is positive or negative.
We cannot afford to approach such “naive” uses of feedback as
rigor-morticians, with prefabricated mathematical coffins. The
discovery of feedback has its many layers, from “naive” to
qualitative, then quantifiable, and finally, to rigorous. While it is
not a priori clear which of the layers is the richest in content, it
is certain that the qualitative style is more likely to cross
disciplinary borders. How many authors in our journals dare to be
qualitative, let alone “naive”?
Who Controls Chaos?
A good number of you here are trained as mathematicians and,
at least in this sense, we can say that feedback control has a strong
standing in applied mathematics. How about physics? Until
recently, physicists seemed uninterested in feedback. Now, during
the decade of chaos, the situation is changing. A year or so ago,
scientific and popular media informed us that “physicists can
control chaos”!
Have some control engineers in this audience understood
complex nonlinear dynamics, including the so-called “chaos,” to
the point of being able to control them? Of course! What they
have not done is to inform the media that we know both, how to
use chaos for feedback control, and how to use feedback control
to suppress chaos. In their 1989 Automatica paper, Mareels and
Bitmead placed an Australian gumleaf on a chaotic spot in
adaptive control. Their chaotic feedback is stabilizing, remains
bounded and has a deceptively simple form:
$$u_k = -\frac{1}{y_{k-1}} + \frac{1}{y_{k-2}}.$$
In a 1992 ACC paper, Abed, Wang and Lee show how to suppress the chaotic flow that occurred in an experiment reported by Singer, Wang and Bau in Phys. Rev. Letters, 1991. Their nonlinear model, exhibiting chaos for R = 19 and u = 0, is
$$\dot{x}_1 = -p x_1 + p x_2, \qquad \dot{x}_2 = -x_1 x_3 - x_2.$$
After a bifurcation analysis, Abed and coworkers reduced chaos to a stable limit cycle. They achieved this with a feedback controller consisting only of a linear washout filter and a cubic nonlinearity.
The simplicity of this “chaos extinguisher” and its systematic
design are fascinating, but I cannot tell you more about them,
because we must finally get to the main topics of this lecture.
Linear Versus Nonlinear
In the first Bode Lecture two years ago, Gunter Stein enriched
us with crisp insights into the linear feedback system properties.
It seems appropriate for the third Bode Lecture to make a similar
attempt with nonlinear feedback, so that, of the three Bode
Lectures so far, the two odd ones be about feedback.
Beyond the Worst Case
Can nonlinear feedback interest an audience conditioned to
expect that most control problems can be solved by neat linear
tools? A long time ago, Richard Bellman used to compare linear
designs of nonlinear systems with a man, who, having lost his
watch in a dark alley, is searching for it under a lamp post.
Today’s linear designs are more willing to confront nonlinearities.
They include nonlinearities as bounded-norm operators residing in
linear sectors. Effects of such nonlinearities are then reduced
either with high-gain or worst-case designs. Numerous papers at
this conference follow this path to achieve robust stability and
performance.
In many situations such a linear design leads to success, and
should be a cause for enthusiasm, but not for claims open to
misinterpretations. For example, one should qualify the claim that
“for unstructured bounded-norm disturbances, nonlinear controllers
don’t offer advantages over linear controllers.” The readers must
be warned that this claim is made for an undisclosed class of
nonlinear controllers and refers only to their worst-case
performance. The performance for less severe and more common
disturbances is usually not discussed. Most bounded-norm authors
agree that for highly structured or parametric uncertainties a
nonlinear controller outperforms the best linear controller. But
how many of them admit that this is also true for unstructured
bounded-norm uncertainties?
In his very important Systems & Control Letters paper of September 1989, Tamer Başar includes a nonlinear controller as a candidate
for an optimal design with an unstructured bounded-norm
disturbance. He then shows that the worst-case performance
attained by this nonlinear controller coincides with the
performance attained by the best linear design, but that in an open
neighborhood of the worst-case disturbance the nonlinear
controller does uniformly better than the linear controller.
An issue, more critical than the worst-case optimality, is that the
norm bounds on uncertainties depend on the system's operating points and/or initial conditions. George Zames, a pioneer of input-output designs, reiterated at the panel session yesterday that these
designs must be validated ex post facto by making sure that the
designed system never leaves the linear sectors to which it was
confined by the assumed norm bounds. This cautionary note is a
good starting point for the technical part of my lecture.
Lecture Outline
In this lecture I will undertake three tasks. First, I will argue
that, for a cautious design, a nonlinear analysis is needed to reveal
when and why our linear tools fail. Second, I will illustrate a few
emerging nonlinear tools with which we can overcome limitations
of linear designs. Third, I will try to show that some of these tools
can be made adaptive and applied to nonlinear systems with
unknown parameters. I will also point to the emergence of robust
designs for nonlinear “interval” plants.
Of course, you don’t expect me to tackle such ambitious tasks in
a systematic and rigorous way. The best I can do is to select a
particular nonlinear phenomenon, illustrate it by simple examples
and suggest, again through examples, some methods to deal with
the effects of the observed phenomenon. The phenomenon I have
chosen for this purpose is peaking. Among the tools developed to
counteract the effects of peaking is nonlinear damping. Recursive
application of such tools leads to backstepping procedures. I will
illustrate two new backstepping procedures, one for adaptive
nonlinear designs and the other for observer-based nonlinear
designs. I will comment on how these procedures are being
modified for robust nonadaptive designs. So, the four major
sections of the technical part of my lecture are:
Fear of peaking
Backstepping from passivity
Adaptive and robust backstepping
Observer-based backstepping
Although I will try to mention my sources whenever convenient,
my references will remain informal and incomplete. This lecture
is neither a survey nor a journal paper. It does not pretend to be
representative of all the major developments in nonlinear control,
but only of some of my recent joint work with:
Ioannis Kanellakopoulos,
Riccardo Marino,
Steve Morse, and
Hector Sussmann.
Most of the ideas and results are theirs, while all the misinterpretations and prejudices are mine. An incomplete list of other
colleagues who have contributed to this lecture includes Eyad
Abed, Tamer Başar, Bob Bitmead, Joe Chow, Randy Freeman, Jessy Grizzle, John Hauser, Petros Ioannou, Alberto Isidori, Hassan Khalil, Miroslav Krstić, Rick Middleton, Laurent Praly, Ali
Saberi, Shankar Sastry, Peter Sauer, Eduardo Sontag, Mark Spong,
Gang Tao, David Taylor and Andy Teel. A stimulating amount of
“real life feedback” from Jim Winkelman, Doug Rhode, Davor
Hrovat, Bill Powers, and other colleagues at Ford, has also
influenced certain attitudes expressed in this lecture.
Fear of Peaking
With all its benefits, feedback is not free of risks and dangers.
Some of them, such as the possibility of destabilizing neglected high-frequency modes, are common in linear systems, while others are
specific to unmodeled nonlinearities. We will examine only one of
the dangerous nonlinear phenomena, which, although easy to
understand, is not well known.
The BB-Syndrome
To help me introduce the peaking phenomenon, please perform
with me a series of imaginary experiments on a ball and beam
(BB) system in one of your undergraduate laboratories.² In the notation of Fig. 1, a reasonable model of this system is
$$\text{ball:}\quad \ddot{r} = r\dot{\theta}^2 - g\sin\theta, \qquad \text{beam:}\quad \ddot{\theta} = \frac{\tau - mgr\cos\theta - 2mr\dot{r}\dot{\theta}}{mr^2 + J}.$$
This model disregards a “jumping ball” so, if you prefer, think
of BB as a bead sliding on a bar. Assuming the knowledge of J,
m and g, and the exact measurements of r, ṙ, θ and θ̇, we will let the control u be the beam angular acceleration θ̈, rather than the motor torque τ. With convenient numerical values the state equations of the BB-system are
$$\text{ball:}\quad \dot{x}_1 = x_2, \quad \dot{x}_2 = -x_3 + (x_3 - \sin x_3) + x_1 x_4^2, \qquad \text{beam:}\quad \dot{x}_3 = x_4, \quad \dot{x}_4 = u.$$
² When John Hauser, supersonic jet pilot, Shankar Sastry, and I selected the BB-system for our 1989 CDC paper, John told us that he can feel nonlinear aircraft dynamics on this toy system. Perhaps even some of those we saw in Keith Glover's video?
Fig. 1. The ball and beam system.
You can view this system as a chain of four integrators "perturbed" by two nonlinear terms. The perturbation term x₃ − sin x₃ is confined to a linear sector and may seem tractable by bounded-norm designs. However, this certainly is not the case with the centrifugal force x₁x₄², which makes it impossible to describe the BB-system as perturbed by a bounded-norm operator. This term grows with the square of the beam angular velocity x₄ = θ̇ and is the cause of what I call the BB-syndrome. You will see this syndrome if you notice that the ball can be stabilized only through sin x₃. However, for |x₁x₄²| > 1, our "control" sin x₃ is weaker than the centrifugal force x₁x₄². Worse yet: the term x₁x₄² represents a strong positive feedback which, combined with the peaking of x₄, will lead to instability and make the ball fly off the beam.
To see the peaking of x₄, suppose that the BB-system is approximated by the chain of four integrators and that a linear state feedback control is used to place the eigenvalues to the left of −a < 0. What is the effect of this linear control on the system nonlinearities? Will the dangerous term x₁x₄² be negligible? To answer questions of this type, Hector Sussmann and I presented a peaking analysis in a 1991 AC-Transactions paper which, applied to the four-integrator plant with Re λ < −a, proves that, for some initial conditions on the unit sphere, the state x₄ necessarily reaches peak values of order a³. So, if Re λ < −10, then the peak of x₄ is 1000. I will leave it up to you to imagine the effect of this peaking on the positive feedback term x₁x₄².
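The growth of the peak with the eigenvalue magnitude is easy to reproduce numerically. The sketch below (not from the lecture) places the eigenvalues of the four-integrator chain near −a for a few values of a and records the peak of x₄ from a unit initial condition; the peak grows roughly like a³.

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

A = np.diag([1.0, 1.0, 1.0], k=1)                  # chain of four integrators
B = np.array([[0.0], [0.0], [0.0], [1.0]])

for a in (2.0, 5.0, 10.0):
    poles = [-a, -1.01 * a, -1.02 * a, -1.03 * a]  # slightly separated (place_poles needs distinct poles)
    K = place_poles(A, B, poles).gain_matrix
    Acl = A - B @ K
    sol = solve_ivp(lambda t, x: Acl @ x, [0.0, 10.0 / a], [1.0, 0.0, 0.0, 0.0], max_step=0.001 / a)
    print(a, np.abs(sol.y[3]).max())               # peak of x4 scales roughly as a**3
```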
High-Gain Mirage
The phenomena in the BB-system, although easy to visualize, are
hard to compute. For a simpler example let me go back to a 1986 AC-Transactions note in which Riccardo Marino and I analyze the peaking caused by the linear feedback control u = −k²x₁ − kx₂ in the second-order system
$$\dot{x}_1 = x_2 + w, \qquad \dot{x}_2 = u + \tfrac{1}{3}x_2^3 = -k^2 x_1 - k x_2 + \tfrac{1}{3}x_2^3.$$
In an attempt to reduce the effect of a bounded disturbance w on the output y = x₁, we increased the gain k. We expected that this increase would also reduce the effect of the nonlinearity x₂³/3. Our hope was that for larger k the linear term kx₂ is more likely to dominate the nonlinear term x₂³/3. But our hope turned out to be a high-gain mirage!
In reality, the increase of k led to a decrease of the stability region because of the peaking in x₂. Exact calculations showed that all the solutions with initial conditions such that
$$k\,x_1^2(0) + \frac{1}{k}\,x_2^2(0) > 3$$
escape to infinity. You can easily sketch this "escape set" and see that its boundary along the x₁-axis is ±√(3/k). This will tell you that the region of local asymptotic stability vanishes as the feedback gain k increases!
Caveat
My description of the BB-syndrome and the high-gain mirage
ends with a message:
Achieving local stability
without a safeguard against
peaking is dangerous.
Theoretically, such a danger does not exist in the case of global
stability. Since the stability properties of linear systems are always
global, most linear designs ignore the danger of peaking.
Every sensible feedback design must guarantee a stability region Ω. For this purpose we sometimes use the concept of semiglobal stabilizability. We call a system semiglobally stabilizable to an equilibrium x_e by means of a class F of feedback controls if, for every bounded set Ω of the state space, there exists a control in F that makes x_e asymptotically stable, with Ω belonging to its stability region. As my simple examples show, growth rates of
nonlinear terms and linear peaking phenomena are among the key
factors in determining whether a nonlinear system is semiglobally
stabilizable or not.
Peaking in Cascades
Recent developments of the geometric theory of nonlinear
control, summarized in Alberto Isidori's superb 1989 book Nonlinear Control Systems, allow us to present nonlinear systems in cascade forms like
$$\dot{x} = f_0(x) + f_1(x,\xi)\,\xi, \qquad \dot{\xi} = A\xi + Bu, \qquad y = C\xi. \qquad \text{(CF)}$$
The nonlinear part of this cascade is unobservable from ξ, so that ẋ = f₀(x) describes the zero dynamics of (CF). When x = 0 is a globally asymptotically stable equilibrium of ẋ = f₀(x), it would seem that the whole cascade can be globally stabilized with ξ-feedback only, that is with u = Kξ. This expectation is based on the fact that the exponential decay ‖ξ(t)‖ ≤ ce^{-at} can be made as fast as desired by the choice of the feedback gain K. It would seem
preserved. To see that, in general, this idea is false, let’s examine
the system
where, like in the BB-syndrome, the term may introduce
positive feedback. With the 6-feedback alone, say U = -6, we
have 6(t) = he-‘ and the x-subsystem is
x = -x + 2 Le-‘.
With x(0) = x, the explicit solution is
x(t)= LO
(2 -xOko)e ‘ + xo6,e -‘
Now you can see that for xoL > 2 the state x(t) tends to infinity
in finite time! So, the nonlinearity 2 is dangerous even when
multiplied by Le-‘. We can reduce this danger using U = -at
instead of U = -5. Then the stability region is xo& < a + 1. It is
semiglobal, because we can make it as large as desired by
increasing the gain a, without any peaking in 5. However, this
high-gain idea fails in the following example:
$$\dot{x} = -x + \xi_2 x^2, \qquad \dot{\xi}_1 = \xi_2, \quad \dot{\xi}_2 = u.$$
If in this system you use the ξ-feedback u = −a²ξ₁ − 2aξ₂, so that λ₁ = λ₂ = −a, you should expect that the danger of x² increases as the gain a increases, because ξ₂(t) is peaking with a. In fact, for x(0) = x₀, ξ₁(0) = ξ₀, ξ₂(0) = 0, it is easy to calculate that x(t) escapes whenever x₀ξ₀ > 2/a. So, by increasing a to speed up the decay of ξ(t), we reduce the stability region of x. As in the high-gain mirage, the stability region vanishes as a → ∞.
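The finite escape time in the second-order cascade above can be verified directly against the explicit solution. The sketch below simulates ẋ = −x + ξx², ξ̇ = −ξ and compares the observed escape with the predicted threshold x₀ξ₀ > 2 (an illustration, with arbitrarily chosen initial conditions).

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, s):
    x, xi = s
    return [-x + xi * x * x, -xi]              # x' = -x + xi*x^2, xi' = u = -xi

def blowup(t, s):                              # stop once x(t) is clearly escaping
    return abs(s[0]) - 1e6
blowup.terminal = True

for x0, xi0 in [(1.0, 1.5), (1.0, 2.5)]:       # x0*xi0 = 1.5 (bounded) and 2.5 (escape)
    sol = solve_ivp(f, [0.0, 6.0], [x0, xi0], events=blowup, max_step=1e-3)
    if x0 * xi0 > 2:
        t_esc = 0.5 * np.log(x0 * xi0 / (x0 * xi0 - 2.0))   # escape time from the explicit solution
        print("escape near t =", sol.t[-1], "predicted", t_esc)
    else:
        print("bounded, max |x| =", np.abs(sol.y[0]).max())
```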
Backstepping from Passivity
After so many examples of dramatic instabilities, you may
wonder what happened to our joy of feedback? It will grow as we
learn more about nonlinear designs which prevent disasters caused
by peaking and achieve global or semiglobal stabilization. Let me
start with full state feedback designs.
Passive Designs
To prevent the instabilities in a cascade (CF), we need to
investigate which linear-nonlinear connecting terms propagate the
effects of the peaking phenomena and how to counteract them.
Sussmann and I have initiated such investigations in a 1989
Systems & Control Letters paper and continued them with Saberi in a 1990 SIAM Journal of Control paper. Our result is that for cascade
forms (CF) global stabilization with full state feedback is possible
if the linear part of the cascade is weakly minimum phase, with
arbitrary relative degree, and if a connection restriction is satisfied.
Although only sufficient, these conditions are in a particular sense
close to being necessary.
When the linear part of the cascade (CF) is of relative degree one, our design starts by finding a K to satisfy the well-known positive real condition
$$(A + BK)^{T}P + P(A + BK) = -Q, \qquad PB = C^{T},$$
for some P > 0 and Q ≥ 0. Then, assuming that V(x) is a Lyapunov function for ẋ = f₀(x), a feedback control for (CF) is designed, and the global asymptotic stability property of the resulting feedback system is established using the Lyapunov function
$$W(x, \xi) = V(x) + \xi^{T}P\xi.$$
Let us illustrate this design on the above third-order system in which the escape of x was caused by the peaking in ξ₂. Now a global stabilization of this system is easy. For y = ξ₂ we have C = [0, 1] and the condition PB = Cᵀ is satisfied with P = I. Then K = [−1, −1] yields Q ≥ 0 and, using V(x) = x², our globally stabilizing control is
$$u = -\xi_1 - \xi_2 - x^3.$$
The resulting feedback system is
$$\dot{x} = -x + \xi_2 x^2, \qquad \dot{\xi}_1 = \xi_2, \qquad \dot{\xi}_2 = -\xi_1 - \xi_2 - x^3.$$
We say that the nonlinear term −x³ provides nonlinear damping, which counteracts the peaking and prevents the escape of x.
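A simulation sketch of the reconstructed third-order example makes the comparison concrete: a fast ξ-feedback alone lets x escape, while the passivity-based control with the −x³ damping term keeps it bounded. The gain a = 10 and the initial condition are chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, s, damped, a=10.0):
    x, xi1, xi2 = s
    u = (-xi1 - xi2 - x**3) if damped else (-a * a * xi1 - 2 * a * xi2)
    return [-x + xi2 * x * x, xi2, u]

def blowup(t, s):
    return abs(s[0]) - 1e6
blowup.terminal = True

s0 = [2.0, -2.0, 0.0]                          # an initial condition that defeats the high-gain xi-feedback
for damped in (False, True):
    sol = solve_ivp(cascade, [0.0, 20.0], s0, args=(damped,), events=blowup, max_step=1e-3)
    print("damped" if damped else "high-gain", "t_end =", sol.t[-1], "max|x| =", np.abs(sol.y[0]).max())
```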
For connoisseurs of passive charms, Parks, Landau, Anderson,
Narendra and others, the idea of our design is déjà vu. In a 1990
Automatica paper, Ortega extended it to cascades in which the first
subsystem is also nonlinear and can be made passive by feedback.
A geometric characterization of systems that can be made passive
by feedback was given in a 1991 IEEE Transactions on Automatic
Control paper by Byrnes, Isidori and Willems.
Backstepping
As much as we enjoy the simplicity of passive designs, we must
not forget that passivity restricts the system’s relative degree not
to be higher than one. Fortunately, several versions of a recursive
procedure, called backstepping, are being developed to remove this
relative-degree restriction.
The key idea of backstepping is to start with a system which is
stabilizable with a known feedback law for a known Lyapunov
function, and then to add to its input an integrator. For the
augmented system a new stabilizing feedback law is explicitly
designed and shown to be stabilizing for a new Lyapunov
function, and so on …
This idea is so simple that most of you have probably used it
without paying much attention to it. It is, therefore, surprising that
this idea has become an explicit tool for systematic nonlinear
design only very recently. At the risk of being unfair to many
other authors, let me mention the 1988-1991 works of Tsinias,
Sontag, Byrnes and Isidori, and my already quoted papers with
Sussmann and Saberi which contain many other references.
The basic form of the backstepping procedure is best explained on an example of a system in "strict feedback form."

Step 1. Imagine that we can use x₂ to stabilize at 0 the first equation with a feedback law α₁(x₁), so that (∂V₁/∂x₁) f₁(x₁, α₁(x₁)) < 0 for all x₁ ≠ 0, where V₁(x₁) is a known Lyapunov function. Note, however, that we can achieve x₂ = α₁(x₁) only with an error z₂ = x₂ − α₁(x₁). Let us also denote z₁ = x₁, so that x₁ and x₂ are known explicit functions of z₁ and z₂ and vice versa. We now rewrite the first two system equations in the (z₁, z₂) coordinates, where φ₁(z₁, z₂) is known because f₁ is assumed to be differentiable. Another key observation is that α̇₁ is also known explicitly:
$$\dot{\alpha}_1 = \frac{\partial \alpha_1}{\partial x_1}\,\dot{x}_1 = \frac{\partial \alpha_1}{\partial x_1}\, f_1(x_1, x_2).$$
Step 2. Imagine now that we can use x₃ to stabilize at 0 the above (x₁, x₂)-system with a feedback law α₂(z₁, z₂). To design α₂, we first construct the Lyapunov function
$$V_2 = V_1 + \tfrac{1}{2}z_2^2.$$
With x₃ = α₂(z₁, z₂) we want to make V̇₂ negative. Recall that the first term was made negative in Step 1, so we choose α₂ to make the expression multiplying z₂ equal to −z₂. However, since we cannot achieve x₃ = α₂(z₁, z₂) exactly, there is an error z₃ = x₃ − α₂(z₁, z₂), and the actual V̇₂ contains the cross term z₂z₃, which we will take care of in Step 3. Since we know z₁, z₂ and z₃ as functions of x₁, x₂ and x₃ and vice versa, our system can be written in the z-coordinates, where φ₂(z₁, z₂, z₃) is the known expression for ż₂.
Step 3. At this final step there is no need to imagine a fictitious control, because the actual control u is at our disposal. A feedback law for u is now chosen to make the derivative of V₃ = V₂ + ½z₃² negative. The main achievement of our efforts is not only that the first two terms are negative, but also that the remaining terms have z₃ as a common factor. This is crucial, because now the choice of feedback makes the last two terms in V̇₃ equal to −z₃² and guarantees that V̇₃ < 0 for all nonzero z₁, z₂, z₃. So, we have achieved the desired global asymptotic stability property of the equilibrium at 0. The designed feedback law can now be expressed as a function of x₁, x₂, x₃ and is ready for implementation.
This procedure is explicit and its results are global when the system is in the special "strict feedback" form. In the case of a more general "pure feedback" form, the results may not be explicit or global, but a nonvanishing region of stability is still guaranteed.
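Since the strict-feedback example itself is not reproduced here, the following sketch applies the two-step recipe to a hypothetical system ẋ₁ = x₁² + x₂, ẋ₂ = u: Step 1 uses α₁(x₁) = −x₁² − x₁ with V₁ = ½x₁², and Step 2 yields a control that makes V̇₂ = −x₁² − z₂² for V₂ = V₁ + ½z₂².

```python
import numpy as np
from scipy.integrate import solve_ivp

def u(x1, x2):
    alpha1 = -x1**2 - x1                       # Step 1: virtual control for x1' = x1**2 + x2
    z2 = x2 - alpha1                           # backstepping error
    alpha1_dot = -(2 * x1 + 1) * (x1**2 + x2)  # d(alpha1)/dt along the trajectory
    return alpha1_dot - x1 - z2                # Step 2: gives V2' = -x1**2 - z2**2

def f(t, s):
    x1, x2 = s
    return [x1**2 + x2, u(x1, x2)]

sol = solve_ivp(f, [0.0, 10.0], [2.0, -1.0], max_step=1e-3)
print(sol.y[:, -1])                            # both states converge to the origin
```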
In the above simple example, Step 1 of backstepping was used
to stabilize a scalar equation. In more interesting applications, the
backstepping procedure may start with a higher order subsystem
for which one of the external state variables can be used as a
fictitious stabilizing control. In particular, Step 1 may consist of
a passive design, as discussed above. In a 1992 Transactions on
Automatic Control note, Lozano, Brogliato and Landau give a
passivity interpretation of each step of our procedure.
Someone just asked if backstepping is applicable to systems
which are not feedback linearizable. Yes! For example, you can
use it to globally stabilize the system
$$\dot{x}_1 = x_1 x_2, \qquad \dot{x}_2 = u,$$
which is not controllable at zero. Start with α₁ = −x₁^k, where k > 1, say k = 4/3 or k = 2, so that α₁ is differentiable. Your design will be in two steps. If you use k = 4/3, you can compare your solution with the one on page 319 of the 1986 Academic Press book on Singular Perturbations, by myself, Khalil and O'Reilly. This will show you that the colorful pedigree of backstepping includes singular perturbations.
Saturating Feedback
Let us backstep to the BB-system and examine if with
backstepping we can achieve its semiglobal stabilization.
Unfortunately, the critical term x₁x₄² appears in the ẋ₂-equation, and the BB-system is not in the form to which one of the existing backstepping procedures can be applied.

The BB-syndrome is one of several benchmark examples, challenges for new nonlinear and adaptive designs. At the Nonlinear Workshop last October in Santa Barbara, Andy Teel
responded to the challenge with a control saturation design that
keeps the dangerous peaking terms within prescribed bounds and
thus achieves semiglobal stabilization. In a 1989 ACC paper,
Esfandiari and Khalil employed a similar saturation idea to
counteract the effects of peaking in high-gain observers.
A benchmark system, to which a backstepping design does not apply, is
$$\dot{x}_1 = x_2 + x_3^2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = u.$$
The main difficulty is the presence of x₃² in the first equation, which is not in a "pure feedback form." With z₁ = x₁ + x₂ + x₃, Teel brings u into the first equation,
$$\dot{z}_1 = x_2 + x_3 + x_3^2 + u,$$
and then designs the feedback
$$u = -x_2 - x_3 - \mathrm{SAT}(z_1),$$
where SAT is the usual saturation characteristic, linear in an interval centered at 0 and constant outside this interval. The resulting feedback system is
$$\dot{z}_1 = -\mathrm{SAT}(z_1) + x_3^2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = -x_2 - x_3 - \mathrm{SAT}(z_1).$$
It is clear that the potentially dangerous term x₃² is now bounded, because the linear subsystem is asymptotically stable and its input SAT(z₁) is bounded. A further analysis proves global asymptotic
stability. Using a similar approach, Teel shows that the saturating feedback
$$u = -4x_3 - 4x_4 + \mathrm{SAT}(-4x_1 - 12x_2 + 9x_3 + 2x_4)$$
achieves semiglobal stabilization of the BB-system. It is amazing that the stability region includes initial conditions with the beam in a vertical position!
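A simulation sketch of the third-order benchmark under the saturating feedback is given below; the unit saturation level is an assumed design choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

sat = lambda z: np.clip(z, -1.0, 1.0)          # assumed unit saturation characteristic

def f(t, s):
    x1, x2, x3 = s
    u = -x2 - x3 - sat(x1 + x2 + x3)           # saturating feedback with z1 = x1 + x2 + x3
    return [x2 + x3**2, x3, u]

sol = solve_ivp(f, [0.0, 60.0], [5.0, -3.0, 2.0], max_step=1e-2)
print(sol.y[:, -1])                            # trajectories settle toward the origin even from large initial conditions
```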
Adaptive and Robust Backstepping
There are many systems with nonlinearities known from physical
laws, such as kinematic nonlinearities, or energy, flow and mass
balance nonlinearities. Some of these nonlinearities may appear
multiplied with unknown parameters and give rise to the problem
of controlling nonlinear systems with parametric uncertainty. For
a broader class of systems, the nonlinearities themselves may be
unknown. Such difficult problems may still be tractable if the
uncertainties are within some known nonlinear bounds, the so-called nonlinear interval uncertainties.
Many exciting results have been obtained in this area in the last
three years. Although output feedback results are beginning to
appear, I will discuss only a couple of state feedback designs: an
adaptive design for parametric uncertainty and a robust design for
interval uncertainty.
Adaptive Backstepping
Adaptive state-feedback control of nonlinear plants has a short
but eventful history which involves the names of Taylor, Marino,
Kanellakopoulos, Sastry, Isidori, Arapostathis, Nam, Praly, Pomet,
Campion, Bastin, Morse, and many others. Several breakthroughs
by some of these authors were presented during the 1990 Grainger
Lectures, now available as Part Two (more than 200 pages) of
Foundations of Adaptive Control, published by Springer. One of
these breakthroughs was the adaptive backstepping procedure,
developed by Ioannis Kanellakopoulos. At different stages of this
development his coauthors were Marino, Morse and myself. The
procedure was simplified by Jiang and Praly, and was brought to
its present tuning function form by Krstić and Kanellakopoulos.
I will explain adaptive backstepping on a benchmark nonlinear system:
$$\dot{x}_1 = x_2 + \theta x_1^2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = u.$$
Note that the destabilizing term θx₁² is now more dangerous than before, because the parameter θ is unknown. We assume, however, that θ is constant. For this system the adaptive controller design is in three steps.
Step 1. We introduce z₁ = x₁ and z₂ = x₂ − α₁ and consider α₁ as a control to be used to stabilize the z₁-system with respect to the Lyapunov function V₁ = ½z₁² + ½(θ̂ − θ)². The z₁-system is
$$\dot{z}_1 = z_2 + \alpha_1 + \hat{\theta}\varphi(z_1) - (\hat{\theta} - \theta)\varphi(z_1), \qquad \varphi(z_1) = z_1^2.$$
The tuning function τ₁ = z₁φ(z₁) would eliminate θ̂ − θ from V̇₁ via the update law $\dot{\hat{\theta}} = \tau_1$. Then, if z₂ = 0, we would achieve V̇₁ = −z₁² with α₁ = −z₁ − θ̂φ(z₁).

Instead of using $\dot{\hat{\theta}} = \tau_1$ as an update law, we just substitute τ₁(z₁) and α₁(z₁, θ̂) into V̇₁ and obtain
$$\dot{V}_1 = -z_1^2 + z_1 z_2 + (\hat{\theta} - \theta)\big(\dot{\hat{\theta}} - \tau_1\big).$$
Only the term −z₁² is negative as desired. In the subsequent steps we must take care of the other two terms.
Step 2. Introducing z₃ = x₃ − α₂, we consider α₂ as a control to be used to stabilize the (z₁, z₂)-system. For the augmented Lyapunov function V₂ = V₁ + ½z₂², let us examine $\dot{V}_2 = \dot{V}_1 + z_2\dot{z}_2$ term by term. We could eliminate θ̂ − θ from V̇₂ using an update law $\dot{\hat{\theta}} = \tau_2$. Then, to make V̇₂ = −z₁² − z₂² when z₃ = 0, we would design α₂ such that the bracketed term multiplying z₂ equals −z₂, with τ₂ replacing $\dot{\hat{\theta}}$. However, we do not use $\dot{\hat{\theta}} = \tau_2$ as an update law, but retain τ₂(z₁, z₂, θ̂) as our second tuning function. Substituting α₂ into the expression for V̇₂, we find that, while −z₁² − z₂² is negative as desired, in Step 3 we must take care of the remaining terms.
Step 3. With z₁ = x₁, z₂ = x₂ − α₁, z₃ = x₃ − α₂, the original system has been transformed into the z-coordinates. We now design an update law $\dot{\hat{\theta}} = \tau_3$ and a feedback control u to globally stabilize this system with respect to V₃ = V₂ + ½z₃². To this end, we examine $\dot{V}_3 = \dot{V}_2 + z_3\dot{z}_3$ term by term. To eliminate θ̂ − θ from V̇₃, we choose the update law
$$\dot{\hat{\theta}} = \tau_3(z_1, z_2, z_3, \hat{\theta}) = \tau_2(z_1, z_2, \hat{\theta}) - z_3\,\omega(z_1, z_2, \hat{\theta}).$$
Substituting $\dot{\hat{\theta}} - \tau_2 = -z_3\omega(z_1, z_2, \hat{\theta})$ into V̇₃, we finally choose the control u such that the bracketed term multiplying z₃ becomes −z₃. We have thus reached our goal of global stabilization, because
$$\dot{V}_3 = -z_1^2 - z_2^2 - z_3^2,$$
which means that the equilibrium x₁ = 0, x₂ = 0, x₃ = 0, θ̂ = θ of the original system with the update law for θ̂ is globally stable. It is easy to see that we have also achieved the regulation of x, namely x₁(t) → 0, x₂(t) → 0, x₃(t) → 0 as t → ∞.
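The certainty-equivalence-plus-update-law idea behind these steps can be seen on a much simpler first-order plant (this is only an illustration, not the lecture's third-order tuning-function design): for ẋ = θx² + u, the Lyapunov function V = ½x² + ½(θ̂ − θ)² suggests u = −x − θ̂x² and the update law θ̂' = x³, which give V̇ = −x².

```python
from scipy.integrate import solve_ivp

theta = 2.0                                    # true parameter, unknown to the controller

def f(t, s):
    x, theta_hat = s
    u = -x - theta_hat * x**2                  # certainty-equivalence control
    return [theta * x**2 + u,                  # plant x' = theta*x^2 + u
            x**3]                              # update law theta_hat' = x^3 (makes V' = -x^2)

sol = solve_ivp(f, [0.0, 20.0], [1.0, 0.0], max_step=1e-3)
print(sol.y[0, -1], sol.y[1, -1])              # x -> 0; theta_hat stays bounded but need not converge to theta
```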
Robust Backstepping Designs
By now you have seen the backstepping idea applied to two
different state feedback designs. In both cases we were able to
enlarge our design model step-by-step and recursively calculate
stabilizing feedback controls. For nonlinear plants with interval
uncertainties several new designs are being developed by Marino
and Tomei, Praly and Jiang, Spong, Freeman and
Kanellakopoulos, among others. For a glimpse into this new
research area, I will use an example from a recent paper of my
student Randy Freeman. One aspect of this example illustrates
how a backstepping procedure can remove restrictive matching
conditions made in the early results of Leitmann, Corless, Barmish
and others. In the second-order plant
$$\dot{x}_1 = x_2 + \theta x_1^2, \qquad \dot{x}_2 = u,$$
we now assume that the unknown parameter θ belongs to a known interval, say |θ| ≤ θ̄.
Our goal is to design several static feedback alternatives to adaptive controllers, which always include parameter update dynamics. To apply backstepping, we first design a v-controller for
$$\dot{x}_1 = \theta x_1^2 + v$$
with V₁ = ½x₁². This first-order system satisfies the matching condition and several different designs are possible. Among them are the following two:
$$\mathrm{C}: \quad v = \alpha_C(x_1) = -c_1 x_1 - \Big(a + \frac{\bar{\theta}^2}{2a}x_1^2\Big)x_1,$$
$$\mathrm{S}: \quad v = \alpha_S(x_1) = -c_1 x_1 - \bar{\theta}\, x_1^2\, s_1(x_1),$$
where c₁ and a are positive design parameters and s₁(x₁) is a continuous approximation of a switching function. It is easy to verify that V̇₁ ≤ −c₁x₁² with the first v-controller. For the second v-controller you can make your own choice of s₁(x₁) to achieve V̇₁ ≤ −½c₁x₁². Thus, α_C(x₁) and α_S(x₁) are both smooth globally stabilizing control laws for the first-order subsystem. We now employ
backstepping to design u-controllers for the second-order system with respect to V₂ = V₁ + ½(x₂ − v)². Note that we again have several possibilities, including C and S, to make V̇₂ negative. The corresponding controllers CC, CS, SC and SS will all be globally stabilizing. For example, both CS and SS will be of the form
$$u = -x_1 - c_2(x_2 - v) + \frac{\partial v}{\partial x_1}\big(x_2 + \bar{\theta}\, s_2(x_1, x_2)\, x_1^2\big),$$
and they will differ only in their expressions for v. As before, s₂(x₁, x₂) is a smooth approximation of a switching function. If at either of the two steps we instead employ an adaptive update law, denoted by A, then the set of possible controllers expands to include SA, AS, CA, etc. These controllers may exhibit different transients and possess different robustness properties.
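The first-order step of the C design is easy to check numerically. The sketch below uses a nonlinear-damping controller of the form discussed above (with illustrative values c₁ = a = θ̄ = 1); Young's inequality θx₁³ ≤ ax₁² + θ̄²x₁⁴/(4a) then gives V̇₁ ≤ −c₁x₁² for every |θ| ≤ θ̄.

```python
from scipy.integrate import solve_ivp

theta_bar, c1, a = 1.0, 1.0, 1.0               # interval bound and design parameters (illustrative)

def v(x1):
    # nonlinear damping dominates the uncertain term theta*x1^2 for all |theta| <= theta_bar
    return -c1 * x1 - (a + theta_bar**2 * x1**2 / (2.0 * a)) * x1

for theta in (-1.0, 0.0, 1.0):
    sol = solve_ivp(lambda t, s: [theta * s[0]**2 + v(s[0])], [0.0, 10.0], [3.0], max_step=1e-3)
    print(theta, sol.y[0, -1])                 # x1 -> 0 for every admissible theta
```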
The message from this simple example is clear: we should not
and will not be dogmatic about a single approach to nonlinear
robust control. On the contrary, the backstepping methodology has
created the possibility of many competitive classes of nonlinear
controllers. A fascinating research topic is not only to design such
controllers, but also to find performance and robustness criteria to
select the winners in their competition. This task is difficult.
Fortunately, the difficulty is caused by abundance, rather than
scarcity, of ideas for controller designs.
Observer-Based Backstepping
I can see some skeptical faces here in the front rows. Surely you
don’t doubt the state feedback results I just presented. They are
simple enough and I hope that their correctness has been
established even through my informal presentation. What you are
skeptical about is the assumption that the full state is available for
measurement. It hardly ever is. So, if we want some applicable
results, we must address the output feedback problem, when only
some of the states, constituting our output, are being measured.
For nonlinear systems, the output feedback problem has been a
much bigger challenge than for the linear systems. For the purpose
of this lecture I will divide this challenge in two parts. The state
estimation part of the challenge, even in the noise-free setting, is
that it is hard to design nonlinear observers with guaranteed
convergence properties. The second part of the challenge is that
there are fundamental difficulties even when a convergent
nonlinear observer is available. The remainder of my lecture will
be devoted not to the progress in the design of nonlinear
observers, but to the advances made in observer feedback design
when an observer is available.
Observer Induced Peaking
Suppose that you know a full state feedback u(x) that would
stabilize your plant. Suppose, moreover, that you have an observer which can give you exponentially convergent estimates x̂ of the states x. Let x = 0 be the globally asymptotically stable equilibrium of your plant with the full state feedback u(x). Can anything go wrong if, instead of u(x), you use the implementable "certainty equivalence" control u(x̂)? The same plant controlled by u(x̂) can be written in a perturbed form. The equilibrium is still x = 0, but what about its stability properties? In contrast to what we know about linear systems, the
global stability property of x = 0 in the above nonlinear system
may be destroyed by the exponentially decaying estimation error term u(x̂) − u(x). This should come as no surprise to those of you who followed my peaking discussion earlier in the lecture.
We are facing the same peaking phenomenon, except that now it is due to the peaking in some of the components of the estimation error ε = x − x̂. Let me illustrate it on an example which is by now familiar to all of you:
$$\dot{x}_1 = -x_1 + x_2 x_1^2 + u, \qquad \dot{x}_2 = -x_2 + x_1^2, \qquad y = x_1.$$
Here I assume that only y = x₁ is available for feedback. If both x₁ and x₂ were available, then u = −x₂x₁² would make the equilibrium x₁ = x₂ = 0 globally asymptotically stable. Let's investigate what happens when in the same control law we replace x₂ with its estimate x̂₂ obtained from the exponentially convergent "observer"
$$\dot{\hat{x}}_2 = -\hat{x}_2 + x_1^2.$$
With u = −x̂₂x₁² the first equation of the plant becomes
$$\dot{x}_1 = -x_1 + \varepsilon\, x_1^2,$$
where ε(t) = x₂(t) − x̂₂(t) = ε(0)e^{-t}. You recognize in it the same equation for which we have established that x₁(t) escapes to infinity in finite time whenever ε(0)x₁(0) > 2!
From what you saw about the peaking phenomena before, you
would be able to imagine higher order examples in which some
state estimates peak with the observer gain and also multiply some
dangerous nonlinearities. You would then see that, if you increase
the observer gains for faster convergence, the stability region will
shrink, rather than increase. In other words, there are situations in
which an exponentially convergent observer causes the loss of not
only global, but also semiglobal, stability of the equilibrium x =
0.
Nonlinear Damping
In my earlier examples, the effects of peaking were counteracted
by specially designed nonlinear damping terms. Let's try the same idea again by designing an extra term v added to the "certainty equivalence" control:
$$u = -\hat{x}_2 x_1^2 + v.$$
With this control and the same "observer" as above, the relevant equation of the plant and that of the estimation error are
$$\dot{x}_1 = -x_1 + \varepsilon x_1^2 + v, \qquad \dot{\varepsilon} = -\varepsilon.$$
We now want to make the derivative of V = ½x₁² + ½ε² negative:
$$\dot{V} = -x_1^2 - \varepsilon^2 + \varepsilon x_1^3 + x_1 v.$$
To achieve V̇ < 0, we can let v be a function of x₁, but not of ε, because ε = x₂ − x̂₂ is not available for feedback. So, to enhance −x₁² we let v = −x₁ω and rewrite V̇ as a quadratic form in (x₁, ε). Now, we simply make the 2 × 2 matrix positive definite by the choice ω = x₁⁴ and, hence, our nonlinear damping term is v = −x₁⁵.
The resulting system with the observer plus nonlinear damping feedback is
$$\dot{x}_1 = -x_1 + x_2 x_1^2 - \hat{x}_2 x_1^2 - x_1^5, \qquad \dot{x}_2 = -x_2 + x_1^2, \qquad \dot{\hat{x}}_2 = -\hat{x}_2 + x_1^2.$$
The equilibrium x₁ = x₂ = x̂₂ = 0 of this system is again globally asymptotically stable, as with the full state feedback.
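A simulation sketch of this example shows both effects: with the plain certainty-equivalence control the state escapes when ε(0)x₁(0) > 2, while the added damping term −x₁⁵ keeps the closed loop bounded (initial conditions chosen for illustration).

```python
import numpy as np
from scipy.integrate import solve_ivp

def sys(t, s, damped):
    x1, x2, x2h = s
    u = -x2h * x1**2 - (x1**5 if damped else 0.0)      # certainty-equivalence control (+ optional damping)
    return [-x1 + x2 * x1**2 + u,                      # plant
            -x2 + x1**2,
            -x2h + x1**2]                              # observer for x2

def blowup(t, s):
    return abs(s[0]) - 1e6
blowup.terminal = True

s0 = [1.5, 2.0, 0.0]                                   # eps(0)*x1(0) = 3 > 2
for damped in (False, True):
    sol = solve_ivp(sys, [0.0, 20.0], s0, args=(damped,), events=blowup, max_step=1e-3)
    print("damped" if damped else "plain", "t_end =", sol.t[-1], "max|x1| =", np.abs(sol.y[0]).max())
```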
Observer Backstepping
Can the success with the preceding simple example be extended
to other nonlinear systems for which exponentially convergent
observers are available? A major breakthrough in this direction is
due to Marino and Tomei and their method of filtered
transformations, as you can see from their two papers at this
conference and the references therein.
Also presented at this conference is an alternative path, via observer backstepping, by Kanellakopoulos et al. Let's follow this path and design an observer-based feedback for the plant
$$\dot{x}_1 = x_2 + \varphi_1(y), \qquad \dot{x}_2 = x_3 + \varphi_2(y), \qquad \dot{x}_3 = u, \qquad y = x_1,$$
in the case when y is required to track a given signal y_r with known ẏ_r and ÿ_r. In the above system the nonlinearities depend only on the output, and a Krener-Isidori type observer is available:
$$\dot{\hat{x}}_1 = \hat{x}_2 + \varphi_1(y) + k_1(\hat{x}_1 - y), \qquad \dot{\hat{x}}_2 = \hat{x}_3 + \varphi_2(y) + k_2(\hat{x}_1 - y), \qquad \dot{\hat{x}}_3 = u + k_3(\hat{x}_1 - y).$$
Note that, after the cancellation of φ₁(y) and φ₂(y), this results in a linear estimation error system ε̇ = Aε, where ε = x − x̂ and A is made to satisfy PA + AᵀP = −I by the choice of k₁, k₂ and k₃.
We are now prepared for an observer-based backstepping design.
Its idea is to perform backstepping in the observer and, at the
same time, to account for the destabilizing effect of the estimation
error by designing nonlinear damping terms.
Step 1. For the tracking error z₁ = y − y_r we get
$$\dot{z}_1 = x_2 + \varphi_1(y) - \dot{y}_r = \hat{x}_2 + \varepsilon_2 + \varphi_1(y) - \dot{y}_r.$$
In this equation we let x̂₂ = z₂ + α₁ and design α₁ = −z₁ − φ₁(y) + ẏ_r, which is implementable and yields
$$\dot{z}_1 = -z_1 + z_2 + \varepsilon_2, \qquad \dot{z}_2 = \dot{\hat{x}}_2 - \dot{\alpha}_1 = \hat{x}_3 + \beta_2 - \frac{\partial\alpha_1}{\partial y}\,\varepsilon_2,$$
where the two important terms in the ż₂-equation are given explicitly, while all other terms are incorporated in β₂, which is known and implementable.
Step 2. Now we let x̂₃ = z₃ + α₂ and use α₂ to stabilize the z₂-equation with V₂ = ½z₂² + εᵀPε. Since we must now counteract the effect of ε₂, which multiplies a nonlinear term, we let
$$\alpha_2 = -z_2 - \beta_2 - z_2\omega_2,$$
where z₂ω₂ will be a nonlinear damping term. To design ω₂ we substitute α₂ into V̇₂ and make the 2 × 2 matrix positive definite by the choice ω₂ = (∂α₁/∂y)². This completes the design of α₂. Now V̇₂ is made of a negative definite part plus the term z₂z₃, which will be absorbed in the final step.
Step 3. Our final task is to design a feedback law for u to globally stabilize the z-system
$$\dot{z}_1 = -z_1 + z_2 + \varepsilon_2, \qquad \dot{z}_3 = \dot{\hat{x}}_3 - \dot{\alpha}_2 = u + \beta_3 - \frac{\partial\alpha_2}{\partial y}\,\varepsilon_2,$$
where all the terms incorporated in β₃ are known and implementable. Now it should not be difficult to see how to choose u to make the derivative of V₃ = V₂ + ½z₃² negative. This feedback control adds a nonlinear damping term to counteract the effects of peaking on z₃.
We have thus completed one more backstepping design which
uses an existing observer and counteracts its destabilizing effects
by nonlinear damping terms. It is fascinating that similar design
procedures are being developed for systems with unknown
parameters (adaptive) and with interval uncertainties (robust). I
hope that this lecture will motivate you to read about these designs
in the 1991 CDC papers by Marino and Tomei and Kanellakopoulos et al.
More Joy of Feedback
This lecture will have accomplished one of its goals if after it
you share not only my fear of peaking but also my joy of being
able to overcome it, at least for some classes of nonlinear systems.
Of course, these are not the only classes of systems in which we
need to counteract peaking, nor is the peaking phenomenon the
only danger in nonlinear control. There are many more tough
feedback problems ahead, and there will be more joy of inventing
methods to solve them. I am using the word inventing, rather than
developing, because I hope that the spirit of invention will
continue to grow in our profession. Some simple feedback
inventions, like backstepping and saturating controls, may have
far-reaching practical consequences and stimulate the development
of new theories. These theories are likely to encompass practically
important classes of nonlinear systems and increase the impact of
our results. We will, of course, widen their applicability by
approximations, simplifications, robustifications and other
marvelous arts of engineering.
Acknowledgment

The author thanks the 1991 CDC organizers and the generous 1991 CDC host Derek Atherton for providing a sound tape of the lecture which, after the ear-splitting efforts of Robin Jenneve, Claudia Leufkens, Heather Simioni and Dawn Zelmanowitz, was transcribed into this text.
Petar V. Kokotović received graduate degrees from the University of Belgrade, Yugoslavia, in 1962, and from the Institute of Automation and Remote Control, U.S.S.R. Academy of Sciences, Moscow, in 1965. From 1966 until March 1991, he was with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at the University of Illinois, Urbana, where he held the endowed Grainger Chair. In April 1991 he joined the Electrical and Computer Engineering Department of the University of California, Santa Barbara, as a co-director of the newly formed multidisciplinary Center for Control Engineering and Computation. In 1990 he received the IFAC Quazza Medal, the highest award given by the International Federation of Automatic Control, which has been awarded triennially since 1981.