Spacecraft Momentum Management and
Attitude Control using a Receding Horizon
Approach
James Fisher, Raktim Bhattacharya, and S.R. Vadali
This paper presents the application of the Receding Horizon approach
to the spacecraft momentum management problem. Attitude control of a
satellite in a circular orbit under the influence of constant disturbances is
considered. Control designs are demonstrated for a simplified planar case
as well as for the full dynamics of the spacecraft. The response of the reced-
ing horizon controller is compared to a controller derived from Lyapunov
conditions. The receding horizon approach demonstrates its ability to align
the spacecraft with a torque equilibrium attitude and regulate the control momentum
even when torque and rate constraints are included in the problem.
I. Introduction
Spacecraft attitude control has long been an important area of research and has received a
great deal of attention. Spacecraft dynamics provide a rich area of study because their
behavior is nonlinear and they must in many cases operate over a large range of operating
conditions. A common choice of actuator to control the satellite is the momentum exchange
device. Momentum exchange devices such as control moment gyros (CMG’s) or reaction
wheels maintain control of the satellite by transferring momentum from the spacecraft so
that it follows a designed trajectory. If a constant disturbance such as an aerodynamic torque
acts on the spacecraft, the momentum in the control can build up as a result of the constant
control torque required to reject the disturbance. These momentum exchange devices can be
desaturated by the alignment of the spacecraft with a torque equilibrium attitude (TEA).
When such an attitude is obtained, the gravity gradient torque and gyroscopic torques
balance the disturbance torque to create an equilibrium attitude. No control input is required
to maintain this attitude, allowing the momentum to desaturate. For a spacecraft in a
AIAA Guidance, Navigation and Control Conference and Exhibit, 20–23 August 2007, Hilton Head, South Carolina
AIAA 2007-6442
Copyright © 2007 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
circular orbit this attitude is constant throughout the orbit and can therefore be maintained
with no control effort. Traditional approaches to this problem, such as those in Refs. 1–3, involve
linearization of the dynamics about the local-vertical-local-horizontal (LVLH) frame. The
controllers by Wie (Ref. 1) utilize linearized control laws to seek the TEA for the pitch and yaw
axes and maintain bounded oscillation in the roll axis in the face of constant and periodic
disturbances. Similar results are shown in Sunkel et al. (Ref. 4). In these results, an LQR design
of the linearized system is used to determine a control law that can align the satellite with
the TEA and unload the momentum in the system. While these control laws are optimal,
they are optimal only about the linearization point. This means that unless the equations
of motion are linearized about the TEA, the control is no longer optimal. Nonlinear control
methods have also been applied to this problem with some success. Several nonlinear control
laws are presented in Vadali and Oh (Ref. 5) which are valid within some region about the equilibrium.
These control laws are stable when no disturbance is present. No proof is presented for the
stability of the system subject to constant disturbances, although system responses show
bounded behavior in these cases. A feedback linearization technique was also utilized by
Singh and Bossart (Ref. 6) to perform momentum management. More recently, work has been
conducted utilizing classical adaptive control as well as an adaptive approach which utilizes
neural networks (Refs. 7, 8). These approaches are able to guarantee the stability of the system
by tuning parameters to account for the imperfect knowledge of the system, but have no
guarantee of optimality.
One approach that has not yet appeared in the literature on momentum
management of spacecraft is receding horizon control. Receding horizon control, also
known as model predictive control, has been popular in the process control industry for
several years (Refs. 9, 10). It is based on the simple idea of repetitively solving an optimal control
problem and applying the first input of the optimal command sequence. The
repetitive nature of the algorithm results in a state dependent feedback control law. The
attractive aspect of this method is the ability to incorporate state and control limits as
hard or soft constraints in the optimization formulation. When the model is linear, the
optimization problem is quadratic if the performance index is expressed via an L2-norm, or
linear if expressed via an L1/L∞-norm. Issues regarding feasibility of online computation,
stability and performance are largely understood for linear systems and can be found in Refs. 11–14.
For nonlinear systems, stability of RHC methods is guaranteed by using an appropriate
control Lyapunov function (Refs. 15, 16). For a survey of the state of the art in nonlinear receding
horizon control, the reader is directed to Ref. 17.
This work will present a receding horizon design for attitude and momentum management
control of a spacecraft and will examine simulated responses for the spacecraft when in the
presence of unknown constant disturbances. Several implementations will be introduced and
the results will be compared to a nonlinear controller presented by Vadali and Oh (Ref. 5).
II. Equations of Motion
A. Full dynamics
We will consider a rigid-body model of a satellite under the control of a momentum exchange
device, such as reaction wheels. For the purposes of this work, it is not important to discuss
the type of device and its properties. For our equations of motion, we will define several
sets of coordinate axes. The axes we will define are the inertial reference frame (ECI) which
will be denoted by {n}, the local vertical local horizontal orbital frame (LVLH) which will
be denoted by {o}, and the spacecraft body frame which will be denoted by {b}. There are
several definitions of an LVLH frame. The definition used in this work follows the diagram
shown in figure 1. The o1 axis corresponds to the radial direction of the spacecraft from the
Figure 1. Local vertical, local horizontal frame definition
body it is orbiting, the o2 axis is along the velocity vector (for circular orbits), and the o3
axis is normal to the orbital plane. For this problem, these three frames are the only frames
required. To analyze the attitude of the spacecraft body frame, we will define the following
transformation that relates the coordinate axes of {b} to those of {o}. To represent the
attitude of {b} with respect to {o}, a 3-2-1 Euler angle sequence is used. If we denote the
first rotation angle (about the o3 axis) as θ3, the next rotation angle (about the new y-axis)
as θ2, and the final rotation angle (an x-axis rotation) as θ1, we can express the direction
cosine matrix defined by this Euler angle set as
$$
C =
\begin{bmatrix}
\cos\theta_2\cos\theta_3 & \cos\theta_2\sin\theta_3 & -\sin\theta_2 \\
\sin\theta_1\sin\theta_2\cos\theta_3 - \cos\theta_1\sin\theta_3 & \sin\theta_1\sin\theta_2\sin\theta_3 + \cos\theta_1\cos\theta_3 & \sin\theta_1\cos\theta_2 \\
\cos\theta_1\sin\theta_2\cos\theta_3 + \sin\theta_1\sin\theta_3 & \cos\theta_1\sin\theta_2\sin\theta_3 - \sin\theta_1\cos\theta_3 & \cos\theta_1\cos\theta_2
\end{bmatrix}
\tag{1}
$$
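As a numerical sanity check on Eq. (1), the direction cosine matrix can be built and verified to be a proper rotation. The sketch below is our own illustration, not material from the paper; the function name is ours and NumPy is assumed:

```python
import numpy as np

def dcm_321(t1, t2, t3):
    """3-2-1 Euler-sequence direction cosine matrix of Eq. (1),
    mapping LVLH {o} components into body {b} components."""
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c3, s3 = np.cos(t3), np.sin(t3)
    return np.array([
        [c2 * c3,                c2 * s3,                -s2],
        [s1 * s2 * c3 - c1 * s3, s1 * s2 * s3 + c1 * c3,  s1 * c2],
        [c1 * s2 * c3 + s1 * s3, c1 * s2 * s3 - s1 * c3,  c1 * c2],
    ])
```

Any matrix produced this way should satisfy C Cᵀ = I and det C = 1, which is a quick way to catch transcription errors in the entries.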
For attitude representation and control of the dynamics we will need to express the evolution
of these angles as a function of time and their relationship to the angular velocity of the
body. This can be obtained from the angular velocity of {b} with respect to {o} as expressed
in the {b} frame.
$$
\omega - \omega_f =
\begin{bmatrix}
1 & 0 & -\sin\theta_2 \\
0 & \cos\theta_1 & \sin\theta_1\cos\theta_2 \\
0 & -\sin\theta_1 & \cos\theta_1\cos\theta_2
\end{bmatrix}
\begin{bmatrix}
\dot\theta_1 \\ \dot\theta_2 \\ \dot\theta_3
\end{bmatrix}
\tag{2}
$$
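The kinematic inversion needed to recover the Euler angle rates can be sketched as follows (a hypothetical helper of ours, assuming NumPy); the determinant of the kinematic matrix is cos θ2, which is what produces the singularity at θ2 = π/2:

```python
import numpy as np

def euler_rates(theta, w_rel):
    """Solve Eq. (2) for the 3-2-1 Euler angle rates, given the
    body-frame relative angular velocity w_rel = w - w_f.
    The kinematic matrix is singular at theta[1] = pi/2."""
    t1, t2, _ = theta
    B = np.array([
        [1.0,  0.0,         -np.sin(t2)],
        [0.0,  np.cos(t1),   np.sin(t1) * np.cos(t2)],
        [0.0, -np.sin(t1),   np.cos(t1) * np.cos(t2)],
    ])
    return np.linalg.solve(B, w_rel)
```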
With this information it is possible to write the equations of motion in terms of derivatives
of these Euler angles. To find the kinematic relationship governing the evolution of the
angles, we must invert the matrix. This inversion exhibits a singularity at θ2 = π/2. Because we are
concerned with motion about the LVLH frame, we need not worry about singularities (unless
our control methodology fails, and then we have bigger problems). The term, ω, represents
the angular velocity of the spacecraft body frame, {b}, with respect to the inertial frame,
{n}, and ωf represents the angular velocity of the LVLH frame, {o}, with respect to {n}.
The difference between these angular velocity vectors (ω − ωf) is the angular velocity of {b}
relative to {o}. In this work, the orbit is assumed to be circular with orbit rate, n. The
magnitude of the angular velocity ωf is the orbit rate. When expressed in the {b} frame, this can be written
as
$$
\omega_f = C \begin{bmatrix} 0 \\ 0 \\ n \end{bmatrix} \tag{3}
$$
In this work, we will not usually specify the frame in which a vector is represented. Instead,
we will write equations that are valid in any frame. The important consideration will be
the frame with respect to which a time derivative is taken. When we take the derivative of
ωf, we take this derivative in the {o} frame, as this vector is constant in this frame. The
derivative of ω, written ω̇, will be taken with respect to the spacecraft body frame, {b}. The term
ω̃ represents the skew-symmetric matrix form of the cross product, or
$$
\tilde\omega =
\begin{bmatrix}
0 & -\omega_3 & \omega_2 \\
\omega_3 & 0 & -\omega_1 \\
-\omega_2 & \omega_1 & 0
\end{bmatrix}
\tag{4}
$$
The dynamics of the spacecraft are obtained from the simple rigid body equations of motion
$$
I\dot\omega = -\omega \times (I\omega + h) + T_{gg} - u + T_d \tag{5}
$$
$$
\dot h = u \tag{6}
$$
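Equations (5) and (6) can be evaluated numerically as below (our own sketch, assuming NumPy; all quantities are expressed in body axes):

```python
import numpy as np

def body_dynamics(w, h, I, Tgg, u, Td):
    """Rigid-body rates from Eqs. (5)-(6):
    I w_dot = -w x (I w + h) + Tgg - u + Td,   h_dot = u."""
    w_dot = np.linalg.solve(I, -np.cross(w, I @ w + h) + Tgg - u + Td)
    h_dot = u
    return w_dot, h_dot
```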
The rigid body moment of inertia of the spacecraft is denoted by I. The control is accom-
plished through the torque, u. The term, Td, represents a disturbance torque. We will
discuss this term in more detail later. The angular momentum of the control is given by h.
Because the spacecraft is in an orbit about a celestial body, it will have gravitational forces
action on it. Because the satellite is not spherical, there will be a resulting gravity gradient
torque given by Tgg. This torque can be written as
$$
T_{gg} = 3 n^2\, o_1 \times I o_1 \tag{7}
$$
The vector, o1, represents the unit vector from the center of the coordinate axis, {n}, to
the center of mass of the satellite. In the LVLH frame, this corresponds directly to the
coordinate axis, o1. When Tgg is expressed in the body frame, o1 can be written as the first
column of the C matrix, or C1.
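In body axes, Eq. (7) therefore needs only the first column of C. A minimal sketch (ours, assuming NumPy):

```python
import numpy as np

def gravity_gradient(n, I, C):
    """Gravity-gradient torque of Eq. (7) in body axes: o1 expressed
    in {b} is the first column of C, and Tgg = 3 n^2 o1 x (I o1)."""
    o1_b = C[:, 0]
    return 3.0 * n**2 * np.cross(o1_b, I @ o1_b)
```

When o1 lies along a principal axis the torque vanishes, consistent with the equilibrium discussion in section III.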
B. Planar Motion
To demonstrate some of the properties of the control law, a simplification of the dynamics
will be created by restricting the motion of the satellite to lie in the orbit plane. If the
spacecraft is restricted to move solely in a plane and its orbit is assumed to be circular, the
orientation of the spacecraft with respect to the LVLH frame can be represented by a single
angle, θ, and its relative angular velocity can be described by the derivative of this angle, θ̇.
The total angular velocity of the spacecraft is given by ω = (n + θ̇)o3. Because the problem
is restricted to be planar, the total angular momentum of the satellite as well as that of the
control lie along the same axis as the angular velocity, which greatly simplifies the equations
of motion. This means that
$$
\omega \times (I\omega + h) = 0 \tag{8}
$$
Furthermore, the gravity gradient torque has the form
$$
T_{gg} = -\tfrac{3}{2} n^2 (I_2 - I_1)\sin(2\theta)\, o_3 \tag{9}
$$
These simplifications result in the following equations of motion.
$$
I_3 \ddot\theta = -\tfrac{3}{2} n^2 (I_2 - I_1)\sin(2\theta) - u + d \tag{10}
$$
$$
\dot h = u \tag{11}
$$
In the above expression, the disturbance torque is represented by d. Like the equations
of motion for the full system, these display oscillatory behavior. These planar equations
give more insight into the dynamic equations. They show that for the planar case, there is
no damping in this system and that the attitude dynamics exhibit an oscillation at some
frequency proportional to the orbit rate of the satellite.
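The undamped oscillation can be seen by integrating Eqs. (10)-(11) with no control and no disturbance. The fixed-step RK4 propagator below is our own sketch (assuming NumPy and normalized units), not part of the paper:

```python
import numpy as np

def planar_rhs(x, n, I1, I2, I3, u=0.0, d=0.0):
    """Planar dynamics of Eqs. (10)-(11); state x = [theta, theta_dot, h]."""
    theta, theta_dot, _h = x
    theta_ddot = (-1.5 * n**2 * (I2 - I1) * np.sin(2.0 * theta) - u + d) / I3
    return np.array([theta_dot, theta_ddot, u])

def propagate(x0, n, I1, I2, I3, dt, steps):
    """Fixed-step RK4 integration of the uncontrolled, undisturbed system."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = planar_rhs(x, n, I1, I2, I3)
        k2 = planar_rhs(x + 0.5 * dt * k1, n, I1, I2, I3)
        k3 = planar_rhs(x + 0.5 * dt * k2, n, I1, I2, I3)
        k4 = planar_rhs(x + dt * k3, n, I1, I2, I3)
        x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x
```

For a gravity-gradient-stable configuration (I2 > I1 here), an initial attitude offset oscillates without decay and the control momentum stays at zero.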
III. Torque Equilibrium Attitudes
The objective of the control strategy is to automatically find equilibria of the satellite
dynamics which we will term torque equilibrium attitudes (TEA’s). These attitudes are
orientations where the spacecraft achieves the desired angular velocity (ω = ωf ) and all
external torques are balanced with gyroscopic torques. When there are no disturbances
acting on the satellite , this can occur in several scenarios. The first is when h = 0 and
ω = ωf . Because the angular momentum of the control devices is zero, this also implies that
u = 0 as well. When this is the case (5) becomes
$$
\omega_f \times I\omega_f = T_{gg} \tag{12}
$$
This scenario occurs when the principal axes of the spacecraft are aligned with the LVLH
frame. When this occurs, both terms in the above expression become zero. A second case
occurs when ωf = ω and u ≠ 0. For this case we have the condition that (recall u = ḣ)
$$
\dot h + \omega_f \times h = -\omega_f \times I\omega_f + T_{gg} \tag{13}
$$
If the principal axes are again aligned with LVLH, then the right-hand side of the expression
vanishes. The angular momentum of the control in the direction parallel to the orbit rate is
constant, and the angular momenta in the directions orthogonal to this rate oscillate at orbit
rate. Often the principal axes of a spacecraft may not be known exactly. If we attempt to
align the spacecraft body axis with the LVLH frame, then we obtain the following differential
equations
$$
\begin{aligned}
\dot h_1 - n h_3 &= -4 n^2 I_{23} \\
\dot h_2 &= 3 n^2 I_{13} \\
\dot h_3 + n h_1 &= n^2 I_{12}
\end{aligned}
\tag{14}
$$
If the inertia matrix is diagonal, then the spacecraft principal axes are aligned with the LVLH
axes. If this is the case h2 remains constant and h1 and h3 oscillate at orbit rate about zero.
When this is not the case, h2 grows linearly without bound and the other momenta oscillate
about a nonzero set point. This simple analysis shows us that even when no disturbance is
present, if we do not align the principal axes properly, it is possible for the control momentum
to grow without bound. For this reason, it is necessary to have a control that is able to find
a TEA even when disturbances are not present.
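Equations (14) are linear and can be solved in closed form. The sketch below is our own derivation from Eq. (14) (assuming NumPy), and makes the behavior explicit: h2 grows linearly when I13 ≠ 0, while h1 and h3 oscillate at orbit rate about the constant offsets h1* = n I12 and h3* = 4 n I23:

```python
import numpy as np

def lvlh_hold_momentum(t, n, I12, I13, I23, h0=(0.0, 0.0, 0.0)):
    """Closed-form solution of Eq. (14) (body axes held on LVLH,
    no disturbance), starting from control momentum h0 at t = 0."""
    h10, h20, h30 = h0
    h2 = h20 + 3.0 * n**2 * I13 * t          # secular (linear) growth
    h1s, h3s = n * I12, 4.0 * n * I23        # constant offsets
    a0, b0 = h10 - h1s, h30 - h3s            # oscillatory part at orbit rate
    h1 = h1s + a0 * np.cos(n * t) + b0 * np.sin(n * t)
    h3 = h3s - a0 * np.sin(n * t) + b0 * np.cos(n * t)
    return h1, h2, h3
```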
When constant disturbances are present in the system, the control momentum will continue
to build even when the principal axes are aligned with the LVLH frame. More specifically,
the control momentum in the direction parallel to the orbit rate will grow linearly
without bound. This type of behavior can result in the destruction of the system under
control. If the control input as well as the angular momentum of the control are both zero, the
equilibrium attitude satisfies the relationship
$$
\omega_f \times I\omega_f - T_{gg} - T_d = 0 \tag{15}
$$
An equilibrium condition exists for this expression provided the magnitude of the disturbance
torque Td is not so large that it overcomes the magnitudes of the two other terms. The
magnitude of the term ωf × Iωf is bounded because the satellite orbit rate is constant. The
gravity gradient torque magnitude depends upon the inertial properties of the satellite and
its orientation. This term is bounded as well. If the disturbance torque exceeds these
bounds, then the satellite will tumble without use of the control. If this is the case, a passive
momentum management strategy is not possible and another means of dumping momentum
must be considered.
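For the planar case, condition (15) reduces to balancing Eq. (10) with u = 0, i.e. (3/2) n²(I2 − I1) sin 2θ = d, and it exhibits exactly the existence bound just described. A sketch (ours, assuming NumPy):

```python
import numpy as np

def planar_tea(n, I1, I2, d):
    """Planar torque-equilibrium attitude: solves
    (3/2) n^2 (I2 - I1) sin(2 theta) = d for theta.
    Returns None when |d| exceeds the available gravity-gradient
    torque, in which case no passive equilibrium exists."""
    bound = 1.5 * n**2 * abs(I2 - I1)
    if bound == 0.0:
        return 0.0 if d == 0.0 else None
    if abs(d) > bound:
        return None          # satellite would tumble without control
    return 0.5 * float(np.arcsin(d / bound)) * float(np.sign(I2 - I1))
```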
IV. Receding Horizon Control
A. Stability of RHC Methods
Early work on stability of RHC algorithms (Refs. 18, 19) imposes a terminal boundary condition on the
state as
$$
x(t_0 + T) = 0
$$
Since a nonlinear optimization problem with an equality terminal constraint is computationally
demanding, closed-loop stability is ensured in Ref. 20 by imposing a terminal inequality constraint.
The constraint requires x(t0 + T ) to enter a suitable neighborhood of the origin. Once such
a neighborhood is reached, the receding horizon controller is switched to a locally stabilizing
linear controller. A different approach has been proposed in21 where the receding horizon
controller is obtained by solving a finite horizon problem with quadratic terminal state
penalty
$$
\Phi(x, t)\Big|_{t_0+T} = a\, x^T P x \Big|_{t_0+T}
$$
for some a, P > 0. In the more recent work of De Nicolao et al. (Ref. 22), stability of the receding
horizon controller is guaranteed by using a possibly non-quadratic terminal penalty, which is
the cost incurred if a locally stabilizing linear control law is applied at the end of the horizon.
The linear control law ensures exponential stability of the equilibrium point at the origin,
and it is assumed that the region of attraction of the linear controller is reachable within the
horizon length. This is different from Ref. 20 in that the linear control law is never applied; it is
used just to compute the terminal cost. In another method, proposed by Primbs et al. (Ref. 15), first
a globally stabilizing control law is achieved by finding a global control Lyapunov function
(CLF). Stability of the receding horizon controller is ensured by including state constraints
that make the derivative of the CLF negative along the receding horizon trajectory and also
make the value of the CLF at the end of the horizon less than that obtained when the controller
derived from the CLF is applied. Unfortunately, the path and terminal constraints on the
optimization make it computationally intensive.
As mentioned earlier, our implementation of receding horizon control for the momentum
management problem is based on the work by Jadbabaie, Yu and Hauser in Ref. 16. This method
is a hybrid of the methods proposed by Primbs et al. and De Nicolao et al. As in Ref. 15, it is
assumed that a suitable CLF already exists. The CLF is used to replace the state inequality
constraint with a terminal cost. The terminal cost is the cost incurred if the controller from
the CLF is applied at the end of the horizon. The CLF can be obtained from the controller
in Vadali and Oh (Ref. 5). In theory, as in Ref. 22, the CLF-based controller is never implemented; it is
used only to compute the terminal cost.
B. Momentum Management in RHC
To solve the momentum management problem utilizing the RHC framework, we will need to
define a cost function. This cost function is composed of an integral portion and a terminal
portion. For the full system, the cost function is given by
$$
J = \int_t^{t+T} \left( q(x) + u^T u \right) d\tau + V_f \tag{16}
$$
The terminal portion (Vf ) is the CLF that guarantees stability and q(x) is a positive quantity
that serves as the trajectory cost associated with the state. The time T is the horizon length
(not the length of time that the controller will be implemented). For this implementation,
this will be given by the baseline controller or any Lyapunov function that satisfies
$$
\inf_u \frac{\partial V}{\partial x}\, \dot x \le -q(x) \tag{17}
$$
where q(x) is the portion of the integral cost corresponding to the state vector as defined
above. If the system is affine in the control and the partial derivative of the CLF with
respect to x does not contain any control terms, then it is trivial to show that for unbounded
u, the RHC methodology can stabilize the system. For example, if Vf = xᵀQx and ẋ =
f(x) + g(x)u, where the system is controllable and g(x) is nonsingular, then
$$
\frac{\partial V}{\partial x}\, \dot x = x^T Q f(x) + x^T Q g(x) u \tag{18}
$$
and u must satisfy the constraint
$$
x^T Q g(x) u \le -q(x) - x^T Q f(x) \tag{19}
$$
For Q and g(x) nonsingular, a value of u can always be selected to make this expression true.
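One concrete way to pick such a u is the least-norm input that satisfies inequality (19), in the spirit of min-norm CLF controllers. The sketch below is our own illustration, not the paper's controller; f, g, and q are user-supplied callables and NumPy is assumed:

```python
import numpy as np

def min_norm_clf_control(x, Q, f, g, q):
    """Smallest-norm u satisfying the CLF decrease condition (19):
    x^T Q g(x) u <= -q(x) - x^T Q f(x)."""
    b = g(x).T @ Q.T @ x            # constraint normal, (x^T Q g(x))^T
    c = -q(x) - x @ (Q @ f(x))      # required right-hand side
    if c >= 0.0:
        return np.zeros_like(x)     # u = 0 already satisfies the bound
    return (c / (b @ b)) * b        # active-constraint least-norm input
```

With g(x) nonsingular and Q positive definite, b ≠ 0 whenever x ≠ 0, so the division is well posed away from the origin.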
The dynamics of the satellite can be put into this form for both the planar and full system
cases. Therefore, a choice of Vf in the above manner will guarantee stability. For the planar
case, we choose
$$
q(x) = q_\theta\, \theta^2 + q_h\, h^2 \tag{20}
$$
$$
V_f(x) = q_{V1}\, \theta^2 + q_{V2}\, h^2
$$
Because the TEA location is unknown in planar examples, we do not include it in the cost
function or terminal cost as this would force the control to attempt to converge to an incorrect
TEA. The results for this case are generated in this manner. For the full system, the above
quantities are chosen as
$$
q(x) = q_\theta\, \theta^T \theta + q_{\dot\theta}\, \dot\theta^T I \dot\theta + q_h\, h^T h \tag{21}
$$
$$
V(x) = q_{V1}\, \theta^T \theta + q_{V2}\, \dot\theta^T I \dot\theta + q_{V3}\, h^T h
$$
where θ = [θ1 θ2 θ3]ᵀ, θ̇ is its derivative, and V is the terminal cost. For the RHC implementation,
the control obtained from optimization of J is implemented for a time ∆ < T. For
the examples in this paper, ∆ is set to be about 1/6 of an orbit.
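The receding-horizon mechanics just described, i.e. solve over [t, t+T], implement only the first ∆ of the solution, then re-solve from the new state, can be sketched schematically as follows (our illustration; the OCP solver and the plant propagator are abstracted as callables):

```python
import numpy as np

def rhc_loop(x0, solve_ocp, step, T, delta, n_iters):
    """Schematic receding-horizon loop.  solve_ocp(x, T) returns the
    open-loop optimal input over the horizon; step(x, u, delta)
    propagates the plant over the first delta only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        u_opt = solve_ocp(x, T)       # finite-horizon optimal control
        x = step(x, u_opt, delta)     # implement only the first segment
    return x
```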
C. Implementation Details
A common practice for solving optimal control problems is to convert them into parameter
optimization problems. There are several methods available for conversion.23 Numerical
integration, collocation, direct transcription and differential inclusion are examples of these
conversion methods. Most of these techniques are similar in nature and differ by what is
guessed (state or control), how the integration is carried out (implicit or explicit), and the
order of integration. This process of conversion is called “suboptimal” control because the search
is restricted to a particular finite-dimensional subspace of the control space.
Figure 2. Transcription of OCP to NLP.
For our RHC implementation, we have used B-splines24 to parameterize the unknown
trajectories. The optimal control problem (OCP) is then transcribed into a nonlinear pro-
gramming problem (NLP) with respect to these parameters. Figure 2 illustrates the process
of transcribing optimal control problems to nonlinear programming problems. The tran-
scription is achieved in the following manner:
• The unknown trajectories are parameterized as B-Splines
• The optimization problem is then written in terms of the B-Spline coefficients αk.
• Costs and constraints are evaluated and enforced at the collocation points.
• The collocation points are different from the break points. In general, the collocation
points are more dense than the break points. The optimal set of collocation points can
be determined from the order of the spline using the Gaussian Quadrature formula.
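The parameterization in the bullets above can be sketched with a direct Cox-de Boor evaluation (our own illustration; OPTRAGEN's internals are not reproduced here). A candidate trajectory is a linear combination of B-spline basis functions with coefficients αk, evaluated at the collocation points:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of order k
    (degree k-1) evaluated at parameter t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] > knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def eval_trajectory(alphas, knots, k, taus):
    """Candidate trajectory at collocation points taus, as a linear
    combination of B-spline basis functions with coefficients alphas."""
    return np.array([sum(a * bspline_basis(i, k, tau, knots)
                         for i, a in enumerate(alphas)) for tau in taus])
```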
The resulting NLP was solved using SNOPT (Ref. 25), a commercially available NLP solver. Numerical
optimization for the momentum management problem was done using a freely available
software package called OPTRAGEN (Ref. 26), which performs the transcription from OCP to NLP
automatically.
V. Baseline Control Law
In this section, several of the control strategies presented by Vadali and Oh5 are presented.
This control implementation will be utilized as a comparison for the RHC implementation
developed in the previous section. For the sake of brevity, much of the detail in the development
of these laws will not be included; only the key results will be highlighted. The control laws
utilize a dynamic potential (Ref. 27) of the form
$$
\Phi = \tfrac{3}{2} n^2 C_1^T I C_1 - \tfrac{1}{2} n^2 C_3^T I C_3 - \tfrac{1}{2} n^2 (3 I_1 - I_2) \tag{22}
$$
This value is positive definite around stable spacecraft equilibria, but negative semi-definite
around unstable configurations. This potential is used to build a Lyapunov function from
which a control can be derived. When the configuration is stable, the control law is fairly
simple and is given by
$$
u = D(\omega - \omega_f) - \omega \times h - K h \tag{23}
$$
In the above expression, the matrices D and K = k1D are positive definite matrices (k1 is
a positive scalar). This control law does not depend on any system parameters and renders
the system asymptotically stable in terms of its angular velocity tracking and momentum.
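As printed in Eq. (23) (with K = k1 D), this law can be evaluated directly; the sketch below is ours, assuming NumPy:

```python
import numpy as np

def baseline_control(w, w_f, h, D, k1):
    """Stable-configuration baseline law of Eq. (23):
    u = D (w - w_f) - w x h - K h,  with K = k1 * D."""
    K = k1 * D
    return D @ (w - w_f) - np.cross(w, h) - K @ h
```

Note that it uses no inertia information, which is the sense in which it "does not depend on any system parameters."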
The stability of the system is not guaranteed in the face of disturbances. If the spacecraft
is to be aligned in a configuration that is inherently unstable, then a different control law
must be utilized. The control law requires the definition of some additional terms. The first
is a composite state vector, z, of the form
$$
z = (k+1)\, I (\omega - \omega_f) + k h \tag{24}
$$
where k is a positive scalar parameter. The momentum management portion of the control
law is designed as a first order stable system with a state given by
$$
x = h + G_1 \int (\omega - \omega_f)\, dt + G_2\, (\omega - \omega_f) + K \int h\, dt + \int (\omega \times h)\, dt \tag{25}
$$
and state evolution
$$
\dot x = -K_2 x \tag{26}
$$
In the above expressions, the matrix $G_1 = \frac{1}{k}\left(D + (k+1) K I\right)$, $G_2 = \frac{k+1}{k}\, I$, and K2
is a positive definite matrix which governs the dynamics of x. With the definition of these
terms, the stabilizing control law can now be defined. This is given by
$$
u = (k+1)\left( \omega \times I\omega + T_{gg} - I\dot\omega_f \right) + k G_1 (\omega - \omega_f) + k K h + k K_2 x - \omega \times h \tag{27}
$$
This control law stabilizes the spacecraft when the configuration to be stabilized is inherently
unstable. Again it should be noted that these control laws, while stable when no disturbance
is present, may not achieve stability in the face of disturbances. Furthermore, there is no way
to guarantee stability when there are control or state bounds. The control law also requires
two different formulations based on the orientation. This means that if it was desired for
the spacecraft to maneuver from a stable equilibrium to an unstable equilibrium, an entirely
different control law would be required.
VI. Results
In this section the simulated results obtained from the application of the receding horizon
control will be highlighted. The results presented in this section will be divided into two
sections. The first section will deal with the results of the receding horizon controller when
applied to the planar system. This will highlight some of the difficulties in the control strategy
and also show improvements that result from a sub-optimal implementation. Following this
demonstration, the control is applied to the full system dynamics.
A. Planar System
The simulation results for the planar system will be conducted for a stable system in a
circular orbit. The system will be normalized and scaled in time such that each orbital
period is given by 2π time units. The first implementation to be considered will attempt to
determine a control u ∈ U that minimizes the cost function with q(x) and Vf given by (20).
In this example, u will be modeled as a set of B-splines with no constraints or assigned
structure. When the disturbance on the system is known perfectly, the following results are
obtained. Figure 3 shows the response of θ and θ̇ when the system is under RHC control
and the disturbance is known perfectly. In these figures, the angle converges rapidly to the
steady state TEA. The rate converges to zero, meaning that the angular velocity of the body
converges to n. The momentum and control responses are shown in figure 4. The control
momentum used to maneuver the spacecraft to the TEA is removed within two orbits. The
Figure 3. Planar angle and angle rate response under RHC with known disturbance
control appears to contain some chatter, but this is in part due to the condensed time frame;
keep in mind that each unit on the time axis represents one entire orbit. These oscillations
will not be at a high enough frequency to excite structural modes, but this is nevertheless an
important consideration. Figures 3 and 4 demonstrate the response of the planar system
when the disturbance is known perfectly. When this is the case, the optimization performed
by the RHC utilizes correct system information, and so the response of the actual system
under the computed optimal control is very nearly identical to the expected response (with
some error coming from the spline approximations to the differential equations). When the
disturbance is not known exactly, the equality constraints enforced by the control law are
inaccurate and the control resulting from the optimization does not drive the system as
expected. Figure 5 shows the system response when the disturbance is unknown (for this
case the disturbance is 30% inaccurate). The unknown disturbance results in an oscillatory
behavior when no control structure is assumed. This occurs because of the error in the
acceleration constraint. The control input cannot keep the spacecraft at TEA because it is
calculated for an incorrect disturbance value. Between optimization steps, the disturbance
error causes the system to be driven from TEA. At each optimization step (i.e. at the end
of each horizon), the current condition is used to recalculate the control. The control drives
the system back towards TEA, but by the end of the horizon, the control effort decreases
allowing the system to be driven away from the TEA again. Thus a limit-cycle-like behavior
is created and the system attitude oscillates near the TEA. The momentum also exhibits this
same oscillation, although it is bounded. To remove the oscillatory behavior, an assumption
can be made about the structure of the control. Instead of searching for any control u ∈ U ,
Figure 4. Control momentum and control response under RHC with known disturbance
the following structure is assumed.
$$
u = k_\omega(t)\, \theta - k_h(t)\, h \tag{28}
$$
The optimization problem is now changed to an optimization over kω(t), kh(t) ∈ R. Assigning
a control structure is sub-optimal because this choice is a subspace of U . The control gains
will be modeled as B-splines, which implies continuity in the gains over each horizon. The
following figures will demonstrate the response of the system under this type of control.
Three cases will be examined. For the first case, no restrictions will be placed on the values
of the control gains kω and kh. For the second case, these values will have lower and upper
bounds, and for the third case the gains will be constant for each horizon (still maintaining
the bounds). Figure 6 shows the results from each of the control laws. The data is labeled by
the case to which it corresponds. For these cases, the actual disturbance is not known exactly.
From the response, it is easy to see that the limit-cycle-like behavior observed previously has
been alleviated. The simulated control gains for each of these cases are shown in figure 7. The
gains show that “unstable” values of each gain are required as the responses approach the
optimal. This is particularly true for kh. As a comparison of each of the control responses,
figure 8 compares various costs associated with each controller. The left-hand portion of the
figure shows the cost as incremented by the integral cost portion of the cost function. As
expected this builds throughout the maneuver and settles down as the control law converges.
Also as expected, the control law with no constraints on the gains (case 1) has the lowest
cost of each of the control laws. This is because it is closer to the optimal solution. This
figure illustrates the tradeoff in some sense between the complexity of the controller and
Figure 5. Angle and control momentum response of planar system under RHC with unknown disturbances
the optimality of the solution. When the controller is implemented, decisions have to be
made based on this information. The right-hand side of figure 8 shows the cost accumulated
during each horizon (including terminal cost). The figure shows that this cost decreases as
time progresses, a verification that the receding-horizon controller is doing its job.
Figure 6. Angle and control momentum response of planar system under RHC with unknown disturbances
Figure 7. Control gains kω and kh for planar system under RHC with unknown disturbances
Figure 8. Cost accrued and RHC cost for cases 1–3 with unknown disturbances
B. Full System
The next task is to apply this control strategy to the full spacecraft dynamics. This control
will be compared with the baseline nonlinear controller presented in section V. The
simulations are performed for a satellite in a circular orbit that is required to operate in a
locally unstable configuration. In other words, a perturbation from equilibrium would cause
the uncontrolled system to move away from its TEA and oscillate about another one. In this
particular example, the principal inertias of the spacecraft satisfy I1 > I2 > I3, whereas a
stable TEA requires I3 > I2 > I1. A constant disturbance acts on the dynamics and is
assumed to be known (or can be estimated). The simulation is initialized with the principal
axes aligned with LVLH, the angular velocity of the satellite at orbit rate, and the control
momentum set to zero. Figure 9 shows the attitude of the
spacecraft under receding horizon control as well as under the baseline control. Because the
equilibrium is unstable, the baseline control is that of equation (27). The solid line represents
the receding horizon response, the dashed line the baseline response, and the dotted line the
TEA attitude. The figure shows the
Figure 9. Euler angle and control momentum responses for RHC and nonlinear baseline controller under constant disturbance
convergence of each law to its TEA. An important property of the RHC law is that constraints
on the peak angular velocities, torques, etc., can be included while stability is still maintained.
Enforcing such limits on the baseline controller requires gain tuning. For example, limiting
the peak angular velocity of the baseline response without a priori knowledge of the maneuver
requires limiting the torque, and it is difficult to determine torque limits that yield the
desired limits on angular velocity.
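Including such limits directly in the horizon optimization can be sketched concretely. The snippet below is a hypothetical single-axis illustration (invented model, weights, and limit values; not the paper's formulation or solver): the control sequence over the horizon is the decision variable, torque limits enter as bounds, and the angular-rate limit enters as inequality constraints on the predicted trajectory, so both limits are respected by construction rather than by gain tuning.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-axis model: x = [theta, theta_dot], Euler step.
# All numbers are illustrative only, not taken from the paper.
dt, N = 0.2, 15
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.0, dt])
U_MAX, RATE_MAX = 0.05, 0.02          # torque and angular-rate limits

def rollout(u_seq, x0):
    """Predicted state trajectory over the horizon."""
    xs = [x0]
    for u in u_seq:
        xs.append(A @ xs[-1] + B * u)
    return np.array(xs)

def horizon_cost(u_seq, x0):
    """Quadratic running cost on attitude error and torque."""
    xs = rollout(u_seq, x0)
    return np.sum(xs[:, 0] ** 2) + 0.1 * np.sum(np.asarray(u_seq) ** 2)

x0 = np.array([0.3, 0.0])             # initial attitude offset, at rest
cons = [                              # rate limit as smooth inequalities
    {"type": "ineq", "fun": lambda u: RATE_MAX - rollout(u, x0)[:, 1]},
    {"type": "ineq", "fun": lambda u: RATE_MAX + rollout(u, x0)[:, 1]},
]
res = minimize(horizon_cost, np.zeros(N), args=(x0,),
               bounds=[(-U_MAX, U_MAX)] * N,
               constraints=cons, method="SLSQP")
xs = rollout(res.x, x0)
print("peak |u|:", np.max(np.abs(res.x)))
print("peak |rate|:", np.max(np.abs(xs[:, 1])))
```

In a full receding-horizon loop only the first element of `res.x` would be applied before re-solving from the measured state; the point here is simply that the limits are constraints of the solve itself.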
For RHC these are included as part of the on-line optimization. This is computationally
feasible because the optimization is performed only over a finite time horizon. The figure
also shows that the momentum accrued in the maneuver is dumped within two orbits. For
the baseline controller, the momentum is bounded but does not decay to zero. The ability
to ensure that the momentum decays to zero as part of the formulation is an additional
advantage of the optimization in the RHC methodology. Figure 10 shows the responses of
Figure 10. Euler angle rate responses for full system subject to constant disturbance
the Euler angle rates for the simulation. The RHC response is not as smooth
as that of the baseline controller, but if specifications require, the rates can be limited in
the optimization step. Finally, figure 11 depicts the control responses for each controller. As
Figure 11. Control torque responses for full system subject to constant disturbance
expected, the RHC law exhibits higher-frequency behavior; however, the scale of the graph
can be misleading. The responses exhibit frequencies on the order of 3 cycles per orbit; for
an orbit rate of n = 0.001 rad/s, this corresponds to roughly 5 × 10−4 Hz. If required, the
optimization framework also allows the slope of the torque to be limited. Finally, we can
compare the optimality of the receding horizon control law to that of the baseline controller.
Figure 12 shows the cost accrued by each control law in terms of the integral cost function.
The control momentum portion of the cost function is not included in this figure because
Figure 12. Accrued integral cost of each control law (momentum excluded)
for the baseline control, the cost would grow without bound. As expected, the RHC law
outperforms the baseline control law: the baseline control accrues over twice the cost of the
receding horizon control law.
These figures demonstrate that the RHC methodology is well suited to this type of
application when disturbance information is available. When no disturbance information is
available, it is possible to assume a control structure, as was done for the planar case. The
difficulty with this strategy is that it requires parameterizing a large number of variables:
a full state-feedback gain matrix has 27 elements (9 states and 3 input torques), which is
an extremely large number to parameterize. If the pitch and yaw/roll axes are decoupled,
this number reduces to 15 elements, still a significant computational burden. It is also
possible to estimate the disturbance and couple the estimate with the control law, but that
approach is not covered in this work.
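The element counts above can be verified with a quick shape check; the zero matrices below are simply placeholders for the gains that would have to be parameterized.

```python
import numpy as np

# Full state-feedback gain for the 3-axis system: 3 torques x 9 states.
K_full = np.zeros((3, 9))
print(K_full.size)                     # 27 gains

# Decoupled form: pitch (1 torque x 3 states) plus roll/yaw (2 x 6).
K_pitch = np.zeros((1, 3))
K_rollyaw = np.zeros((2, 6))
print(K_pitch.size + K_rollyaw.size)   # 15 gains
```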
VII. Conclusion
This work has presented the application of the receding horizon control framework to
the satellite attitude and momentum management control problem for a spacecraft in a
circular orbit under the influence of constant disturbances. The control methodology was
applied to a simplified planar version of the problem as well as to the full 3-axis problem.
The problem of uncertain disturbance magnitude was considered for the planar case. It
was demonstrated that when disturbances are unknown, a traditional RHC formulation may
exhibit oscillatory behavior about the equilibrium point. This was alleviated by assigning
a control structure to the RHC law: while this is necessarily sub-optimal, it removes the
oscillations about the equilibrium and allows the system to seek the TEA. For the full system,
the receding horizon controller was compared to a baseline nonlinear controller developed
through a Lyapunov framework. The receding horizon law demonstrated the ability to seek
the TEA and regulate the control momentum asymptotically to zero, even in the face of
various constraints on the performance such as a maximum Euler angle rate.