Dwell Scheduling Algorithms
for Phased Array Antenna
ZORAN SPASOJEVIC
SCOT DEDEO
REED JENSEN
Lincoln Laboratory
In a multifunctional radar performing searching and tracking
operations, the maximum number of targets that can be managed
is an important measure of performance. One way a radar
can maximize tracking performance is to optimize its dwell
scheduling. The problem of designing efficient dwell scheduling
algorithms for various tracking and searching scenarios with
respect to various objective functions has been considered many
times in the past and many solutions have been proposed. We
consider the dwell scheduling problem for two different scenarios
where the only objective is to maximize the number of dwells
scheduled during a scheduling period. We formulate the problem
as a distributed and a nondistributed bin packing problem
and present optimal solutions using an integer programming
formulation. Obtaining an optimal solution gives the limit of
radar performance. We also present a more computationally
friendly but suboptimal solution using a greedy approach.
Manuscript received February 23, 2010; revised August 12, 2010;
released for publication October 27, 2011.
IEEE Log No. T-AES/49/1/944339.
Refereeing of this contribution was handled by Y. Abramovich.
This work was sponsored by the Department of the Air Force under
Contract FA8721-05-C-0002.
Opinions, interpretations, conclusions and recommendations are
those of the authors and are not necessarily endorsed by the United
States Government.
Authors’ address: Lincoln Laboratory, Massachusetts Institute of
Technology, 244 Wood Street, Lexington, MA 02420-9108, E-mail:
0018-9251/13/$26.00 © 2013 IEEE
I. INTRODUCTION
The rapid maturation of phased array antenna
technology, now the norm in advanced radar, brings
with it a new set of scheduling challenges. With essentially
instantaneous slewing time from one position to
another, a phased array antenna is ideally suited for
operational use in a multi-target tracking environment.
It is tasked with providing periodic updates of
positions of targets being tracked for the purpose of
keeping track covariances within bounds. As a part
of radar resource management, one is challenged
with producing optimal or nearly optimal scheduling
algorithms that will take advantage of phased array
antenna properties and satisfy the demands of a
tracking system. Figure 1 gives a basic flow chart of
an antenna scheduler.
Fig. 1. Phased array antenna scheduler.
The engagement controller has a finite list of
tasks/waveforms it can select from to create a sublist
to send to the dwell scheduler. The list is selected
according to the rules of engagement and task
priorities. It may include waveforms designed for
searching, tracking, periodic calibration, special
modes, and so on. Each waveform is characterized
by the number of pulses, pulse intensity, and pulse
duration. The inner workings of the engagement
controller are beyond the scope of this paper. Some
aspects of it are considered in [1], [2]. Our focus
is on the dwell scheduler. Waveform characteristics
that are of concern to the dwell scheduler are their
priorities, pulse number, and pulse duration. Once
the list of tasks is communicated to the dwell
scheduler its function is to optimally allocate time for
execution of those tasks/dwells during a preassigned
scheduling interval. A dwell schedule is then sent
to the waveform generator which selects specific
waveforms for the antenna to transmit. The reflected
pulses are received by the antenna and sent to the
signal processor which extracts the returned echo
from targets. They are further processed by the
data processor and sent to the tracker. This process
continues until the end of a tracking scenario.
Since the inception of phased array radar, dwell
scheduling has been recognized as an important factor
42 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 49, NO. 1 JANUARY 2013
of the overall design. Some of the earliest dwell
scheduling algorithms are described in [1]. The two
typical classes of algorithms are template-based and
adaptive [3], [4], both of which frequently use greedy
scheduling strategies. Conflicts are resolved according
to the task priorities. Both classes of algorithms
have significant deficiencies (described in [1]).
Because templates are designed offline, the
template-based algorithms are unable to adapt to the
dynamically changing working environment of the
radar. Although a real-time template-based scheduling
algorithm is discussed in [5], the horizontal parameter
of that algorithm is difficult to determine. Of the two
classes of scheduling algorithms, adaptive algorithms
are considered more effective. Since each dwell
consists of a sequence of pulses, a pulse interleaving
algorithm was proposed [6] in order to increase the
number of tasks that can be handled by the system.
Another pulse interleaving technique is proposed in
[7] with the consideration of energy constraints, in
an attempt to make the interleaving technique of [6]
more applicable. There, the dwell scheduling strategy
is formulated as a nonlinear programming problem
and a genetic algorithm is used to obtain a solution.
More dwell scheduling algorithms based on temporal
reasoning and dynamic programming are proposed
in [8].
We elaborate now on the pulse interleaving
algorithms in [6] and [7]. A dwell consists of a
fixed number of coherent pulses. In a standard target
tracking mode of operation, the antenna sends a
pulse toward a target and then switches to a receive
mode in order to detect a reflected pulse. When
a reflected pulse is received another pulse is sent
toward the same target. The cycle continues until
the last reflected pulse is received. Then another
sequence of pulses is transmitted toward another
target in the same manner. The main idea behind the
pulse interleaving algorithms in [6] and [7] is the
following: while the radar is waiting to receive the
reflected pulse from a target, it can actually send a
pulse toward another target and only switch into a
receive mode when a return from the first target is
expected. This can be done assuming some a priori
knowledge of target range, which is not unreasonable.
In this way dwells for multiple targets can be executed
simultaneously, and in turn the number of targets that
a system can handle may increase by a significant
amount. Algorithms in [6] and [7] show how to
resolve the conflict between simultaneous transmits
and receives of pulses for multiple targets. In other
words, the algorithms propose a way to select targets
and transmit time so that such conflicts do not occur.
This interleaving strategy may be difficult to
realize in clutter environments. Clutter returns from
the direction of target B may be detected during the receive time of pulse returns from target A, and thus adversely affect target detection of target A.
Our proposal to alleviate this difficulty is to
interleave pulses sent toward the same target instead
of considering other targets. In this case, all the clutter
returns would be from pulses sent toward one target
and a signal processor can be designed to handle such
cases. An added benefit of this is that we can actually
compute the optimal pulse repetition interval (PRI),
while the algorithms in [6] and [7] only provide
suboptimal solutions. This is a fundamental
improvement over previous approaches. It also
significantly shortens the durations of dwells so that
more dwells can be executed in the same scheduling
interval and, hence, enables the system to handle a
larger number of targets. Derivation of an optimal
pulse repetition interval is presented in Section II.
Once the optimal dwell length for an antenna
tracking mode is determined, the next task is to
formulate an optimal dwell scheduling algorithm. In
this paper we present optimal and computationally
friendly, suboptimal solutions for two different
tracking system requirements. One of the main
requirements of a tracking system is to maintain tracks
of observed objects and to keep state covariances
within specified bounds. This is accomplished by
requiring frequent updates of the objects’ positions.
Such updates are commonly conducted in a periodic
or close to periodic fashion. These update periods
are determined by the engagement controller. Dwell
length and hence the number of pulses in a dwell
depend on many factors such as an object’s distance
from the sensor, object size, and desired tracking
precision. In many systems, little or no distinction
is made between a request for an execution of a
task and the actual task execution. In such systems, task
execution begins with the transmission of the first
pulse of a sequence of pulses transmitted towards an
object. It ends with the reception of the last reflected
pulse from the same sequence. We make a clear
distinction between a request for an execution of a
task and actual execution of a task. In addition, it is
a common practice to assume that a dwell repetition
interval is the same for all targets, as it was assumed
in many algorithms mentioned in the references. This
need not be the case for multitarget scenarios with
many different target types. Slow moving targets, less
important targets, or targets moving with a constant
velocity, for example, need less frequent revisits than
fast or maneuvering targets. In this paper we assume
that the requests to observe targets are periodic with
periods equal to integer multiples of each other, but
we make a distinction between the notions of task
request and task execution. Under our paradigm, task
executions need not be periodic. More precisely, if
$p_1$ and $p_2$ are request periods of two different tasks, then either $p_1/p_2 \in \mathbb{N}$ or $p_2/p_1 \in \mathbb{N}$, where $\mathbb{N}$ is the set of natural numbers. Task execution must begin at
or after its request and must end at or before the next
request. This necessarily implies that task executions
Fig. 2. Antenna operation scenario.
need not occur at periodic intervals, but periodicity
can be enforced if desired. It will become apparent
by the end of this paper that the nonperiodicity
of task executions will enable us to obtain more
efficient schedules with little or no influence on the
performance of a tracking system. We also require that
the duration of any task is shorter than the shortest
request period for tasks under consideration.
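The integer-multiple requirement on request periods is easy to validate in code. A minimal sketch (the helper name and the convention of integer time units are ours, chosen so that divisibility is exact, not from the paper):

```cpp
#include <algorithm>
#include <vector>

// Check the paper's assumption that every pair of request periods
// divides evenly: for any two periods, one must be an integer
// multiple of the other.  Periods are given in integer time units.
bool periods_compatible(const std::vector<long>& periods) {
    for (size_t a = 0; a < periods.size(); ++a)
        for (size_t b = a + 1; b < periods.size(); ++b) {
            long lo = std::min(periods[a], periods[b]);
            long hi = std::max(periods[a], periods[b]);
            if (hi % lo != 0) return false;  // neither ratio is a natural number
        }
    return true;
}
```

A set such as {1, 2, 4, 8} passes, while {2, 3} fails, since 3/2 is not a natural number.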
Using these assumptions, we formulate distributed
and nondistributed scheduling algorithms for two
different scheduling scenarios. We refer to these
scenarios as priority full and best fill. The meaning
of the terminology is explained in their respective
sections. In both cases we formulate the problem as
a bin packing problem and use integer programming
techniques to obtain optimal solutions. Due to the
very high computational complexity of the algorithms
used to solve integer programs, we are able to obtain
optimal solutions for scenarios with only a very small
number of targets. However, these solutions provide
the limits of antenna performance. For practical
applications dealing with a large number of targets,
we formulate greedy type algorithms. These greedy
algorithms are able to compute feasible solutions in
fractions of a second for scenarios with thousands of
targets. We did not have an opportunity to compare
our algorithms with the ones listed in the references.
However, we hope that our algorithms represent an
improvement to the current state of the art.
This paper is organized as follows. In Section II
we derive an expression for an optimal PRI for
one target. We introduce a scheduling scenario
based on priorities in Section III and consider
optimal and computationally friendly suboptimal
nondistributed and distributed dwell scheduling
algorithms. Section IV is analogous to Section III but
this time our desire is to optimally use the phased
array antenna. Scheduling performance examples
are presented in Section V, including the influence
of semi-periodic task executions on a tracking
covariance matrix. Sections VI and VII deal with the
implementation and benchmarks of the algorithms
in the C++ programming language. Section VIII
concludes the paper.
II. OPTIMAL PULSE REPETITION INTERVAL
Consider a target of known trajectory over some
period of time. The desire is to send periodic pulses of
duration $l_s$ toward the target (see Fig. 2).
Fig. 3. Time duration of antenna modes.
Since the target trajectory is reasonably well
known, using track covariance one can compute the
duration of time $l_r$ over which a reflected pulse can be
received completely at any point in the trajectory. In
general $l_r \ge l_s$ (see Fig. 3).
Consider sending a single pulse toward a target.
Let $l$ be the duration of time from the end of the
transmission of that pulse to the start of the reception
of the reflected pulse. Let $d$ be the duration of time
from the start of the transmission of a pulse to the
end of the reception of the reflected pulse. Figure 3
describes the setup.
The goal is to produce an optimal PRI given the
above constraints and the fact that there is some
minimal PRI $p_0$, which exists due to the hardware and
processing constraints. We also take into account the
fact that an antenna cannot send and receive pulses
at the same time. Note that $n_1 = \lfloor l/(l_s + l_r) \rfloor$ is the
number of whole segments of length $l_s + l_r$ that can
fit inside the interval $l$. There are $n_1 + 1$ segments of
length $l_s + l_r$ inside the interval $d$ since $d - l = l_s + l_r$.
Then the optimal PRI without taking into account
$p_0$ is $p_1 = d/(1 + n_1)$. Note that $n_1$ can be replaced
by any $0 \le n \le n_1$ if we want a longer PRI. This
observation is needed to obtain an optimal PRI which
is greater than or equal to $p_0$.
To that end choose $n \ge 0$ such that $d/(1+n) \ge \min(d, p_0)$. Note that $n = 0$ if $d \le p_0$ and the
argument below still holds. Solving for $n$ we get
$n \le \max(0, (d/p_0) - 1)$. The largest such nonnegative
$n$ is then $n_0 = \max(0, \lfloor (d/p_0) - 1 \rfloor)$. Therefore the new
optimal PRI that is greater than or equal to $p_0$ is
$$p = \frac{\max(d, p_0)}{1 + \min(n_1, n_0)}
  = \frac{\max(d, p_0)}{1 + \min\left(\left\lfloor \dfrac{l}{l_s + l_r} \right\rfloor,\ \max\left(0, \left\lfloor \dfrac{d}{p_0} - 1 \right\rfloor\right)\right)}.$$
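The closed-form expression above translates directly into code. A minimal sketch, with variable names mirroring the symbols $l$, $d$, $l_s$, $l_r$, and $p_0$ (the function name is ours):

```cpp
#include <algorithm>
#include <cmath>

// Optimal-PRI formula from Section II.  All times share one unit:
// l  = gap from end of transmit to start of receive,
// d  = span from start of transmit to end of receive (d - l = ls + lr),
// ls = pulse duration, lr = receive-window duration,
// p0 = minimal PRI imposed by hardware/processing constraints.
double optimal_pri(double l, double d, double ls, double lr, double p0) {
    double n1 = std::floor(l / (ls + lr));            // whole ls+lr segments in l
    double n0 = std::max(0.0, std::floor(d / p0 - 1.0));
    return std::max(d, p0) / (1.0 + std::min(n1, n0));
}
```

For example, with $l_s = l_r = 1$, $l = 10$, $d = 12$, and $p_0 = 2$, five extra pulses fit inside the wait interval and the PRI collapses from $d = 12$ down to $2$; raising $p_0$ above $d$ simply returns $p_0$.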
Alternatively, pulses directed toward different
targets can be interleaved to form aggregate dwells.
At the time of writing this article no optimal pulse
interleaving algorithm has been produced for forming
aggregate dwells as in [6], [7]. It would be the subject
of another study to show whether interleaving pulses
for a single target or interleaving pulses for multiple
targets is a more effective scheduling strategy. Neither
interleaving algorithm considers the full influence
of clutter on the length of dwells. The negative
influence of nearby clutter may be handled through
an appropriate choice of p0 in the algorithm in this
section.
The main scope of this paper is dwell scheduling
algorithms whether dwells are formed by interleaving
pulses for a single target or for multiple targets.
III. SCHEDULING BASED ON PRIORITIES
We now establish the notation and the framework
for the two scheduling scenarios we are going to
consider. Let $\{p_i : 1 \le i \le m\}$ be an enumeration of
task request periods with $i < j \Rightarrow p_i < p_j$ and $p_j/p_i \in \mathbb{N}$. For each $i \le m$ let $\{d_{ij} : 1 \le j \le n_i\}$ enumerate
tasks (task durations) with request period $p_i$. For our
discussion $p_i$ and $d_{ij}$ are positive real numbers. The
values of $d_{ij}$ depend, as pointed out earlier, on the
distance of objects being observed from the antenna
and can change with time as objects move. In
practice the $p_i$ are often taken to have the same
value. We weaken this requirement to $p_j/p_i \in \mathbb{N}$
for $p_i \le p_j$. The scheduling algorithms presented
below assume that all $d_{ij}$ remain constant for a time
period. When their values change sufficiently, the
algorithms are applied again to the new values to
produce another schedule. The algorithms below
will produce schedules on the time interval $[0, p_m]$.
However, in general we may wish to track objects
for longer time periods than $p_m$. Thus, the schedule
obtained for the time period $[0, p_m]$ is repeated until
the complete schedule is generated. We may also need
to change the values of $p_i$ as objects move to satisfy
tracking requirements, at which point we are forced
again to produce a new schedule. The new values
must still satisfy the requirement that $p_j/p_i \in \mathbb{N}$ for
$p_i \le p_j$.
We view scheduling as a bin packing problem.
Divide the closed interval $[0, p_m]$ on the real
line into $(p_m/p_1)$-many consecutive subintervals
$\{\epsilon_l = [lp_1, (l+1)p_1] : 0 \le l < p_m/p_1\}$ of length $p_1$.
For convenience, intervals or bins of the form
$[lp_1, (l+1)p_1]$ are called elementary intervals or
elementary bins. Corresponding tasks $\{d_{1j} : 1 \le j \le n_1\}$ are called elementary tasks. We consider
scheduling scenarios where certain tasks can be
scheduled across boundaries of consecutive
elementary intervals. Such tasks are called compound
tasks. We also find it useful to define compound
intervals as $\{\kappa_l = [(l-1)p_1, lp_1] \cup [lp_1, (l+1)p_1] : 1 \le l \le (p_m/p_1) - 1\}$. Let $\kappa_l(1) = [(l-1)p_1, lp_1]$
and $\kappa_l(2) = [lp_1, (l+1)p_1]$ denote the first and
second elementary intervals in $\kappa_l$, respectively.
Based on the enumerations above, $p_1$ is the
shortest request period and $p_m$ is the longest for the
tasks that are considered. The goal is to optimally
pack segments $\{d_{ij} : 1 \le i \le m,\ 1 \le j \le n_i\}$ in bins
$\{[lp_1, (l+1)p_1] : 0 \le l < p_m/p_1\}$. The packing rules are
might assign priority to objects, where objects of
higher priority are scheduled fully (i.e., schedule
objects in all of their request periods during $[0, p_m]$
or not at all) before objects of lower priorities are
considered. Another scenario might require that the
radar use is optimized, or in other words, that the
idle time of the radar is minimized. Two additional
subproblems are obtained by requiring that in one
case bin boundaries are hard, and in the other case
that bin boundaries are soft. The exact meaning of
these terms is defined below as we consider each
problem in detail.
In this section we consider the scenario where
objects are given priorities and objects of higher
priority are scheduled before objects of lower priority
are considered. In addition, if a task dij cannot bescheduled fully during at least one of its request
periods within [0,pm], then it is not scheduled at all.We call this scenario the priority full scenario. These
constraints imply that scheduling tasks of a certain
priority is more important than scheduling all lower
priority tasks. This observation plays an essential role
in the design of objective functions in Sections III-C
and III-D when we consider optimal scheduling
algorithms that satisfy these constraints. In a tracking
scenario this case would correspond to track loss if an
observation is missed. Thus, our desire is a complete
maintenance of tracks and their covariance matrices
over a certain time period. We consider greedy and
global nondistributed and distributed algorithms.
A. Nondistributed Suboptimal Packing Algorithm
We describe a greedy algorithm where objects are
scheduled as they appear in the list based on their
priorities. We introduce two ordering relations on
tasks $\{d_{ij} : 1 \le i \le m,\ 1 \le j \le n_i\}$:
$$d_{ij} \prec d_{kl} \Leftrightarrow (d_{ij} > d_{kl}) \vee (d_{ij} = d_{kl} \wedge p_i < p_k) \vee (d_{ij} = d_{kl} \wedge i = k \wedge j < l)$$
$$d_{ij} \prec\prec d_{kl} \Leftrightarrow (d_{ij} \text{ has higher priority than } d_{kl}) \vee (d_{ij} \text{ and } d_{kl} \text{ have the same priority} \wedge d_{ij} \prec d_{kl}).$$
The symbol $<$ is the usual less-than ordering on real
numbers and the symbols $\wedge$ and $\vee$ denote logical
conjunction and disjunction, respectively. Our desire is
to schedule each $d_{ij}$ during each of its $(p_m/p_i)$-many
request periods within $[0, p_m]$.
The algorithm is defined by induction using $\prec\prec$.
Fix $d_{ij}$ and suppose we have scheduled all $d_{kl}$ with
$d_{kl} \prec\prec d_{ij}$. There are $(p_m/p_i)$-many requests for $d_{ij}$
within $[0, p_m]$. For each $u < p_m/p_i$ find the smallest
$v$ such that $[vp_1, (v+1)p_1] \subseteq [up_i, (u+1)p_i]$ and $d_{ij}$
fits in the remaining space inside $[vp_1, (v+1)p_1]$ not
occupied by already scheduled tasks. Schedule $d_{ij}$
immediately after all the other tasks already scheduled
inside $[vp_1, (v+1)p_1]$ so that there are no gaps
between the executions of any such tasks and move
on to the next request period. The fact that a phased
array antenna can essentially point instantaneously
from one direction in space to another allows for such
scheduling. If no such $v$ exists for a given $u$, then $d_{ij}$
cannot be scheduled fully. In this case, remove all
previously scheduled instances of $d_{ij}$ and move on
to the $\prec\prec$-next task.
This is the least efficient algorithm in this group
of algorithms as we will illustrate in Section V with
examples.
B. Nondistributed Optimal Packing Algorithm
Greedy algorithms sacrifice optimality for the sake
of computational efficiency. In order to guarantee
optimality we resort to integer programming.
We formulate our packing problem as an integer
programming problem with constraints (see [9] for
details on linear and integer programming). Because
of the computational complexity of the problem, we
are able to obtain optimal solutions in a reasonable
amount of processing time for only a few dozen
tasks. Yet, it is sufficient for some of the intended
applications.
We proceed with the formulation of the problem
as an integer program. A task $d_{ij}$ is requested
$(p_m/p_i)$-many times every $p_i$ seconds in the time
interval $[0, p_m]$. Since we are in the nondistributed
case, each $d_{ij}$ must fit entirely inside one of the
elementary subintervals of length $p_1$ contained in
$[kp_i, (k+1)p_i] \subset [0, p_m]$. The variables $x_{ijk} \in \{0,1\}$, for
$1 \le k \le p_i/p_1$, indicate whether task $d_{ij}$ is scheduled
in $[(k-1)p_1, kp_1]$. In this problem there are no
distributed tasks, i.e., all tasks are scheduled within
the boundaries of elementary intervals. The limit
on index $k$ is $p_i/p_1$ because of the full scheduling
constraint. We formulate an objective function for the
integer program
$$f(d_{ij}) = \begin{cases} d_{ij}, & \text{if } d_{ij} \text{ is } \prec\prec\text{-last}, \\ d_{ij} + \displaystyle\sum_{d_{ij} \prec\prec d_{uv}} (p_m/p_u)\, f(d_{uv}), & \text{otherwise.} \end{cases} \quad (1)$$
It is defined inductively starting from the task of
lowest priority. Its value at each new task is greater
than the cumulative value at all scheduled instances
(described by the factor $p_m/p_u$ in the sum) of lower
priority tasks. This will ensure that tasks of higher
priority are scheduled before all instances of all tasks
of lower priority. With this notation we can now
write the statement for the integer program for an
optimal scheduling solution. Bounds on indices are
not included whenever they are clear from context in
order to simplify notation.
Maximize:
$$\sum_{i,j,k} f(d_{ij})\, x_{ijk} \quad (2)$$
Subject to:
$$\forall i\ \forall j\ \Big(\sum_{k} x_{ijk} \le 1\Big) \quad \text{(i)}$$
$$\forall i\ \Big(\sum_{u=1}^{i} \sum_{j,k} (p_i/p_u)\, d_{uj}\, x_{ujk} \le p_i\Big) \quad \text{(ii)}$$
$$\forall i\ \forall j\ \forall k\ \big(x_{ijk} \in \{0,1\}\big). \quad \text{(iii)}$$
Condition (i) states that any task can be scheduled
at most once during its request period in any of
the elementary intervals contained in that period.
Condition (ii) imposes the requirement that cumulative
duration of all tasks scheduled in one of the request
periods cannot exceed the duration of that request
period. Because of the full scheduling constraint, the
factor $(p_i/p_u)$ ensures that tasks having shorter request
periods are represented during all of their request
periods inside $[0, p_i]$. Finally, (iii) implies that task
executions cannot be interrupted by other tasks. It also
implies that elementary tasks can only be scheduled
once in elementary intervals. This assertion is also
included in (i).
Condition (i) implies that there are only finitely
many vectors $\vec{x}$ that satisfy the constraints (i)-(iii).
Since $x_{111} = 1$ and $x_{ijk} = 0$ for all other $(i,j,k)$ is a
nontrivial solution of the above system, it follows
that an optimal nontrivial solution exists for the above
integer program, although it may not be unique. We
obtain solutions using the package GLPK [10] (GNU
Linear Programming Kit), which utilizes the simplex
method in conjunction with a branch-and-bound method
for obtaining integer solutions. From the theory of
integer programming one can guarantee that these two
methods in fact produce an optimal integer solution.
A solution of the above system does not indicate
the order in which tasks should be scheduled. It only
indicates which task should be scheduled in which
elementary intervals. One way to obtain a schedule
from a solution $\vec{x}$ is to execute tasks assigned to each
elementary interval by $\vec{x}$ according to the ordering
relation $\prec\prec$. We defer the analysis of this algorithm
until Section V where we show examples of its
performance and compare it with the other algorithms
in this paper.
C. Distributed Suboptimal Packing Algorithm
This is a greedy algorithm that uses the relation
$\prec\prec$ to schedule tasks. Fix $d_{ij}$ and suppose we have
scheduled all instances of $d_{kl}$ with $d_{kl} \prec\prec d_{ij}$ in such
a way that all tasks are scheduled in the beginning of
each elementary interval and any unoccupied space
remains only at the end of each such interval. This is
possible because of our assumption that phased array
antennas can point instantaneously from one direction
in space to another. There are $(p_m/p_i)$-many requests
for $d_{ij}$. For each $u < p_m/p_i$, find the smallest $v$ such
that
1) $[vp_1, (v+1)p_1] \subseteq [up_i, (u+1)p_i]$ and $d_{ij}$ fits
in the remaining space inside $[vp_1, (v+1)p_1]$ not
occupied by tasks $d_{kl} \prec\prec d_{ij}$ already scheduled inside
$[vp_1, (v+1)p_1]$, or
2) $[vp_1, (v+1)p_1], [(v+1)p_1, (v+2)p_1] \subseteq [up_i, (u+1)p_i]$ and the sum of the lengths of unoccupied spaces
at the end of $[vp_1, (v+1)p_1]$ and $[(v+1)p_1, (v+2)p_1]$
is greater than or equal to $d_{ij}$.
If case 1 is satisfied, then we simply schedule
task $d_{ij}$ at the end of all previously scheduled tasks
in $[vp_1, (v+1)p_1]$. For case 2, we schedule $d_{ij}$ at
the end of all tasks scheduled in $[vp_1, (v+1)p_1]$.
Then reschedule all tasks already scheduled in
$[(v+1)p_1, (v+2)p_1]$ by delaying their execution by
the amount of time that $d_{ij}$ extends into $[(v+1)p_1, (v+2)p_1]$. If neither 1 nor 2 is satisfied, then task
$d_{ij}$ cannot be scheduled fully. Therefore, we delete
all previously scheduled instances of $d_{ij}$, return
the schedule to the state it was in before the first
scheduling attempt of $d_{ij}$ was made, and move on to
the $\prec\prec$-next task.
We point out that case 2 inevitably leads to
nonconstant times between the consecutive executions
of a task. Examples of this scheduling scenario are
shown in Section V.
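The core of the distributed greedy step is this two-case fit test. A sketch under the simplifying assumption that only the trailing free space of each elementary bin matters (since tasks are packed without gaps); the type and function names are ours:

```cpp
#include <vector>

// Two-case placement test of Section III-C for one request period
// spanning elementary bins [first, last].  free[v] is the unoccupied
// time at the end of bin v.
struct Placement {
    int bin;      // bin where the task starts (-1 if none)
    bool spills;  // true if case 2: task crosses into bin + 1
    bool ok;      // false if neither case applies in this request period
};

Placement find_slot(const std::vector<double>& free, int first, int last,
                    double d) {
    for (int v = first; v <= last; ++v) {
        if (free[v] >= d)                             // case 1: fits in one bin
            return {v, false, true};
        if (v < last && free[v] + free[v + 1] >= d)   // case 2: split over two
            return {v, true, true};
    }
    return {-1, false, false};  // full scheduling fails for this task
}
```

With free space $\{0.2, 0.4, 1.0\}$, a task of duration 0.5 triggers case 2 at the first bin (0.2 + 0.4 suffices), while a task of duration 2.0 cannot be placed at all.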
D. Distributed Optimal Packing Algorithm
In order to achieve optimality, we resort to
integer programming again. The ranking of tasks
by priority is incorporated in the objective function
as in Section III-B. Because of the full scheduling
requirement, we consider scheduling instances of tasks
only during their first request periods. For $i > 1$, there
are $((p_i/p_1)-1)$-many compound intervals contained
in $[0, p_i]$. Let $x_{ijk} \in \{0,1\}$ indicate whether or not an
instance of $d_{ij}$ is scheduled inside the $k$th compound
interval in $[0, p_i]$. With this notation, we define an
objective function
$$f(d_{ij}) = \begin{cases} d_{ij}, & \text{if } d_{ij} \text{ is } \prec\prec\text{-last}, \\ d_{ij} + \displaystyle\sum_{d_{ij} \prec\prec d_{uv}} (p_m/p_u)\, f(d_{uv}), & \text{otherwise.} \end{cases} \quad (3)$$
Note that the objective function has the same form
as the one in Section III-B, but the constraints of
the integer program associated with this scheduling
scenario are different from the ones in formula (2). We
introduce two functions on natural numbers:
$$\sigma(i,j) = \max\{k : k \cdot i \le j\}, \qquad \mu(i,j) = \min\{k : k \cdot i \ge j\}.$$
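For positive integers, $\sigma$ and $\mu$ reduce to floor and ceiling division. A brief sketch (function names `sigma` and `mu` simply mirror the paper's symbols):

```cpp
// sigma(i, j) = max{k : k*i <= j} = floor(j / i)
// mu(i, j)    = min{k : k*i >= j} = ceil(j / i)
// Both assume i > 0 and j >= 0.
long sigma(long i, long j) { return j / i; }            // floor division
long mu(long i, long j)    { return (j + i - 1) / i; }  // ceiling division
```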
We can now state the constraints. Bounds on
indices are not included whenever they are clear from
context in order to simplify notation.
Maximize:
$$\sum_{i,j,k} f(d_{ij})\, x_{ijk} \quad (4)$$
Subject to:
$$\forall i > 1\ \forall j\ \left(\sum_{k=1}^{(p_i/p_1)-1} x_{ijk} \le 1\right) \quad \text{(i)}$$
$$\forall l\ \forall n\ \Bigg( \big[\, 0 \le l < p_m/p_1 \ \wedge\ 1 \le n \le p_m/p_1 \ \wedge\ 1 \le l+n \le p_m/p_1 \,\big] \Rightarrow
n \cdot \sum_j d_{1j} x_{1j1} + \sum_{i,j} \Bigg( \sum_{k = l+1-(\mu(p_i/p_1,\,l)-1)\cdot(p_i/p_1)}^{(p_i/p_1)-1} d_{ij} x_{ijk}
+ \big(\sigma(p_i/p_1,\,l+n) - \mu(p_i/p_1,\,l)\big)\cdot(p_1/p_i)\cdot \sum_k d_{ij} x_{ijk}
+ \sum_{k=1}^{l+n-1-\sigma(p_i/p_1,\,l+n)\cdot(p_i/p_1)} d_{ij} x_{ijk} \Bigg) \le n p_1 \Bigg) \quad \text{(ii)}$$
$$\forall i\ \forall j\ \forall k\ \big(x_{ijk} \in \{0,1\}\big). \quad \text{(iii)}$$
We describe the meaning of each of the constraints.
Condition (i) states that each compound task can be
scheduled at most once during its request period. The
same statement for elementary tasks is implied by
condition (iii) since $k = 1$ for elementary tasks.
Condition (ii) imposes constraints that prevent
overflow since consecutive compound intervals have
an elementary interval in common. Integer $n$ indicates
the number of consecutive elementary intervals being
considered and $l$ indicates the starting point of that
sequence of intervals. Keeping in mind that there are
$((p_i/p_1) - 1)$-many compound intervals inside one
request period of length $p_i$, for each $i$, we determine
how many consecutive request periods of length
$p_i$ are contained inside the interval $[lp_1, (l+n)p_1]$.
This is determined by the factor $(\sigma(p_i/p_1, l+n) - \mu(p_i/p_1, l))(p_1/p_i)$. This number is then multiplied
by the cumulative duration of all tasks that have
request period $p_i$. This describes the presence of
the middle sum in this constraint. The first sum
collects all tasks corresponding to the request period
$p_i$ inside $[lp_1, (l+n)p_1]$ that are requested before $lp_1$
and whose request period ends after $lp_1$. The last
sum is analogous to the first, but is for tasks that are
requested before $(l+n)p_1$ and whose request period
ends after $(l+n)p_1$. In other words, the first and the
last sums in the expression deal with tasks whose
request intervals partially intersect with $[lp_1, (l+n)p_1]$.
Note that this case reduces to $\sum_j d_{1j} x_{1j1} \le p_1$ for $l = 0$
and $n = 1$. In other words, the sum of durations of all
elementary tasks scheduled in an elementary interval
cannot exceed the length of that interval.
Condition (iii) imposes integer solutions because
tasks are nonpreemptive and elementary tasks can be
scheduled only once during their request periods.
A solution of the above system does not indicate
the order in which tasks should be scheduled. It
only indicates which task should be scheduled in
which elementary and compound intervals. We
use the following inductive procedure to convert a
solution to the above integer program into a schedule.
For all tasks assigned by $\vec{x}$ to the same interval
(elementary or compound) the intent is to schedule
tasks with shorter request periods before tasks with
longer request periods. This strategy tends to reduce
randomness and move task executions closer to
periodic executions.
First consider all elementary tasks designated by $\vec{x}$
for scheduling. Let $E_1 = \{d_{1j} : x_{1j1} = 1\}$. We schedule
all tasks in $E_1$ to appear in the beginning of $\epsilon_1$ in
order corresponding to the increasing value of index
$j$ and without any gaps between task executions.
To satisfy the full scheduling requirement, we repeat
the schedule in $\epsilon_1$ in all other elementary intervals
contained in $[0, p_m]$. Since $\vec{x}$ satisfies condition (4.ii)
for $l = 0$ and $n = 1$, it follows that $\sum_{d \in E_1} d \le p_1$ so
that no overflow can occur.
Compound tasks are scheduled by induction on
$u$, for $1 \le u < p_m/p_1$. We are assuming here that
$p_m > p_1$. Otherwise, all tasks are elementary and we
have already considered that case. We also impose the
following three requirements on scheduling compound
tasks:
($\alpha$) Compound tasks that start in $\epsilon_l$ begin their
execution after all elementary tasks scheduled in $\epsilon_l$
are executed.
($\beta$) All compound tasks designated by $\vec{x}$ for
scheduling in $\kappa_l$ appear before all compound tasks
designated by $\vec{x}$ for scheduling in $\kappa_{l+1}$.
($\gamma$) All compound tasks designated by $\vec{x}$ for
scheduling in $\kappa_l$ are scheduled in such a way that
tasks with shorter request periods appear before tasks
with longer request periods.
With these three conditions in mind we now
show how to schedule compound tasks. For $i > 1$, let $E_{iu} = \{d_{ij} : x_{iju} = 1\}$. Let $\eta_u(w)$ be the length of unoccupied space at the end of elementary interval $\kappa_u(w)$. We point out that the value of $\eta_u(w)$ changes as new tasks are scheduled.

Choose $u$ and suppose that for all $v < u$ and $1 < i < p_m/p_1$ all tasks in $E_{iv}$ have been scheduled and ($\alpha$)-($\gamma$) are satisfied. We show how to schedule tasks in $E_{iu}$ for $1 < i < p_m/p_1$. Choose an $i$ and suppose that for all $w < i$ all tasks in $E_{wu}$ have been scheduled to satisfy ($\alpha$)-($\gamma$). In addition, choose a $j$ and suppose that all tasks $d_{iz} \in E_{iu}$, for $z < j$, have also been scheduled to satisfy ($\alpha$)-($\gamma$). We show how to schedule $d_{ij}$. We consider the following two cases.

1) $\eta_u(1) \ge d_{ij}$: The unoccupied space in $\kappa_u(1)$ is large enough so that $d_{ij}$ can be scheduled entirely within $\kappa_u(1)$. Since the schedule to this point satisfies ($\alpha$)-($\gamma$) and the full scheduling requirement for scheduling in $\kappa_u$ is in effect, all scheduled compound tasks (if any) designated by $\vec{x}$ follow all elementary tasks scheduled in $\kappa_u(1)$ and all scheduled compound tasks (if any) designated for scheduling in $\kappa_{u-1}$. We schedule $d_{ij}$ immediately after all elementary tasks scheduled in $\kappa_u(1)$, all compound tasks scheduled in $\kappa_{u-1}$, and all compound tasks, whose request period is $\le p_i$, designated for scheduling in $\kappa_u$ by $\vec{x}$ and the full scheduling requirement. We delay by $d_{ij}$ the execution of all compound tasks scheduled in $\kappa_u(1)$ after the location where $d_{ij}$ is inserted. Since $\eta_u(1) \ge d_{ij}$, neither the execution of $d_{ij}$ nor any task whose execution is delayed by $d_{ij}$ will cross into $\kappa_u(2)$, and the requirements ($\alpha$)-($\gamma$) are satisfied by the new schedule with $d_{ij}$ in it.

2) $\eta_u(1) < d_{ij}$: We first note that this case includes $\eta_u(1) = 0$, in which case the execution of the elementary tasks scheduled in $\kappa_u(2)$ may or may not start at the beginning of $\kappa_u(2)$. Their execution may have been delayed by the execution of some compound tasks which started in $\kappa_u(1)$ and ended in $\kappa_u(2)$. We look in $\kappa_u(1)$ and $\kappa_u(2)$ to find the location for scheduling $d_{ij}$. Similar to the previous case, we find the location after all scheduled tasks intended for scheduling in $\kappa_{u-1}$ by $\vec{x}$ and the full scheduling requirement, as well as the scheduled tasks intended for scheduling in $\kappa_u$ by $\vec{x}$ and the full scheduling requirement with request period $\le p_i$. If $\eta_u(1) > 0$, then this location is guaranteed to be in $\kappa_u(1)$. Otherwise, it may be in $\kappa_u(1)$ or in $\kappa_u(2)$. Insert $d_{ij}$ and reschedule all tasks scheduled after that location in the following way. If $d_{ij}$ was inserted in $\kappa_u(1)$, then we reschedule all compound tasks in $\kappa_u(1)$ following that location in the order in which they appeared before $d_{ij}$ was considered. Since $\eta_u(1) < d_{ij}$, at one point a task $d$ will be encountered (perhaps even $d_{ij}$) whose execution starts properly in $\kappa_u(1)$ and ends at or after the beginning of $\kappa_u(2)$. In this case we delay the execution of all elementary tasks scheduled in $\kappa_u(2)$ by the amount that $d$ extends into $\kappa_u(2)$. The compound tasks in $\kappa_u(2)$ need to be rescheduled, as well as all the remaining compound tasks from $\kappa_u(1)$ that followed $d$. They are all rescheduled in the order in which they appeared in the schedule before $d_{ij}$ was considered and in accordance with one of the two cases above so that ($\alpha$)-($\gamma$) are satisfied. If the location to insert $d_{ij}$ is in $\kappa_u(2)$, then we insert it there and reschedule all the compound tasks scheduled in $\kappa_u(2)$ following that location in accordance with one of the two cases above so that ($\alpha$)-($\gamma$) remain satisfied.

This finishes the definition of the inductive step for scheduling elements in sets $E_{iu}$ in interval $\kappa_u$. To fulfill the full scheduling requirement, we repeat the
48 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 49, NO. 1 JANUARY 2013
steps above to schedule elements in sets $E_{iu}$ in all the compound intervals $\kappa_{k(p_i/p_1)+u}$ for all positive integers $k$ such that $k(p_i/p_1) + u \le p_m/p_1 - 1$.

IV. OPTIMAL ANTENNA USE

This section considers the problem when the goal is to minimize the radar's idle time. We call it the best fill scenario. Formulated as bin packing, the goal is to pack $\{d_{ij} : 1 \le i \le m,\, 1 \le j \le n_i\}$ in such a way as to minimize the empty space inside the bins $\{[lp_1, (l+1)p_1] : l < p_m/p_1\}$. We recall that a tracking system requires that each task $d_{ij}$ execute $(p_m/p_i)$ times inside the time interval $[0, p_m]$, with requests for execution made at $lp_i$ for $l \in \mathbb{N}$ and $0 \le l < p_m/p_i$. In order to achieve our goal of optimally packing tasks, we assume that executions of some $d_{ij}$ may have to be omitted, and that this leads to less than catastrophic consequences for the tracking system. In other words, for some $i, j$, $d_{ij}$ may have to be executed fewer than $(p_m/p_i)$ times, so that track covariances may degrade beyond the specified limits but without any tracks actually being lost.

Given that there are $(p_i/p_1)$-many elementary bins inside each interval $[lp_i, (l+1)p_i]$ and that tasks are nonpreemptive as before, there are two ways in which tasks $d_{ij}$ can be packed inside bins $[lp_i, (l+1)p_i]$. If compound tasks cannot cross the boundaries of the elementary intervals we call this nondistributed scheduling; otherwise it is distributed scheduling.
A. Nondistributed Suboptimal Packing Algorithm
This is a greedy algorithm based on ordering the tasks in some way and packing them in the order in which they appear in the list. An experienced traveler is well aware that packing the largest items first lets one fit more items into fewer suitcases with less wasted space. Indeed, as shown in [11], one can achieve better packing if objects are sorted in order of nonincreasing size before applying a packing algorithm. See [11] for a detailed analysis of various packing algorithms.

We recall the definition of the ordering relation $\prec$ on the set $\{d_{ij} : 1 \le i \le m,\, 1 \le j \le n_i\}$ introduced in Section III-A:

$$d_{ij} \prec d_{kl} \Leftrightarrow (d_{ij} > d_{kl}) \vee (d_{ij} = d_{kl} \wedge p_i < p_k) \vee (d_{ij} = d_{kl} \wedge i = k \wedge j < l).$$

Our desire is to schedule each of the $(p_m/p_i)$-many requests for $d_{ij}$ during each of its request periods within $[0, p_m]$. If we are successful, the total amount of space occupied by task $d_{ij}$ inside $[0, p_m]$ is $(p_m/p_i) \cdot d_{ij}$.

A greedy bin packing algorithm is defined by induction. Fix $d_{ij}$ and suppose we have scheduled all $d_{kl}$ with $d_{kl} \prec d_{ij}$. There are $(p_m/p_i)$-many requests for $d_{ij}$ within $[0, p_m]$. For each $u < p_m/p_i$ find the smallest $v$ such that $[vp_1, (v+1)p_1] \subseteq [up_i, (u+1)p_i]$ and $d_{ij}$ fits in the remaining space inside $[vp_1, (v+1)p_1]$ not occupied by already scheduled tasks. Schedule $d_{ij}$ immediately after all the other tasks already scheduled inside $[vp_1, (v+1)p_1]$, so that there are no gaps between the executions of any such tasks. The fact that a phased array antenna can point essentially instantaneously from one direction in space to another allows for such scheduling. If no such $v$ exists for a given $u$, then $d_{ij}$ remains unscheduled inside its request period spanning the interval $[up_i, (u+1)p_i]$. This finishes an inductive definition of the nondistributed packing algorithm.
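As a concrete illustration, here is a minimal Python sketch of the greedy nondistributed packing just described (the function and variable names are ours, and the task list is assumed to be pre-sorted in the priority order):

```python
def greedy_nondistributed(tasks, p1, pm, eps=1e-9):
    """Greedy nondistributed packing sketch.

    tasks: list of (duration, period) pairs, assumed already sorted by the
    priority order (nonincreasing duration, ties broken by shorter period).
    Returns (bins, idle): per-elementary-bin lists of the task ids scheduled
    there, and the total idle time over [0, pm].
    """
    n_bins = round(pm / p1)
    free = [p1] * n_bins                 # unoccupied space per elementary bin
    bins = [[] for _ in range(n_bins)]
    for tid, (d, p) in enumerate(tasks):
        reps = round(pm / p)             # request periods inside [0, pm]
        width = round(p / p1)            # elementary bins per request period
        for u in range(reps):
            # smallest v with [v*p1, (v+1)*p1] inside the request period
            # and enough remaining space for d
            for v in range(u * width, (u + 1) * width):
                if free[v] >= d - eps:
                    bins[v].append(tid)
                    free[v] -= d
                    break                # at most one execution per period
    return bins, sum(free)
```

On a toy instance with $p_1 = 1$ s and $p_m = 2$ s, a compound task that fits in no single elementary bin is simply left unscheduled, which is exactly the inefficiency the distributed variants below address.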
This is the least efficient packing algorithm of the
ones considered in this paper, as we illustrate with
examples in Section V. However, its computational
efficiency makes it a desirable algorithm to use in
practice. It can compute a schedule with thousands of
targets in a fraction of a second which is a desirable
property for use in many tracking systems (see
Sections VI and VII).
B. Nondistributed Optimal Packing Algorithm
We now consider an optimal packing algorithm
for the best fill scenario. As in Section III-B we
address the problem using integer programming. A task $d_{ij}$ is requested $(p_m/p_i)$-many times, every $p_i$ seconds, during the time interval $[0, p_m]$. Since we are in the nondistributed case, each $d_{ij}$ must fit entirely inside one of the elementary subintervals of length $p_1$ contained in $[kp_i, (k+1)p_i] \subset [0, p_m]$. We must allow for the possibility that some requests are not fulfilled, due to the scheduling of less frequent but longer tasks, in order to satisfy our goal of minimizing processor idle time. For that reason we consider each scheduling of $d_{ij}$ in $[kp_i, (k+1)p_i]$ independent of any scheduling of the same task in $[lp_i, (l+1)p_i]$ for $k \ne l$. Let $x_{ijk} \in \{0,1\}$ indicate whether or not $d_{ij}$ is scheduled in $[kp_1, (k+1)p_1]$. Using this notation we formulate the corresponding integer program. We omit bounds on indices whenever they are clear from the context.

Maximize:
$$\sum_{ijk} d_{ij} x_{ijk} \quad (5)$$

Subject to:
$$\forall i\, \forall j\, \forall k < (p_m/p_i): \quad \sum_{l=1}^{p_i/p_1} x_{ij(k(p_i/p_1)+l)} \le 1 \qquad \text{(i)}$$
$$\forall k: \quad \sum_{ij} d_{ij} x_{ijk} \le p_1 \qquad \text{(ii)}$$
$$\forall i\, \forall j\, \forall k: \quad x_{ijk} \in \{0,1\}. \qquad \text{(iii)}$$
Condition (i) states that any task can be scheduled
at most once during its request period in any of
the elementary intervals contained in that period.
Condition (ii) imposes the requirement that cumulative
SPASOJEVIC, ET AL.: DWELL SCHEDULING ALGORITHMS FOR PHASED ARRAY ANTENNA 49
duration of all tasks scheduled in any elementary
interval cannot exceed the length of that interval.
Finally, (iii) implies that executions of tasks cannot
be interrupted by other tasks. It also implies that
elementary tasks can only be scheduled once in
elementary intervals. This assertion is also included
in (i).
A solution (not necessarily unique) of this integer program minimizes processor idle time. As in Section III-B, we obtain solutions using GLPK [10]. To obtain a schedule from a solution, we simply execute the tasks in each elementary interval according to $\prec$ from the previous section. We illustrate the performance of this algorithm with examples in Section V.
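For very small instances, the integer program (5) can be checked against an exhaustive search. The sketch below is our own illustration (not the paper's GLPK-based implementation): it encodes constraint (i) by letting each request period pick at most one elementary bin, and constraint (ii) as a per-bin capacity check.

```python
from itertools import product

def optimal_nondistributed(tasks, p1, pm, eps=1e-9):
    """Exhaustive reference solver for the nondistributed program (5).

    For each request period of each task we either skip it or pick one
    elementary bin inside it (constraint (i)); a candidate assignment is
    kept only if no bin overflows (constraint (ii)).  Exponential in the
    number of requests, so usable only as a check on tiny instances.
    """
    n_bins = round(pm / p1)
    slots = []                            # one slot per (task, request period)
    for tid, (d, p) in enumerate(tasks):
        width = round(p / p1)
        for u in range(round(pm / p)):
            choices = [None] + list(range(u * width, (u + 1) * width))
            slots.append((d, choices))
    best = 0.0
    for pick in product(*(c for _, c in slots)):
        load = [0.0] * n_bins
        total = 0.0
        for (d, _), v in zip(slots, pick):
            if v is not None:
                load[v] += d
                total += d
        if all(l <= p1 + eps for l in load):
            best = max(best, total)
    return pm - best                      # minimized radar idle time
```

On the toy instance used in the greedy sketch (three tasks, $p_1 = 1$ s, $p_m = 2$ s), the optimum drops one request of the smallest task to admit the compound task, beating the greedy nondistributed result while still respecting bin boundaries.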
C. Distributed Suboptimal Packing Algorithm
In the next two subsections, we allow the possibility that tasks $d_{ij}$, for $i > 1$, can be scheduled across the boundary between elementary intervals $[vp_1, (v+1)p_1]$ and $[(v+1)p_1, (v+2)p_1]$ if for some $u \in \mathbb{N}$

$$[vp_1, (v+1)p_1],\ [(v+1)p_1, (v+2)p_1] \subset [up_i, (u+1)p_i].$$

We use $\prec$, as before, to order tasks and define a distributed greedy packing algorithm.

Fix $d_{ij}$ and suppose we have scheduled all $d_{kl}$ with $d_{kl} \prec d_{ij}$ in such a way that all tasks are scheduled at the beginning of each elementary interval and any unoccupied space remains only at the end of each interval. This is possible because of our assumption that phased array antennas can move instantaneously from one pointing direction in space to another. There are $(p_m/p_i)$-many requests for $d_{ij}$. For each $u < p_m/p_i$ find the smallest $v$ such that

1) $[vp_1, (v+1)p_1] \subseteq [up_i, (u+1)p_i]$ and $d_{ij}$ fits in the remaining space inside $[vp_1, (v+1)p_1]$ not occupied by tasks $d_{kl} \prec d_{ij}$ already scheduled inside $[vp_1, (v+1)p_1]$, or

2) $[vp_1, (v+1)p_1],\ [(v+1)p_1, (v+2)p_1] \subseteq [up_i, (u+1)p_i]$ and the sum of the lengths of the unoccupied spaces at the end of $[vp_1, (v+1)p_1]$ and $[(v+1)p_1, (v+2)p_1]$ is greater than or equal to $d_{ij}$.

If case 1 is satisfied, we simply schedule task $d_{ij}$ at the end of all previously scheduled tasks in $[vp_1, (v+1)p_1]$. For case 2, we schedule $d_{ij}$ at the end of all tasks scheduled in $[vp_1, (v+1)p_1]$. We then reschedule all tasks already scheduled in $[(v+1)p_1, (v+2)p_1]$ by delaying their execution by the amount of time that $d_{ij}$ extends into $[(v+1)p_1, (v+2)p_1]$. If neither 1 nor 2 is satisfied, we simply omit scheduling $d_{ij}$ during this request period and move on to the next request period, which occupies the time segment $[(u+1)p_i, (u+2)p_i]$.

We point out that case 2 inevitably leads to
nonconstant times between consecutive executions
of a task. We defer the performance analysis of this
algorithm until Section V where we show examples
and compare it with the other algorithms described in
this paper.
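The two cases above can be sketched in a few lines of Python (again a simplified illustration with our own names; because tasks are packed left-justified, it suffices to track the free space at the end of each elementary bin):

```python
def greedy_distributed(tasks, p1, pm, eps=1e-9):
    """Greedy distributed packing sketch: a compound task may straddle the
    boundary between two consecutive elementary bins of its own request
    period.  tasks: (duration, period) pairs in priority order.
    Returns (sched, idle): task ids listed by the bin they start in, and
    the total idle time over [0, pm].
    """
    n_bins = round(pm / p1)
    free = [p1] * n_bins
    sched = [[] for _ in range(n_bins)]
    for tid, (d, p) in enumerate(tasks):
        reps, width = round(pm / p), round(p / p1)
        for u in range(reps):
            lo, hi = u * width, (u + 1) * width
            for v in range(lo, hi):
                if free[v] >= d - eps:            # case 1: fits in one bin
                    sched[v].append(tid)
                    free[v] -= d
                    break
                if v + 1 < hi and free[v] + free[v + 1] >= d - eps:
                    # case 2: start in bin v and spill into bin v + 1,
                    # delaying the tasks already scheduled there
                    sched[v].append(tid)
                    spill = d - free[v]
                    free[v] = 0.0
                    free[v + 1] -= spill
                    break
                # neither case: try the next elementary bin; if none is
                # left, this request period is skipped entirely
    return sched, sum(free)
```

Note that elementary tasks (one bin per request period) can never trigger case 2 in this sketch, matching the requirement that only compound tasks may straddle bin boundaries.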
D. Distributed Optimal Packing Algorithm
In order to achieve optimality we must consider
all tasks together and forgo any scheduling that relies
on ordering of tasks. We again formulate the problem
as an integer program. Because task executions are
nonpreemptive, we require integer solutions and,
therefore, utilize a branch-and-bound method together
with the Simplex or an interior point method. For
the sake of optimality we accept the possibility that
some tasks may not be scheduled during their request
periods. Therefore, we treat different request periods and different compound intervals independently of each other. For $i = 1$, let $x_{ijl} \in \{0,1\}$ indicate whether or not $d_{ij}$ is scheduled in $\epsilon_l$. Similarly, for $i > 1$, let $x_{ijl} \in \{0,1\}$ indicate whether or not $d_{ij}$ is scheduled in $\kappa_l$. Since tasks cannot be scheduled across the boundaries of their request period, we require that $l \bmod (p_i/p_1) = 0 \rightarrow x_{ijl} = 0$ for $i > 1$. This notation is designed to simplify statements in the constraints of the integer program below. Values of $x_{ijl}$, for $l \bmod (p_i/p_1) \ne 0$, are determined by the solution of the integer program. We again do not state bounds on $i, j, l$ explicitly, to reduce clutter in the formulas, whenever they are clear from context.
Maximize:
$$\sum_{ijl} d_{ij} x_{ijl} \quad (6)$$

Subject to:
$$\forall l: \quad \sum_{j} d_{1j} x_{1jl} \le p_1 \qquad \text{(i)}$$
$$\forall i > 1\, \forall j\, \forall l: \quad l \bmod (p_i/p_1) = 0 \rightarrow \sum_{k=1}^{(p_i/p_1)-1} x_{ij(l+k)} \le 1 \qquad \text{(ii)}$$
$$\forall l\, \forall n: \quad \left[0 \le l < (p_m/p_1) \wedge 1 \le n \le (p_m/p_1) \wedge 1 \le l + n \le (p_m/p_1)\right] \rightarrow \left[\sum_{j} \sum_{k=1}^{n} d_{1j} x_{1j(l+k)} + \sum_{i>1} \sum_{j} \sum_{k=1}^{n-1} d_{ij} x_{ij(l+k)} \le n p_1\right] \qquad \text{(iii)}$$
$$\forall i > 1\, \forall j\, \forall l: \quad l \bmod (p_i/p_1) = 0 \rightarrow x_{ijl} = 0 \qquad \text{(iv)}$$
$$\forall i\, \forall j\, \forall l: \quad x_{ijl} \in \{0,1\}. \qquad \text{(v)}$$
The maximization part of the above statement
asserts that we want to use the antenna as much as
possible. We elaborate on the meaning of each of the
constraints.
Condition (i) states that cumulative duration of
elementary tasks scheduled during their request
periods cannot exceed the length of their request
periods.
Condition (ii) states that no distributed task can be
scheduled more than once during its request period.
This requirement is implicit in (v) for $i = 1$. For $i > 1$, we divide each request period of task $d_{ij}$ into $((p_i/p_1) - 1)$-many compound intervals and allow it to be scheduled in any one of these intervals at most once during that request period.
We include (iii) to prevent overflow. Since two
consecutive compound intervals have an elementary
interval in common, we need to ensure that whatever
tasks are scheduled in consecutive pairs of compound
intervals, the sum of durations of such tasks does not
exceed $3p_1$. In fact, to completely prevent overflow, we need to impose similar constraints on all sequences of consecutive compound intervals within $[0, p_m]$. The index $n$ designates the length of such a sequence of compound intervals and $l$ is the index of the first compound interval in the sequence.
Condition (iv) states that compound tasks cannot
be scheduled in compound intervals that overlap
boundaries of their request periods.
Finally, (v) imposes integer solutions because tasks
are nonpreemptive and also that elementary tasks
can be scheduled at most once during their request
periods. In addition it implies that there are only
finitely many choices for the $x_{ijl}$ which satisfy constraints (i)-(iv). Since $x_{111} = 1$ and $x_{ijl} = 0$ for all other $(i,j,l)$ is a nontrivial solution of the above system, it follows that an optimal nontrivial solution exists for the above integer program, although it may not be unique. We obtain an optimal integer solution using the package GLPK [10].
Because a solution of the above system does not
indicate the order in which tasks should be scheduled,
we must use the following inductive procedure to
convert a solution to the above integer program into
a schedule. Let $\vec{x}$ be a solution of the integer program above. Suppose we have scheduled all tasks in $\kappa_k$ for $k < l$ according to $\vec{x}$. Since $\kappa_{l-1}(2) = \kappa_l(1)$, all elementary tasks designated for $\kappa_l(1)$ are already scheduled in $\kappa_l(1)$. They are followed by the compound tasks designated for $\kappa_{l-1}$ which did not fit in $\kappa_{l-1}(1)$. Note that none of the compound tasks designated for scheduling in $\kappa_{l-1}$ and scheduled in $\kappa_{l-1}(2) = \kappa_l(1)$ can cross the boundary between $\kappa_l(1)$ and $\kappa_l(2)$. So, if there is any empty space in $\kappa_l(1)$, schedule the compound tasks designated for $\kappa_l$ in that space according to their indexing (the lexicographic ordering of their subscripts). Either all such tasks will fit in the remaining empty space in $\kappa_l(1)$, or there is a first task $d_{ij}$ that will cross into $\kappa_l(2)$ when scheduled in $\kappa_l$. In the first case, the only tasks remaining to be scheduled in $\kappa_l$ are the elementary tasks designated for $\kappa_l(2)$; scheduling those in $\kappa_l(2)$ finishes the inductive procedure for stage $l$. In the second case, where $d_{ij}$ crosses into $\kappa_l(2)$, continue scheduling the elementary tasks designated for $\kappa_l(2)$ after $d_{ij}$. Once all such elementary tasks are scheduled, finish stage $l$ by scheduling in $\kappa_l(2)$ the remaining compound tasks designated for $\kappa_l$. This can be done without crossing the boundary between $\kappa_l(2)$ and $\kappa_{l+1}(1)$ since $\vec{x}$ satisfies constraints (i)-(v). This finishes the inductive definition of a schedule from $\vec{x}$.

TABLE I
Performance Example for Priority Full Scenario

Algorithm | Achieved Radar Idle Time | Algorithm Run Time
Greedy Nondistributed | 0.134 s | 0.00102 s
Greedy Distributed | 0.126 s | 0.00521 s
Global Nondistributed | 0.134 s | 0.01141 s
Global Distributed | 0.004 s | 0.00882 s
V. SCHEDULING EXAMPLES
All algorithms in this paper were originally
implemented in MATLAB. We present in this section
an example that illustrates their performance. We
consider eight tasks with the following task durations
and corresponding request periods, with units in
seconds:
$d = (0.0744, 0.0693, 0.0682, 0.0661, 0.0651, 0.0566, 0.0453, 0.0317)$
$p = (0.25, 0.25, 0.50, 1.00, 0.50, 1.00, 0.25, 1.00)$

Tasks are ordered by decreasing duration. We also take this to be the ordering of their priorities, so that the priority ordering coincides with $\prec$. This way we can compare directly the results for the two scheduling scenarios, priority full and best fill, from Sections III and IV, respectively.
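A quick arithmetic check (our observation, not stated in the paper) shows why this example stresses the scheduler: the offered load $\sum_j d_j/p_j$ exceeds the available antenna time, so some requests necessarily go unserved.

```python
# Offered load of the eight-task example: each task of duration d_j with
# request period p_j asks for d_j / p_j of the antenna's time.
d = [0.0744, 0.0693, 0.0682, 0.0661, 0.0651, 0.0566, 0.0453, 0.0317]
p = [0.25, 0.25, 0.50, 1.00, 0.50, 1.00, 0.25, 1.00]
load = sum(dj / pj for dj, pj in zip(d, p))
print(round(load, 4))  # → 1.177, i.e. about 18% more work than time available
```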
A. Priority Full Scenario
The goal in this section is to fully schedule
the maximal number of targets while preserving
their priorities. The measure of performance we
use is the radar idle time, which is the time when
radar is neither transmitting nor receiving actual
pulses even though it may be switched in a receive
or transmit mode. From the results in Table I, the
greedy distributed algorithm outperformed the global
nondistributed algorithm. The overall winner was the
global distributed algorithm. It is not very apparent in
this example, but based on our extensive testing with
randomly generated task durations, it appears that the full scheduling constraint is a strong requirement: none of the four algorithms performs as well in the priority full scenario as in the best fill scenario. Both
scenarios indicate that the greedy distributed algorithm
is a viable algorithm to use for scheduling many tasks
when fixed time intervals between task executions are not an essential tracking requirement.
B. Best Fill Scenario
Next we tested the four algorithms in the best
fill scheduling scenario on the same tasks. The goal
TABLE II
Performance Example for Best Fill Scenario
Achieved Radar Algorithm
Algorithm Idle Time Run Time
Greedy Nondistributed 0.126 s 0.00466 s
Greedy Distributed 0.036 s 0.00843 s
Global Nondistributed 0.017 s 0.33994 s
Global Distributed 0.002 s 1.37265 s
in this section is to maximally utilize the antenna; equivalently, our intent is to minimize the radar idle time defined earlier. Table II
summarizes the performance of the four algorithms
for the eight tasks with request periods and durations
listed above.
The global algorithms yielded less idle time
than the greedy algorithms, as was intended in this
example. In our extensive testing with randomly
selected task durations the greedy distributed
algorithm outperformed the global nondistributed
algorithm more often than not. The processing time of the global algorithms is prohibitively long, so we could not test them on cases involving many more tasks.
However, the four algorithms in the priority full
scenario were implemented in the C++ programming
language and the execution speed gain of going
from MATLAB to C++ allowed for testing the
performance of the algorithms on many more tasks.
C. Tracking Covariance Updates
It is clear from the formulation of all the
scheduling scenarios we considered that the request
periods for all tasks are periodic but the executions
are not. However, all tasks are executed within their
request period and therefore we can say that the
executions are semi-periodic. We indicated in the
Introduction that this will not have a significant effect
on the size of track covariances when a Kalman
filter is employed to obtain track state updates. We
now give an example to support our claims. Figure 4
presents examples of two targets with request periods
of 0.25 s.
The first target track is updated periodically in the
middle of its request period (solid line) and the second
target state is updated at a random point in each
request period. A Kalman filter is used in both cases.
In each case the state covariance is initialized at the
same level. The plot shows a clear difference between
the two update methods. Whenever two consecutive semi-periodic updates are separated by more than the average update period of 0.25 s, the covariance reduction is smaller than that obtained with strictly periodic updates. But when updates in
the irregular case are separated, on average, by less than 0.25 s, the size of the covariance quickly catches up with the covariance obtained from strictly periodic updates. The figure also shows that the square root of the trace of both covariances quickly converges to the process noise level. Therefore, the benefits of obtaining denser schedules by the algorithms in this paper are not diminished by the semi-periodic update rates of track states. This fact makes these algorithms important additions to the suite of scheduling algorithms that already exist in the literature.

Fig. 4. Tracking covariance comparison for regular and irregular updates.
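This behavior can be reproduced with a covariance-only Kalman recursion, since the Kalman covariance depends only on the update times and not on the measured data. The sketch below is our own illustration (a 1-D constant-velocity model with process-noise intensity q and measurement variance r chosen by us, not taken from the paper), comparing strictly periodic updates against updates placed at a random point of each 0.25 s request period.

```python
import random

def kf_cov_trace(update_times, q=1.0, r=0.01):
    """Covariance-only Kalman recursion for a 1-D constant-velocity model
    (state = [position, velocity]; position is measured).  Returns the
    square root of the covariance trace after the last update.
    """
    P = [[1.0, 0.0], [0.0, 1.0]]
    t = 0.0
    for tu in update_times:
        dt = tu - t
        t = tu
        # predict: P <- F P F' + Q, with F = [[1, dt], [0, 1]] and
        # Q = q * [[dt^3/3, dt^2/2], [dt^2/2, dt]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt**3 / 3
        p01 = P[0][1] + dt * P[1][1] + q * dt * dt / 2
        p10 = P[1][0] + dt * P[1][1] + q * dt * dt / 2
        p11 = P[1][1] + q * dt
        # measurement update with H = [1, 0] and measurement variance r
        s = p00 + r
        P = [[p00 - p00 * p00 / s, p01 - p00 * p01 / s],
             [p10 - p10 * p00 / s, p11 - p10 * p01 / s]]
    return (P[0][0] + P[1][1]) ** 0.5

period = 0.25
rng = random.Random(1)
regular = [k * period + period / 2 for k in range(200)]          # periodic
jittered = [k * period + rng.uniform(0.0, period) for k in range(200)]
tr_regular = kf_cov_trace(regular)
tr_jittered = kf_cov_trace(jittered)
```

With these settings, the post-update covariance traces of the two schedules remain comparable, consistent with the qualitative behavior shown in Fig. 4.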
VI. IMPLEMENTATION
The goal of the scheduling algorithms
implementation was to create a modular, easy to
use, efficient library to test out the algorithms and
allow for incorporation into other applications.
To satisfy this goal, the priority based versions of
the four scheduling algorithms were implemented
in C++ under the GNU GCC 3.4 compiler using
object-oriented design principles with Standard
Template Library (STL) containers. Furthermore, the
global versions incorporated GLPK, v4.11, to solve
the integer program.
The components of the implementation consisted
of the following.
1) Scheduling Task: Class containing an id,
required execution time, and repetition interval.
2) Schedule: Class containing the results of the
execution of one of the scheduling algorithms and
associated utility methods.
3) Scheduler: The base class for all scheduling
algorithms. Defines publicly callable functions and
common behavior.
4) Greedy Scheduler: Greedy nondistributed
algorithm implementation.
5) Global Scheduler: Global nondistributed
algorithm implementation.
6) Greedy Distributed Scheduler: Greedy
distributed algorithm implementation.
7) Global Distributed Scheduler: Global
distributed algorithm implementation.
TABLE III
Scheduler Benchmark Results for Priority Full Scenario
Tasks | Avg Alg Run Time (s) | Avg % Resource Usage | Avg % of Tasks Scheduled

Greedy
2 | 0.000033 | 49.942 | 59.5
5 | 0.000076 | 69.9 | 71.8
7 | 0.000103 | 78.64 | 75.57
10 | 0.000127 | 80.88 | 81.4
100 | 0.001186 | 89.82 | 83.09
1000 | 0.014153 | 90.29 | 82.35

Greedy Distributed
2 | 0.000033 | 51.52 | 61.5
5 | 0.000102 | 77.507 | 81.6
7 | 0.000147 | 85.676 | 82.85
10 | 0.000227 | 87.044 | 87.6
100 | 0.008189 | 96.49 | 89.17
1000 | 0.650835 | 97.37 | 89.946

Global
2 | 0.000268 | 49.942 | 59.5
5 | 0.0383 | 71.05 | 74.4
7 | 6.1 | 80.89 | 81
10 | DNF | |
100 | DNF | |
1000 | DNF | |

Global Distributed
2 | 0.000392 | 54.58 | 65.5
5 | 0.03554 | 79.89 | 85.8
7 | 0.0889 | 88.48 | 86.86
10 | 0.05222 | 89.166 | 91
100 | DNF | |
1000 | DNF | |
Each of the scheduler classes can be initialized and used through the common base class, so users of the library need not specify globally which algorithm is being used.
VII. BENCHMARK
In order to assess the effectiveness of the
algorithms, a benchmark was created and executed
to determine performance and execution time under
a heavy load. The tasks were generated so that the
average execution time would be equal to 100%
of the scheduler's time, forcing it not to be able to
schedule all tasks. This was chosen since the goal of
these algorithms is to increase the number of dwells
scheduled under high load. The tasks were generated
using the following algorithm, where N is the total
number of tasks to generate:
1) Define a set of repetition intervals.
2) Generate N tasks such that each has an interval
randomly selected from the set.
3) Calculate an average execution time based on
the total number of tasks and selected intervals.
4) Randomly select an execution time between 0.001 s (a small number greater than 0) and twice the average execution time.
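The generation steps above can be sketched as follows. The exact formula for the average execution time is not given in the paper, so here we assume (our choice) that it is set so the expected offered load sum(d/p) equals 100% of the scheduler's time:

```python
import random

def generate_tasks(n, intervals=(0.25, 0.5, 1.0, 2.0), rng=random):
    """Benchmark task generator following steps 1-4 above.

    Assumption (ours): the average execution time is 1 / sum(1/p), which
    makes the expected total load sum(d / p) approximately equal to one.
    """
    periods = [rng.choice(intervals) for _ in range(n)]           # steps 1-2
    avg = 1.0 / sum(1.0 / p for p in periods)                     # step 3
    durations = [rng.uniform(0.001, 2.0 * avg) for _ in periods]  # step 4
    return list(zip(durations, periods))
```

Passing a seeded `random.Random` instance as `rng` makes a benchmark run reproducible.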
The benchmark was run on a Pentium dual-Xeon 2.65 GHz machine with 1 GB of RAM running the 32-bit Fedora Core 3 Linux OS. The test was single threaded, with one processor allowed for the test and one to handle system tasks. The interval set selected was $\{0.25, 0.5, 1.0, 2.0\}$. The test was executed 100 times for each algorithm for task lists of size $\{2, 5, 7, 10, 100, 1000\}$. Each of the scheduling algorithms was executed sequentially using identical tasks before moving on to the next run. The average run time, average resource usage (percentage of time spent executing tasks over the schedule), and average percentage of tasks scheduled were computed for each of the algorithms for each of the task list sizes.

Table III depicts the results of each of the sets
of runs. As discussed earlier in this paper, the results show that the distributed approaches fare better than the nondistributed approaches, and the global approaches fare better than the greedy approaches. They also show that, under a heavy workload, the greedy distributed approach is able to keep its execution time to a fraction of a second while performing within a few percentage points of the global distributed approach. The entry did not finish (DNF) indicates that GLPK could not handle the computational complexity and the computation of a feasible solution did not finish within 1 hour.
VIII. CONCLUSION
We presented optimal and computationally
efficient suboptimal dwell scheduling algorithms
for two scheduling scenarios, best fill and priority
full. We also derived an expression for optimally
interleaving pulses in a dwell transmitted toward
one target in order to minimize dwell length and
thus achieve denser dwell schedules. For each
scheduling scenario, the algorithms were further
subdivided into nondistributed and distributed
algorithms based upon whether some tasks could be
scheduled across boundaries of elementary intervals
contained within their request periods, resulting in slightly irregular update rates. We examined
the performance of all algorithms and, based on
our tests, the global algorithms produced denser
schedules, as expected, but at the significant sacrifice
of processing time for computing the schedule.
Distributed algorithms scheduled tasks more efficiently than their nondistributed counterparts. In applications
where fast processing time and efficient scheduling
performance are desired, the greedy distributed
algorithm may be the scheduling algorithm of choice
for both the best fill and priority full scenarios.
However, if one wants to examine a phased array antenna's performance limits, then one would apply the global algorithms. Global algorithms can also be used in offline testing of the scheduling efficiency of greedy algorithms. We used the open-source integer program solver GLPK, which is known to be relatively slow.
There are commercial packages which are orders of
magnitude faster than GLPK. Using them to solve the
linear programs formulated in this paper may further
expand the domain of applicability of our global
algorithms.
Zoran Spasojevic received a Ph.D. in mathematics at the University of Wisconsin—Madison in 1994.
He spent 2 years as a Postdoctoral Fulbright Fellow in mathematics at the Hebrew University of
Jerusalem—Israel from 1994—1996. He then spent 3 years in the Mathematics Department at the Massachusetts
Institute of Technology as a National Science Foundation Postdoctoral Fellow. Since then, he has been
conducting research in mathematics at MIT-Lincoln Laboratory. His areas of interest are scheduling and
optimization, orbital dynamics, and image processing.
Scot DeDeo received a B.S. degree in computer science in 2003 and a M.S. degree in computer science in 2006
from Worcester Polytechnic Institute.
He worked at MIT Lincoln Laboratory from 2003—2011 as a software engineer on projects ranging from
missile defense simulations to real-time radar systems. Since then he has left defense and is now a senior
developer for Cisco Systems working on their Enterprise Contact Center application.
Reed Jensen received a B.S. degree in physics from Brigham Young University in 2005.
Since 2005 he has worked on the research staff at MIT Lincoln Laboratory as a systems analyst.
REFERENCES
[1] Baugh, R. A.
Computer Control of Modern Radars.
New York: RCA, 1973.
[2] Van Keuk, G. and Blackman, S. S.
On phased array tracking and parameter control.
IEEE Transactions on Aerospace and Electronic Systems,
29, 1 (Jan. 1993), 186—194.
[3] Cai, Q. Y., Guo, Y., and Zhang, B. Z.
Phased Array Radar Data Processing and Its Simulation
Techniques.
Peoples Republic of China: National Defence Industry
Press, 1997.
[4] Shih, C. S., et al.
Synthesizing task periods for dwells in multi-function
phased array radars.
Proceedings of the IEEE Radar Conference, Philadelphia,
PA, 2004, pp. 145—150.
[5] Gopalakrishnan, S., et al.
Finite horizon scheduling of radar dwells with online
template construction.
Proceedings of the 25th IEEE International Real Time
System Symposium, 2004, pp. 23—33.
[6] Farina, A. and Neri, P.
Multitarget interleaved tracking for phased array radar.
IEE Proceedings F–Communication, Radar and Signal
Processing, 127, 4 (1980), 312—318.
[7] Cheng, T., He, Z., and Tang, T.
Dwell scheduling of multifunction phased array radars
based on genetic algorithm.
Proceedings of 2007 International Symposium on Intelligent
Signal Processing and Communication Systems, Nov.
28—Dec. 1, 2007.
[8] Strömberg, D.
Scheduling of track updates in phased array radars.
IEEE National Radar Conference, Ann Arbor, MI, May
1996.
[9] Vanderbei, R. J.
Linear Programming (2nd ed.).
Norwell, MA: Kluwer Academic Publishers, Oct. 2001.
[10] GNU Linear Programming Kit 4.11
http://www.gnu.org/software/glpk.
[11] Coffman, Jr., E. G., Garey, M. R., and Johnson, D. S.
Approximation Algorithms for Bin Packing.
Boston, MA: PWS Publishing Company, 1997.