Frobenius Normal Form
From Wikipedia, the free encyclopedia

Contents

1 Finite-dimensional von Neumann algebra
    1.1 Details

2 Flag (linear algebra)
    2.1 Bases
    2.2 Stabilizer
    2.3 Subspace nest
    2.4 Set-theoretic analogs
    2.5 See also
    2.6 References

3 Flat (geometry)
    3.1 Descriptions
        3.1.1 By equations
        3.1.2 Parametric
    3.2 Operations and relations on flats
        3.2.1 Intersecting, parallel, and skew flats
        3.2.2 Join
        3.2.3 Properties of operations
    3.3 Euclidean geometry
    3.4 See also
    3.5 Notes
    3.6 References
    3.7 External links

4 Frame (linear algebra)
    4.1 See also
        4.1.1 Riemannian geometry

5 Frame (signal processing)
    5.1 Application of frames in signal processing
    5.2 History
    5.3 Mathematical form of a frame
        5.3.1 Parseval's identity
        5.3.2 Frame operator
    5.4 Relation to bases
    5.5 References
        5.5.1 General
        5.5.2 Specific
    5.6 Further reading

6 Fredholm alternative
    6.1 Linear algebra
    6.2 Integral equations
    6.3 Functional analysis
    6.4 Elliptic partial differential equations
    6.5 See also
    6.6 References

7 Fredholm's theorem
    7.1 Linear algebra
    7.2 Integral equations
    7.3 Existence of solutions
    7.4 References

8 Frobenius normal form
    8.1 Motivation
    8.2 Example
    8.3 General case and theory
    8.4 A rational normal form generalizing the Jordan normal form
    8.5 See also
    8.6 References
    8.7 External links
        8.7.1 Algorithms

9 Fusion frame
    9.1 Definition
    9.2 Local frame representation
    9.3 Fusion frame operator
    9.4 References
    9.5 External links
    9.6 See also
    9.7 Text and image sources, contributors, and licenses
        9.7.1 Text
        9.7.2 Images
        9.7.3 Content license

Chapter 1

    Finite-dimensional von Neumann algebra

In mathematics, von Neumann algebras are self-adjoint operator algebras that are closed under a chosen operator topology. When the underlying Hilbert space is finite-dimensional, the von Neumann algebra is said to be a finite-dimensional von Neumann algebra. The finite-dimensional case differs from the general von Neumann algebras in that topology plays no role and they can be characterized using Wedderburn's theory of semisimple algebras.

1.1 Details

Let C^{n×n} be the n × n matrices with complex entries. A von Neumann algebra M is a self-adjoint subalgebra of C^{n×n} such that M contains the identity operator I of C^{n×n}.

Every such M as defined above is a semisimple algebra, i.e. it contains no nilpotent ideals. Suppose an element a ≠ 0 lies in a nilpotent ideal of M. Since a* ∈ M by assumption, the positive semidefinite matrix a*a lies in that nilpotent ideal. This implies (a*a)^k = 0 for some k. Because a*a is positive semidefinite, and hence hermitian, this forces a*a = 0, i.e. a = 0.

The center of a von Neumann algebra M will be denoted by Z(M). Since M is self-adjoint, Z(M) is itself a (commutative) von Neumann algebra. A von Neumann algebra N is called a factor if Z(N) is one-dimensional, that is, if Z(N) consists of the multiples of the identity I.

Theorem. Every finite-dimensional von Neumann algebra M is a direct sum of m factors, where m is the dimension of Z(M).

Proof: By Wedderburn's theory of semisimple algebras, Z(M) contains a finite orthogonal set of idempotents (projections) {P_i} such that P_i P_j = 0 for i ≠ j, Σ_i P_i = I, and

    Z(M) = ⊕_i Z(M) P_i,

where each Z(M) P_i is a commutative simple algebra. Every complex simple algebra is isomorphic to a full matrix algebra C^{k×k} for some k. But Z(M) P_i is commutative, therefore one-dimensional.

The projections P_i diagonalize M in a natural way: every a ∈ M can be uniquely decomposed as a = Σ_i a P_i. Therefore

    M = ⊕_i M P_i.

One can see that Z(M P_i) = Z(M) P_i. So each Z(M P_i) is one-dimensional and each M P_i is a factor. This proves the claim.

For general von Neumann algebras, the direct sum is replaced by the direct integral. The above is a special case of the central decomposition of von Neumann algebras.
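The central decomposition can be checked numerically in a small case. The following NumPy sketch (an illustration added here, not part of the original article) takes the algebra of block-diagonal 3 × 3 matrices, whose center is spanned by two central projections, and verifies that every element splits across the two factors:

```python
import numpy as np

# Illustrative sketch: the algebra M of block-diagonal matrices
# diag(A, b) with A a 2x2 block and b a 1x1 block has center spanned
# by the central projections P1 = diag(I_2, 0) and P2 = diag(0, 1).
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
b = rng.standard_normal()
M = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), np.array([[b]])]])

# P1, P2 are orthogonal idempotents summing to the identity ...
assert np.allclose(P1 @ P2, 0)
assert np.allclose(P1 + P2, np.eye(3))
# ... and M decomposes as M = M P1 + M P2, one summand per factor.
assert np.allclose(M, M @ P1 + M @ P2)
```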


Chapter 2

    Flag (linear algebra)

In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration):

    {0} = V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯ ⊂ V_k = V.

If we write dim V_i = d_i, then we have

    0 = d_0 < d_1 < d_2 < ⋯ < d_k = n,

where n is the dimension of V (assumed to be finite). Hence, we must have k ≤ n. A flag is called a complete flag if d_i = i for all i; otherwise it is called a partial flag.

A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces.

The signature of the flag is the sequence (d_1, …, d_k).

Under certain conditions the resulting sequence resembles a flag with a point connected to a line connected to a surface.

2.1 Bases

An ordered basis for V is said to be adapted to a flag if the first d_i basis vectors form a basis for V_i for each 0 ≤ i ≤ k. Standard arguments from linear algebra show that any flag has an adapted basis.

Any ordered basis gives rise to a complete flag by letting V_i be the span of the first i basis vectors. For example, the standard flag in R^n is induced from the standard basis (e_1, …, e_n), where e_i denotes the vector with a 1 in the i-th slot and 0s elsewhere. Concretely, the standard flag is the chain of subspaces

    0 < ⟨e_1⟩ < ⟨e_1, e_2⟩ < ⋯ < ⟨e_1, …, e_n⟩ = R^n.

An adapted basis is almost never unique (the counterexamples are trivial); see below.

A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (a scalar of unit length, such as 1, −1, i). This is easiest to prove inductively, by noting that v_i ∈ V_{i−1}^⊥ ∩ V_i, which defines v_i uniquely up to a unit.

More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel subgroup, and the inner product corresponds to the maximal compact subgroup.[1]
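In computational terms, an orthonormal basis adapted to the same flag can be obtained from any adapted basis by Gram–Schmidt, i.e. a QR decomposition. A NumPy sketch (illustrative; the basis below is an arbitrary choice):

```python
import numpy as np

# Columns of B form an ordered basis; V_i = span of the first i columns.
B = np.array([[1.0, 1.0, 2.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

Q, R = np.linalg.qr(B)
# Q has orthonormal columns, and because R is upper triangular,
# span(Q[:, :i]) = span(B[:, :i]) for every i: Q is adapted to the same flag.
assert np.allclose(Q.T @ Q, np.eye(3))
for i in (1, 2, 3):
    assert np.allclose(B[:, :i], Q[:, :i] @ R[:i, :i])
```

Each column of Q is determined up to sign, matching the "unique up to a unit" statement above (over R the only units are ±1).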



2.2 Stabilizer

The stabilizer subgroup of the standard flag is the group of invertible upper triangular matrices.

More generally, the stabilizer of a flag (the set of linear operators on V such that T(V_i) ⊂ V_i for all i) is, in matrix terms, the algebra of block upper triangular matrices (with respect to an adapted basis), where the block sizes are d_i − d_{i−1}.

The stabilizer subgroup of a complete flag is the set of invertible upper triangular matrices with respect to any basis adapted to the flag. The subgroup of lower triangular matrices with respect to such a basis depends on that basis, and can therefore not be characterized in terms of the flag alone.

The stabilizer subgroup of any complete flag is a Borel subgroup (of the general linear group), and the stabilizer of any partial flag is a parabolic subgroup.

The stabilizer subgroup of a flag acts simply transitively on adapted bases for the flag, and thus these are not unique unless the stabilizer is trivial. That is a very exceptional circumstance: it happens only for a vector space of dimension 0, or for a vector space over F_2 of dimension 1 (precisely the cases where only one basis exists, independently of any flag).
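A quick numerical sanity check (an illustration, not from the article) that an invertible upper triangular matrix stabilizes the standard flag, i.e. maps each V_i = span(e_1, …, e_i) into itself:

```python
import numpy as np

# An invertible upper triangular matrix T sends each e_{j+1} to a vector
# supported on coordinates 0..j, hence T(V_i) is contained in V_i for all i.
T = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 4.0]])
assert np.linalg.det(T) != 0  # invertible

for j in range(3):
    image = T @ np.eye(3)[:, j]           # T applied to e_{j+1}
    assert np.allclose(image[j + 1:], 0)  # image stays inside V_{j+1}
```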

2.3 Subspace nest

In an infinite-dimensional space V, as used in functional analysis, the flag idea generalises to a subspace nest, namely a collection of subspaces of V that is totally ordered by inclusion and which is moreover closed under arbitrary intersections and closed linear spans. See nest algebra.

2.4 Set-theoretic analogs

Further information: Field with one element

From the point of view of the field with one element, a set can be seen as a vector space over the field with one element: this formalizes various analogies between Coxeter groups and algebraic groups.

Under this correspondence, an ordering on a set corresponds to a maximal flag: an ordering is equivalent to a maximal filtration of a set. For instance, the filtration (flag) {0} ⊂ {0, 1} ⊂ {0, 1, 2} corresponds to the ordering (0, 1, 2).

2.5 See also

    Filtration (mathematics)
    Flag manifold
    Grassmannian

2.6 References

[1] Harris, Joe (1991). Representation Theory: A First Course, p. 95. Springer. ISBN 0387974954.

    Shafarevich, I. R.; Remizov, A. O. (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9.

Chapter 3

    Flat (geometry)

In geometry, a flat is a subset of n-dimensional space that is congruent to a Euclidean space of lower dimension. The flats in two-dimensional space are points and lines, and the flats in three-dimensional space are points, lines, and planes. In n-dimensional space, there are flats of every dimension from 0 to n − 1.[1] Flats of dimension n − 1 are called hyperplanes.

Flats are similar to linear subspaces, except that they need not pass through the origin. If Euclidean space is considered as an affine space, the flats are precisely the affine subspaces. Flats are important in linear algebra, where they provide a geometric realization of the solution set for a system of linear equations.

A flat is also called a linear manifold or linear variety.

    3.1 Descriptions

3.1.1 By equations

A flat can be described by a system of linear equations. For example, a line in two-dimensional space can be described by a single linear equation involving x and y:

    3x + 5y = 8.

In three-dimensional space, a single linear equation involving x, y, and z defines a plane, while a pair of linear equations can be used to describe a line. In general, a linear equation in n variables describes a hyperplane, and a system of linear equations describes the intersection of those hyperplanes. Assuming the equations are consistent and linearly independent, a system of k equations describes a flat of dimension n − k.

3.1.2 Parametric

A flat can also be described by a system of linear parametric equations. A line can be described by equations involving one parameter:

    x = 2 + 3t,    y = −1 + t,    z = 3/2 − 4t,

while the description of a plane would require two parameters:

    x = 5 + 2t_1 − 3t_2,    y = −4 + t_1 + 2t_2,    z = 5t_1 − 3t_2.

In general, a parameterization of a flat of dimension k would require parameters t_1, …, t_k.
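The two descriptions are interconvertible. As an illustrative NumPy sketch (the pair of planes below is an arbitrary choice), a particular solution plus a basis of the null space of the coefficient matrix turns a system of equations into a parametric description of the same flat:

```python
import numpy as np

# Two independent linear equations in R^3 cut out a line (a 1-flat):
# the solution set is a particular solution plus the null space of A.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([6.0, 5.0])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution
_, s, Vt = np.linalg.svd(A)
null = Vt[np.sum(s > 1e-12):].T             # basis of the null space of A

assert null.shape[1] == 3 - 2               # flat dimension = n - k equations
for t in (-1.0, 0.0, 2.5):
    # every point x0 + t * direction satisfies A x = b
    assert np.allclose(A @ (x0 + t * null[:, 0]), b)
```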



3.2 Operations and relations on flats

3.2.1 Intersecting, parallel, and skew flats

An intersection of flats is either a flat or the empty set.[2]

If every line from the first flat is parallel to some line from the second flat, then these flats are parallel. Two parallel flats of the same dimension either coincide or do not intersect; they can be described by two systems of linear equations which differ only in their right-hand sides.

If the flats do not intersect, and no line from the first flat is parallel to a line from the second flat, then these are skew flats. This is possible only if the sum of their dimensions is less than the dimension of the ambient space.

    3.2.2 Join

For two flats of dimensions k_1 and k_2 there exists a minimal flat which contains them, of dimension at most k_1 + k_2 + 1. If the two flats intersect, then the dimension of the containing flat equals k_1 + k_2 minus the dimension of the intersection.

    3.2.3 Properties of operations

These two operations (referred to as meet and join) make the set of all flats in Euclidean n-space a lattice, and can be used to build systematic coordinates for flats in any dimension, leading to Grassmann coordinates or dual Grassmann coordinates. For example, a line in three-dimensional space is determined by two distinct points or by two distinct planes.

However, the lattice of all flats is not a distributive lattice. If two lines ℓ_1 and ℓ_2 intersect, then ℓ_1 ∩ ℓ_2 is a point. If p is a point not lying on the same plane, then (ℓ_1 ∩ ℓ_2) + p = (ℓ_1 + p) ∩ (ℓ_2 + p), both sides representing a line. But when ℓ_1 and ℓ_2 are parallel, this distributivity fails, giving p on the left-hand side and a third parallel line on the right-hand side. The ambient space would have to be a projective space to accommodate intersections of parallel flats, which lead to objects "at infinity".

3.3 Euclidean geometry

The aforementioned facts do not depend on the structure being that of Euclidean space (namely, involving Euclidean distance) and are correct in any affine space. In a Euclidean space:

    There is the distance between a flat and a point. (See for example Distance from a point to a plane and Distance from a point to a line.)

    There is the distance between two flats, equal to 0 if they intersect. (See for example Distance between two lines (in the same plane) and Skew lines#Distance.)

    If two flats intersect, then there is the angle between the two flats, which belongs to the interval [0, π/2] between 0 and the right angle. (See for example Dihedral angle (between two planes).)

    3.4 See also

    N-dimensional space

    Matroid

    Coplanarity


3.5 Notes

[1] In addition, all of n-dimensional space is sometimes considered an n-dimensional flat as a subset of itself.

[2] The empty set can be considered as a (−1)-flat.

3.6 References

    Heinrich Guggenheimer (1977). Applicable Geometry, page 7. Krieger, New York.

    Stolfi, Jorge (1991). Oriented Projective Geometry. Academic Press. ISBN 978-0-12-672025-9. From the original Stanford Ph.D. dissertation, Primitives for Computational Geometry, available as DEC SRC Research Report 36.

    PlanetMath: linear manifold

3.7 External links

    Weisstein, Eric W., "Hyperplane", MathWorld.
    Weisstein, Eric W., "Flat", MathWorld.

Chapter 4

    Frame (linear algebra)

In mathematics, a frame of a vector space V is either of two distinct notions, both generalizing the notion of a basis:

    In one definition, a k-frame is an ordered set of k linearly independent vectors in a space; thus k ≤ n, the dimension of the vector space, and if k = n an n-frame is precisely an ordered basis.

        If the vectors are orthogonal or orthonormal, the frame is called an orthogonal frame or orthonormal frame, respectively.

    In the other definition, a frame is a certain type of ordered set of vectors that spans a space. Thus k ≥ n.

These are rarely confused and are generally clear from context, as the former is a basic concept in finite-dimensional geometry, such as Stiefel manifolds, while the latter is most used in analysis. Further, the former must have at most as many elements as the dimension of the space, while the latter must have at least as many elements as the dimension of the space, so the only overlapping sets are bases.

4.1 See also

    k-frame
    Frame of a vector space

4.1.1 Riemannian geometry

    Orthonormal frame
    Moving frame
    Overcompleteness


Chapter 5

    Frame (signal processing)

A frame provides a way of deriving redundant, yet stable, representations of signals;[1] in other words, a frame is a set of vectors that spans, exactly or overcompletely, a particular normed vector space. Being able to find solutions to systems that employ frames has important applications in many fields such as signal processing, mathematics, computer science, and engineering.[2]

5.1 Application of frames in signal processing

Signals can carry multiple types of information. Furthermore, by using redundant signals (frames) it is possible to create a simpler, more sparse representation over a family of elementary signals (i.e. representing a signal strictly with a set of linearly independent vectors may not always be the most compact form).[3] Frames, therefore, provide robustness. Because they provide more than one way of producing the same vector within a space, signals can be encoded in various ways. This facilitates fault tolerance and resiliency to loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals. Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory.[4]

5.2 History

The Fourier transform has been used for over a century as a way of decomposing and expanding signals:

    f(t) = (1/2π) ∫_{−∞}^{+∞} f̂(ω) e^{iωt} dω.

However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, D. Gabor was able to address this with a technique that simultaneously reduced noise, provided resiliency, and created quantization while encapsulating important signal characteristics.[5] This discovery marked the first concerted effort towards frame theory.

Duffin and Schaeffer, who were studying problems related to nonharmonic Fourier series, independently discovered the concept of a frame, which they called a Hilbert space frame, in 1952; but it took the additional work of Daubechies, Grossmann, and Meyer's study on wavelets for others to see the potential of applying frames to a wider variety of fields. Today frames are associated with wavelets, signal and image processing, and data compression.

5.3 Mathematical form of a frame

Frames weaken Parseval's identity while still maintaining norm equivalence between a signal and its sequence of coefficients. While the signals are assumed to be vectors in a vector space, it is practical to let a frame operate in a normed vector space, such as a Hilbert space. A frame can be defined by a family of vectors (φ_i)_{i=1}^M in an N-dimensional Hilbert space H: the family (φ_i)_{i=1}^M is a frame if there exist constants A and B such that 0 < A ≤ B < ∞ and[6]

    A ‖x‖² ≤ Σ_{i=1}^M |⟨x, φ_i⟩|² ≤ B ‖x‖²    for all x ∈ H.

The constants A and B are called the lower and upper frame bounds, and the values (⟨x, φ_i⟩)_{i=1}^M are called the frame coefficients of the vector x with respect to the frame (φ_i)_{i=1}^M.[7]

When A and B take certain values, the type of frame under consideration changes. The largest lower frame bound and smallest upper frame bound are denoted by A_op and B_op; these are called the optimal frame bounds.

1. Bessel sequence: any family that satisfies the right side (upper bound) of the inequality.
2. Tight frame: A = B.
3. Parseval frame: A = B = 1.
4. Equal-norm frame: there exists a constant c such that ‖φ_i‖ = c for all i ∈ [1, M].
5. Unit-norm frame: there exists a constant c such that ‖φ_i‖ = c and c = 1 for all i ∈ [1, M].
6. Equiangular frame: there exists a constant c such that |⟨φ_i, φ_j⟩| = c for all i ≠ j.
7. Exact frame: a frame such that the removal of any φ_i from the set results in an incomplete span of the space.
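As a numerical illustration (added here, not from the article), the optimal frame bounds are the extreme eigenvalues of the frame operator S = Σ_i φ_i φ_iᵀ. The classic "Mercedes-Benz" frame of three unit vectors in R² at 120° is a tight frame with A = B = 3/2:

```python
import numpy as np

# Three unit vectors in R^2 separated by 120 degrees.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows = frame vectors

# Frame operator S = sum_i phi_i phi_i^T; its smallest and largest
# eigenvalues are the optimal frame bounds A_op and B_op.
S = phi.T @ phi
eigs = np.linalg.eigvalsh(S)  # ascending order
A_op, B_op = eigs[0], eigs[-1]

assert np.isclose(A_op, 1.5) and np.isclose(B_op, 1.5)  # tight: A = B = 3/2
```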

If (φ_i)_{i=1}^M is a family of vectors in H, the following properties hold:

1. If a frame is an orthonormal basis, then it is a Parseval frame (the converse does not always hold).
2. A family is a frame for H if and only if it spans H.
3. A frame is a unit-norm Parseval frame if and only if it is an orthonormal basis.
4. A frame is an exact frame if and only if it is a basis of H, i.e. each vector within the frame is linearly independent of the others.

The last rule implies that frames are a more general form of a basis, and that a frame is a basis precisely when the linear combinations of the set uniquely describe every point within the space.

5.3.1 Parseval's identity

If the frame is an orthonormal basis, then for every x ∈ H:

    ‖x‖² = Σ_{i=1}^N |⟨x, φ_i⟩|².

This is known as Parseval's identity; it ensures that the map from a signal to its coefficients preserves the energy of the signal.
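A quick numerical check of the identity (illustrative; the orthonormal basis here is generated by a QR decomposition of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # columns: orthonormal basis
x = rng.standard_normal(4)

coeffs = Q.T @ x  # the coefficients <x, phi_i> for each basis vector
# Parseval: the energy of the coefficient sequence equals that of the signal.
assert np.isclose(np.sum(coeffs**2), np.dot(x, x))
```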

5.3.2 Frame operator

The frame operator is a critical part of a frame because it encodes properties that are necessary for the reconstruction of signals. If (φ_i)_{i=1}^M is a sequence of vectors in H with associated analysis operator T, then the associated frame operator S : H → H is defined by

    S x = T* T x = Σ_{i=1}^M ⟨x, φ_i⟩ φ_i,    for x ∈ H.

The frame operator S is self-adjoint and positive definite; more importantly, if the vector sequence forms a frame then S is invertible with a bounded inverse, which allows for the reconstruction of a signal from its frame coefficients.
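A minimal reconstruction sketch (illustrative, with a random spanning family): given the frame coefficients ⟨x, φ_i⟩, the signal is recovered by applying S⁻¹ to the synthesized sum:

```python
import numpy as np

rng = np.random.default_rng(2)
phi = rng.standard_normal((5, 3))        # 5 frame vectors in R^3 (M > N)
assert np.linalg.matrix_rank(phi) == 3   # spanning, hence a frame for R^3

S = phi.T @ phi                          # frame operator: S x = sum <x, phi_i> phi_i
x = rng.standard_normal(3)
coeffs = phi @ x                         # frame coefficients <x, phi_i>

# S is invertible, so x is reconstructed from its frame coefficients:
# x = S^{-1} sum_i <x, phi_i> phi_i.
x_rec = np.linalg.solve(S, phi.T @ coeffs)
assert np.allclose(x_rec, x)
```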


5.4 Relation to bases

A subset Φ of a finite-dimensional vector space V is a basis of V if V = span(Φ) and the vectors of Φ are linearly independent. Equivalently, if V is equipped with a norm and a subset Φ of the frame (φ_i)_{i∈I} can express every x ∈ V using a unique set of scalars, x = Σ_{i∈I} x_i φ_i, then Φ is a basis.[8]

Basis functions provide a unique way of linearly expressing a function within a function space. Where a frame within a function space would ordinarily provide a non-unique representation, a basis has exactly the number of terms necessary to reproduce a particular class of functions.

5.5 References

5.5.1 General

    Kovačević, Jelena; Chebira, Amina (2008). "An Introduction to Frames". Foundations and Trends in Signal Processing 2 (1): 1–94. doi:10.1561/2000000006.

    Casazza, Peter; Kutyniok, Gitta; Philipp, Friedrich (2013). Finite Frames: Theory and Applications. Berlin: Birkhäuser. ISBN 978-0-8176-8372-6.

    Cahill, James; Casazza, Peter; Kutyniok, Gitta (Summer 2013). "Operators and Frames". Journal of Operator Theory 70 (1).

    Mallat, Stéphane (9 October 2008). A Wavelet Tour of Signal Processing.

5.5.2 Specific

[1] Kovačević and Chebira 2008, p. 6

[2] Introduction to Finite Frame Theory, pp. 1, 15

[3] Mallat 2009, p. 1

[4] Casazza et al. 2013, p. 2

[5] Kovačević and Chebira 2008, p. 6

[6] Casazza et al., p. 14

[7] Casazza et al. 2013, pp. 1, 15

[8] Kovačević and Chebira 2008, p. 7

    5.6 Further reading

Chapter 6

    Fredholm alternative

In mathematics, the Fredholm alternative, named after Ivar Fredholm, is one of Fredholm's theorems and is a result in Fredholm theory. It may be expressed in several ways: as a theorem of linear algebra, a theorem about integral equations, or a theorem on Fredholm operators. Part of the result states that a non-zero complex number in the spectrum of a compact operator is an eigenvalue.

6.1 Linear algebra

If V is an n-dimensional vector space and T : V → V is a linear transformation, then exactly one of the following holds:

1. For each vector v in V there is a vector u in V so that T(u) = v. In other words: T is surjective (and so also bijective, since V is finite-dimensional).

2. dim(ker(T)) > 0.

A more elementary formulation, in terms of matrices, is as follows. Given an m×n matrix A and an m×1 column vector b, exactly one of the following must hold:

1. Either: A x = b has a solution x,
2. Or: Aᵀ y = 0 has a solution y with yᵀ b ≠ 0.

In other words, A x = b has a solution (b ∈ Im(A)) if and only if for every y such that Aᵀ y = 0, we have yᵀ b = 0 (that is, b ∈ ker(Aᵀ)^⊥).
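The matrix form of the alternative can be checked directly. An illustrative NumPy sketch with a rank-deficient matrix (the numbers are an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, so A x = b is not solvable for every b

# y spans ker(A^T); by the alternative, b lies in Im(A) iff y . b = 0.
y = np.array([2.0, -1.0])
assert np.allclose(A.T @ y, 0)

b_good = np.array([1.0, 2.0])  # y . b_good = 0, so b_good is in Im(A)
b_bad = np.array([1.0, 0.0])   # y . b_bad != 0, so b_bad is not in Im(A)
assert np.isclose(y @ b_good, 0)
assert not np.isclose(y @ b_bad, 0)

x, *_ = np.linalg.lstsq(A, b_good, rcond=None)
assert np.allclose(A @ x, b_good)  # the consistent system is solved exactly
```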

6.2 Integral equations

Let K(x, y) be an integral kernel, and consider the homogeneous equation, the Fredholm integral equation,

    λ φ(x) − ∫_a^b K(x, y) φ(y) dy = 0,

and the inhomogeneous equation

    λ φ(x) − ∫_a^b K(x, y) φ(y) dy = f(x).

The Fredholm alternative is the statement that, for every non-zero fixed complex number λ ∈ C, either the first equation has a non-trivial solution, or the second equation has a solution for all f(x).



A sufficient condition for this statement to hold is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity). The integral operator defined by such a K is called a Hilbert–Schmidt integral operator.
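As a numerical illustration (added here; the smooth kernel is an arbitrary choice), the inhomogeneous equation can be discretized by the Nyström method. When λ exceeds the operator norm of K it cannot be a characteristic value, and the discretized system is uniquely solvable:

```python
import numpy as np

# Nystrom discretization (illustrative) of the inhomogeneous equation
#   lam * phi(x) - int_0^1 K(x, y) phi(y) dy = f(x)
# using the trapezoidal rule on [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5                                  # trapezoid weights
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # kernel matrix K(x_i, y_j)

# lam = 2 exceeds the operator norm of this kernel (its row sums are < 1),
# so lam is not a characteristic value and (lam*I - K) is invertible.
lam = 2.0
M = lam * np.eye(n) - K * w[None, :]

f = np.sin(np.pi * x)                         # an arbitrary right-hand side
phi = np.linalg.solve(M, f)                   # the equation has a solution
assert np.allclose(M @ phi, f)
```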

6.3 Functional analysis

Results on the Fredholm operator generalize these results to vector spaces of infinite dimensions, Banach spaces.

The integral equation can be reformulated in terms of operator notation as follows. Write (somewhat informally)

    T = λ − K

to mean

    T(x, y) = λ δ(x − y) − K(x, y),

with δ(x − y) the Dirac delta function, considered as a distribution, or generalized function, in two variables. Then by convolution, T induces a linear operator acting on a Banach space V of functions φ(x), which we also call T, so that

    T : V → V

is given by

    φ ↦ ψ,

with ψ given by

    ψ(x) = ∫_a^b T(x, y) φ(y) dy = λ φ(x) − ∫_a^b K(x, y) φ(y) dy.

In this language, the Fredholm alternative for integral equations is seen to be analogous to the Fredholm alternative for finite-dimensional linear algebra.

The operator K given by convolution with an L² kernel, as above, is known as a Hilbert–Schmidt integral operator. Such operators are always compact. More generally, the Fredholm alternative is valid when K is any compact operator. The Fredholm alternative may be restated in the following form: a nonzero λ is either an eigenvalue of K, or it lies in the domain of the resolvent

    R(λ; K) = (K − λ Id)⁻¹.

6.4 Elliptic partial differential equations

The Fredholm alternative can be applied to solving linear elliptic boundary value problems. The basic result is: if the equation and the appropriate Banach spaces have been set up correctly, then either

    (1) the homogeneous equation has a nontrivial solution, or

    (2) the inhomogeneous equation can be solved uniquely for each choice of data.


    The argument goes as follows. A typical simple-to-understand elliptic operator L would be the Laplacian plus somelower order terms. Combined with suitable boundary conditions and expressed on a suitable Banach space X (whichencodes both the boundary conditions and the desired regularity of the solution), L becomes an unbounded operatorfrom X to itself, and one attempts to solve

    Lu = f; u 2 dom(L) X;

    where f X is some function serving as data for which we want a solution. The Fredholm alternative, together withthe theory of elliptic equations, will enable us to organize the solutions of this equation.A concrete example would be an elliptic boundary-value problem like

    () Lu := u+ h(x)u = f in;

    supplemented with the boundary condition

    () u = 0 on@;

    where Rn is a bounded open set with smooth boundary and h(x) is a xed coecient function (a potential, in thecase of a Schroedinger operator). The function f X is the variable data for which we wish to solve the equation.Here one would take X to be the space L2() of all square-integrable functions on , and dom(L) is then the Sobolevspace W 2,2() W01,2(), which amounts to the set of all square-integrable functions on whose weak rst andsecond derivatives exist and are square-integrable, and which satisfy a zero boundary condition on .If X has been selected correctly (as it has in this example), then for 0 >> 0 the operator L + 0 is positive, andthen employing elliptic estimates, one can prove that L+0 : dom(L) X is a bijection, and its inverse is a compact,everywhere-dened operator K from X to X, with image equal to dom(L). We x one such 0, but its value is notimportant as it is only a tool.We may then transform the Fredholm alternative, stated above for compact operators, into a statement about thesolvability of the boundary-value problem (*)-(**). The Fredholm alternative, as stated above, asserts:

For each λ ∈ R, either λ is an eigenvalue of K, or the operator K - λ is bijective from X to itself.

Let us explore the two alternatives as they play out for the boundary-value problem. Suppose λ ≠ 0. Then either

(A) λ is an eigenvalue of K ⇔ there is a solution h ∈ dom(L) of (L + μ0)h = λ^{-1}h ⇔ -μ0 + λ^{-1} is an eigenvalue of L.

(B) The operator K - λ : X → X is a bijection ⇔ (K - λ)(L + μ0) = Id - λ(L + μ0) : dom(L) → X is a bijection ⇔ L + μ0 - λ^{-1} : dom(L) → X is a bijection.

Replacing -μ0 + λ^{-1} by λ, and treating the case λ = -μ0 separately, this yields the following Fredholm alternative for an elliptic boundary-value problem:

For each λ ∈ R, either the homogeneous equation (L - λ)u = 0 has a nontrivial solution, or the inhomogeneous equation (L - λ)u = f possesses a unique solution u ∈ dom(L) for each given datum f ∈ X.

The latter function u solves the boundary-value problem (*)-(**) introduced above. This is the dichotomy that was claimed in (1)-(2) above. By the spectral theorem for compact operators, one also obtains that the set of λ for which the solvability fails is a discrete subset of R (the eigenvalues of L). The eigenvalues can be thought of as resonances that block the solvability of the equation.
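This dichotomy is easy to observe numerically. The sketch below (our illustration, assuming numpy; not from the original article) discretizes the 1-D Dirichlet problem with L = -d^2/dx^2 on (0, 1) by finite differences: (L - λ)u = f is uniquely solvable when λ avoids the spectrum, while at an eigenvalue (approximately (kπ)^2) the shifted matrix becomes numerically singular.

```python
import numpy as np

# Finite-difference discretization of L = -d^2/dx^2 on (0, 1) with
# zero Dirichlet boundary conditions (a stand-in for the elliptic operator).
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

f = np.sin(3 * np.pi * x) + x * (1 - x)   # some data f

# lambda away from the spectrum of L: (L - lambda) u = f is uniquely solvable.
lam = 5.0
u = np.linalg.solve(L - lam * np.eye(n), f)
assert np.allclose((L - lam * np.eye(n)) @ u, f)

# At an eigenvalue of L (the smallest approximates pi^2), solvability fails:
# the shifted matrix is numerically singular.
eigs = np.linalg.eigvalsh(L)
assert abs(eigs[0] - np.pi**2) < 0.1
assert np.linalg.cond(L - eigs[0] * np.eye(n)) > 1e8
```

The huge condition number at λ = eigs[0] is the discrete trace of the resonance described above.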

6.5 See also

Spectral theory of compact operators


6.6 References

Fredholm, E. I. (1903). "Sur une classe d'équations fonctionnelles". Acta Math. 27: 365-390.

A. G. Ramm, "A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators", American Mathematical Monthly, 108 (2001) p. 855.

Khvedelidze, B. V. (2001), "Fredholm theorems for integral equations", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.

    Weisstein, Eric W., Fredholm Alternative, MathWorld.

Chapter 7

Fredholm's theorem

In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of the Fredholm operator on Banach spaces.

The Fredholm alternative is one of the Fredholm theorems.

7.1 Linear algebra

Fredholm's theorem in linear algebra is as follows: if M is a matrix, then the orthogonal complement of the row space of M is the null space of M:

(row M)^⊥ = ker M.

Similarly, the orthogonal complement of the column space of M is the null space of the adjoint:

(col M)^⊥ = ker M*.
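In finite dimensions this is easy to confirm numerically. The sketch below (an illustration assuming numpy; not part of the original article) extracts a null-space basis of M from its SVD and checks that every null vector is orthogonal to every row of M:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # dependent row: rank(M) = 2
              [0.0, 1.0, 1.0]])

# Null space of M from the SVD: right singular vectors belonging to
# vanishing singular values span ker M.
_, s, Vt = np.linalg.svd(M)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T            # columns span ker M

# Fredholm's theorem (linear algebra): ker M = (row M)^perp,
# i.e. every null vector is orthogonal to every row of M.
assert np.allclose(M @ null_basis, 0)

# Dimension count: dim ker M + dim row M = number of columns.
assert null_basis.shape[1] + rank == M.shape[1]
```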

7.2 Integral equations

Fredholm's theorem for integral equations is expressed as follows. Let K(x, y) be an integral kernel, and consider the homogeneous equation

∫_a^b K(x, y) φ(y) dy = λ φ(x)

and its complex adjoint

∫_a^b ψ(x) K̄(x, y) dx = λ̄ ψ(y).

Here, λ̄ denotes the complex conjugate of the complex number λ, and similarly for K̄(x, y). Then, Fredholm's theorem is that, for any fixed value of λ, these equations have either the trivial solution ψ(x) = φ(x) = 0 or the same number of linearly independent solutions φ_1(x), ..., φ_n(x) and ψ_1(y), ..., ψ_n(y).

A sufficient condition for this theorem to hold is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity).

Here, the integral is expressed as a one-dimensional integral on the real number line. In Fredholm theory, this result generalizes to integral operators on multi-dimensional spaces, including, for example, Riemannian manifolds.
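The matrix analogue of this equal-count statement is that M = K - λI and its conjugate transpose always have null spaces of the same dimension, since rank(M) = rank(M*). A quick numerical check, assuming numpy (our illustration, not from the article):

```python
import numpy as np

# Matrix analogue of the kernel: the homogeneous equation K phi = lam phi
# and its adjoint K* psi = conj(lam) psi have equally many independent
# solutions, because rank(M) = rank(M*) for M = K - lam * I.
K = np.array([[3, 1, 0],
              [0, 3, 0],
              [0, 0, 2]], dtype=complex)
lam = 3.0

M = K - lam * np.eye(3)
M_adj = M.conj().T

def nullity(A, tol=1e-10):
    # number of linearly independent solutions of A x = 0
    return A.shape[1] - np.linalg.matrix_rank(A, tol)

assert nullity(M) == nullity(M_adj) == 1
```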



7.3 Existence of solutions

One of the Fredholm theorems, closely related to the Fredholm alternative, concerns the existence of solutions to the inhomogeneous Fredholm equation

λ φ(x) - ∫_a^b K(x, y) φ(y) dy = f(x).

Solutions to this equation exist if and only if the function f(x) is orthogonal to the complete set of solutions {ψ_n(x)} of the corresponding homogeneous adjoint equation:

∫_a^b ψ̄_n(x) f(x) dx = 0,

where ψ̄_n(x) is the complex conjugate of ψ_n(x) and the former is one of the complete set of solutions to

λ ψ̄(y) - ∫_a^b ψ̄(x) K(x, y) dx = 0.

A sufficient condition for this theorem to hold is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b].
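In the matrix analogue, (λI - K)φ = f is solvable exactly when f is orthogonal to every null vector of the adjoint (λI - K)*. A sketch assuming numpy (hypothetical matrices, for illustration only):

```python
import numpy as np

K = np.array([[1.0, 2.0],
              [2.0, 4.0]])
lam = 0.0                      # lam*I - K = -K is singular (rank 1)
M = lam * np.eye(2) - K

# Null space of the adjoint M* (here M is real symmetric, so M* = M^T).
_, s, Vt = np.linalg.svd(M.conj().T)
psi = Vt[int(np.sum(s > 1e-10)):].T   # columns span ker M*

f_good = np.array([1.0, 2.0])  # orthogonal to ker M*  -> solvable
f_bad = np.array([2.0, -1.0])  # lies in ker M*        -> not solvable

assert np.allclose(psi.T @ f_good, 0)
phi, res, *_ = np.linalg.lstsq(M, f_good, rcond=None)
assert np.allclose(M @ phi, f_good)          # an exact solution exists

assert not np.allclose(psi.T @ f_bad, 0)
phi_bad, *_ = np.linalg.lstsq(M, f_bad, rcond=None)
assert not np.allclose(M @ phi_bad, f_bad)   # no exact solution
```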

7.4 References

E. I. Fredholm, "Sur une classe d'équations fonctionnelles", Acta Math., 27 (1903) pp. 365-390.

Weisstein, Eric W., "Fredholm's Theorem", MathWorld.

B. V. Khvedelidze (2001), "Fredholm theorems for integral operators", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.

Chapter 8

    Frobenius normal form

In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after the German mathematician Ferdinand Georg Frobenius.

Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimal number of cyclic subspaces, the primary form decomposes into a maximal number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.

    8.1 Motivation

When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance, if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, it has various drawbacks as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally, A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.

But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is instead based on using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, a notion analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace.

A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However, it is possible that cyclic subspaces do allow a decomposition as a direct sum of smaller cyclic subspaces


(essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden, to exclude trivial cyclic subspaces of dimension 0). The resulting list of polynomials is called the invariant factors of (the K[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form.
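The cyclic-subspace construction can be made concrete. The sketch below (our illustration, assuming numpy) builds the Krylov sequence v, Av, A^2 v, ... for a vector v, stops when dependence appears, and reads off the coefficients that give the companion polynomial of A on that cyclic subspace:

```python
import numpy as np

def cyclic_subspace(A, v, tol=1e-10):
    """Return (basis, coeffs): columns v, Av, ..., A^(d-1) v spanning the
    cyclic subspace generated by v, and coefficients c with
    A^d v = c[0] v + c[1] A v + ... + c[d-1] A^(d-1) v,
    so the local minimal polynomial is x^d - c[d-1] x^(d-1) - ... - c[0]."""
    basis = [v]
    while True:
        w = A @ basis[-1]
        B = np.column_stack(basis)
        c, *_ = np.linalg.lstsq(B, w, rcond=None)
        if np.linalg.norm(B @ c - w) < tol:   # w became dependent: stop
            return B, c
        basis.append(w)

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])      # rotation by 90 degrees
B, c = cyclic_subspace(A, np.array([1.0, 0.0]))

# The cyclic subspace is all of R^2 and A^2 v = -v,
# so the companion polynomial is x^2 + 1.
assert B.shape == (2, 2)
assert np.allclose(c, [-1.0, 0.0])
```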

    8.2 Example

    Consider the following matrix A, over Q:

A =

  [ -1   3  -1   0  -2   0   0  -2 ]
  [ -1  -1   1   1  -2  -1   0  -1 ]
  [ -2  -6   4   3  -8  -4  -2   1 ]
  [ -1   8  -3  -1   5   2   3  -3 ]
  [  0   0   0   0   0   0   0   1 ]
  [  0   0   0   0  -1   0   0   0 ]
  [  1   0   0   0   2   0   0   0 ]
  [  0   0   0   0   4   0   1   0 ]

A has minimal polynomial μ = X^6 - 4X^4 - 2X^3 + 4X^2 + 4X + 1, so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is χ = X^8 - X^7 - 5X^6 + 2X^5 + 10X^4 + 2X^3 - 7X^2 - 5X - 1, which is a multiple of the minimal polynomial by a factor X^2 - X - 1. There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors have this property, and in this case the first standard basis vector e_1 does so: the vectors A^k(e_1) for k = 0, 1, ..., 5 are linearly independent and span a cyclic subspace with minimal polynomial μ. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by the vectors v = (3, 4, 8, 0, -1, 0, 2, -1)^T and w = (5, 4, 5, 9, -1, 1, 1, -2)^T is an example. In fact one has A·v = w, so the complementary subspace is a cyclic subspace generated by v; it has minimal polynomial X^2 - X - 1. Since μ is the minimal polynomial of the whole space, it is clear that X^2 - X - 1 must divide μ (and it is easily checked that it does), and we have found the invariant factors X^2 - X - 1 and μ = X^6 - 4X^4 - 2X^3 + 4X^2 + 4X + 1 of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely

C =

  [  0   1   0   0   0   0   0   0 ]
  [  1   1   0   0   0   0   0   0 ]
  [  0   0   0   0   0   0   0  -1 ]
  [  0   0   1   0   0   0   0  -4 ]
  [  0   0   0   1   0   0   0  -4 ]
  [  0   0   0   0   1   0   0   2 ]
  [  0   0   0   0   0   1   0   4 ]
  [  0   0   0   0   0   0   1   0 ]

A basis on which this form is attained is formed by the vectors v, w above, followed by A^k(e_1) for k = 0, 1, ..., 5; explicitly this means that, for


P =

  [  3   5   1  -1   0   0   -4   0 ]
  [  4   4   0  -1  -1  -2   -3  -5 ]
  [  8   5   0  -2  -5  -2  -11  -6 ]
  [  0   9   0  -1   3  -2    0   0 ]
  [ -1  -1   0   0   0   1   -1   4 ]
  [  0   1   0   0   0   0   -1   1 ]
  [  2   1   0   1  -1   0    2  -6 ]
  [ -1  -2   0   0   1  -1    4  -2 ]

one has A = P C P^{-1}.
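The similarity claim of this example can be checked numerically. A sketch assuming numpy, verifying A P = P C (equivalently A = P C P^{-1}, since P is invertible) with the matrices of the example:

```python
import numpy as np

A = np.array([
    [-1,  3, -1,  0, -2,  0,  0, -2],
    [-1, -1,  1,  1, -2, -1,  0, -1],
    [-2, -6,  4,  3, -8, -4, -2,  1],
    [-1,  8, -3, -1,  5,  2,  3, -3],
    [ 0,  0,  0,  0,  0,  0,  0,  1],
    [ 0,  0,  0,  0, -1,  0,  0,  0],
    [ 1,  0,  0,  0,  2,  0,  0,  0],
    [ 0,  0,  0,  0,  4,  0,  1,  0]], dtype=float)

# Rational canonical form: companion(X^2 - X - 1) and companion(mu)
# as diagonal blocks.
C = np.array([
    [0, 1, 0, 0, 0, 0, 0,  0],
    [1, 1, 0, 0, 0, 0, 0,  0],
    [0, 0, 0, 0, 0, 0, 0, -1],
    [0, 0, 1, 0, 0, 0, 0, -4],
    [0, 0, 0, 1, 0, 0, 0, -4],
    [0, 0, 0, 0, 1, 0, 0,  2],
    [0, 0, 0, 0, 0, 1, 0,  4],
    [0, 0, 0, 0, 0, 0, 1,  0]], dtype=float)

# Columns of P: v, w, then e1, A e1, ..., A^5 e1.
P = np.array([
    [ 3,  5, 1, -1,  0,  0,  -4,  0],
    [ 4,  4, 0, -1, -1, -2,  -3, -5],
    [ 8,  5, 0, -2, -5, -2, -11, -6],
    [ 0,  9, 0, -1,  3, -2,   0,  0],
    [-1, -1, 0,  0,  0,  1,  -1,  4],
    [ 0,  1, 0,  0,  0,  0,  -1,  1],
    [ 2,  1, 0,  1, -1,  0,   2, -6],
    [-1, -2, 0,  0,  1, -1,   4, -2]], dtype=float)

assert abs(np.linalg.det(P)) > 1e-9   # P is invertible
assert np.allclose(A @ P, P @ C)      # hence A = P C P^{-1}
```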

8.3 General case and theory

Fix a base field F and a finite-dimensional vector space V over F. Given a polynomial p(x) ∈ F[x], there is associated to it a companion matrix C whose characteristic polynomial is p(x).

Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. Then V (viewed as an F[x]-module with the action of x given by A and extended by linearity) satisfies the F[x]-module isomorphism

V ≅ F[x]/(a_1(x)) ⊕ ... ⊕ F[x]/(a_n(x))

where the a_i(x) ∈ F[x] may be taken to be non-units, are unique as monic polynomials, and can be arranged to satisfy the relation

a_1(x) | ... | a_n(x)

where "a | b" is notation for "a divides b".

Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[x]-module. Note that any free F[x]-module is infinite-dimensional over F, so that the resulting direct sum decomposition has no free part, since V is finite-dimensional. The uniqueness of the invariant factors requires a separate proof that they are determined up to units; then the monic condition ensures that they are uniquely determined. The proof of this latter part is omitted; see [DF] for details.

Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form do not exist over F[x], so the invariant factors a_i(x) as given above must be used instead. These correspond to factors of the minimal polynomial m(x) = a_n(x), which (by the Cayley-Hamilton theorem) itself divides the characteristic polynomial p(x) and in fact has the same roots as p(x), not counting multiplicities. Note in particular that the Theorem asserts that the invariant factors have coefficients in F.

As each invariant factor a_i(x) is a polynomial in F[x], we may associate a corresponding block matrix C_i which is the companion matrix to a_i(x). In particular, each such C_i has its entries in the field F.

Taking the matrix direct sum of these blocks over all the invariant factors yields the rational canonical form of A. When the minimal polynomial is identical to the characteristic polynomial, the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form.
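The invariant factors can be computed without factoring anything, via the determinantal divisors of xI - A: with d_k the monic gcd of all k×k minors of xI - A (and d_0 = 1), the k-th invariant factor is d_k / d_{k-1}. A small sketch assuming sympy (fine for small matrices; practical implementations reduce xI - A to Smith normal form instead):

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x')

def invariant_factors(A):
    """Invariant factors of A via determinantal divisors of x*I - A:
    d_k = gcd of all k-by-k minors, invariant factor a_k = d_k / d_{k-1}."""
    A = sp.Matrix(A)
    n = A.shape[0]
    M = x * sp.eye(n) - A
    d = [sp.Integer(1)]
    for k in range(1, n + 1):
        g = sp.Integer(0)
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = sp.gcd(g, M[list(rows), list(cols)].det())
        d.append(g)
    factors = []
    for k in range(1, n + 1):
        q = sp.cancel(d[k] / d[k - 1])
        factors.append(sp.expand(sp.monic(q, x)) if q.has(x) else sp.Integer(1))
    return factors

# diag(1, 1, 2) has invariant factors 1, (x - 1), (x - 1)(x - 2):
fs = invariant_factors([[1, 0, 0], [0, 1, 0], [0, 0, 2]])
assert fs == [1, x - 1, sp.expand((x - 1) * (x - 2))]
```

Two matrices are then similar exactly when this function returns the same list for both.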

8.4 A rational normal form generalizing the Jordan normal form

The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.


There is another way to define a normal form that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization contains only linear factors (corresponding to eigenvalues). This form[1] is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the lemme des noyaux), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[x]-modules (as is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form, called a generalized Jordan block, in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form

  [ C  0  ...  0 ]
  [ U  C       : ]
  [ :     ...  0 ]
  [ 0  ... U   C ]

where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right-hand corner. For the case of a linear irreducible factor P = x - λ, these blocks are reduced to single entries C = λ and U = 1, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by P^{k-1}(A), where the minimal polynomial of the cyclic module is P^k), and taking as basis

v, A(v), A^2(v), ..., A^{d-1}(v), P(A)(v), A(P(A)(v)), ..., A^{d-1}(P(A)(v)), P^2(A)(v), ..., P^{k-1}(A)(v), ..., A^{d-1}(P^{k-1}(A)(v))

    where d = deg(P).
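For instance, over R with the irreducible factor P = x^2 + 1 and k = 2, the generalized Jordan block stacks two copies of C = companion(x^2 + 1) with U below the diagonal. A quick sympy check (our illustration) that its characteristic polynomial is P^2 while P(M) itself is nonzero, so the minimal polynomial is also P^2:

```python
import sympy as sp

x = sp.symbols('x')

# Generalized Jordan block for P = x^2 + 1 (irreducible over R), k = 2:
# diagonal blocks C = companion(P) = [[0,-1],[1,0]],
# with U (a single 1 in the upper right corner) below the diagonal.
M = sp.Matrix([[0, -1, 0,  0],
               [1,  0, 0,  0],
               [0,  1, 0, -1],
               [0,  0, 1,  0]])

# Characteristic polynomial is P^2 = (x^2 + 1)^2.
assert M.charpoly(x).as_expr() == sp.expand((x**2 + 1)**2)

# P(M) is nonzero but P(M)^2 = 0, so the minimal polynomial is P^2 as well.
PM = M**2 + sp.eye(4)
assert PM != sp.zeros(4)
assert PM**2 == sp.zeros(4)
```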

8.5 See also

Smith normal form

8.6 References

[DF] David S. Dummit and Richard M. Foote. Abstract Algebra. 2nd edition, John Wiley & Sons. pp. 442, 446, 452-458. ISBN 0-471-36857-1.

[1] Phani Bhushan Bhattacharya, Surender Kumar Jain, S. R. Nagpaul, Basic Abstract Algebra, Theorem 5.4, p. 423.

8.7 External links

Rational Canonical Form (MathWorld)

8.7.1 Algorithms

An O(n^3) Algorithm for Frobenius Normal Form
An Algorithm for the Frobenius Normal Form (pdf)
A rational canonical form Algorithm (pdf)

Chapter 9

    Fusion frame

In mathematics, a fusion frame of a vector space is a natural extension of a frame. It is an additive construct of several, potentially overlapping, frames. The motivation for this concept arises when a signal cannot be acquired by a single sensor alone (a constraint imposed by limitations of hardware or data throughput); rather, the partial components of the signal must be collected via a network of sensors, and the partial signal representations are then fused into the complete signal.

By construction, fusion frames easily lend themselves to parallel or distributed processing[1] of sensor networks consisting of arbitrarily overlapping sensor fields.

9.1 Definition

Given a Hilbert space H, let {W_i}_{i∈I} be closed subspaces of H, where I is an index set. Let {v_i}_{i∈I} be a set of scalar weights. Then {W_i, v_i}_{i∈I} is a fusion frame of H if there exist constants 0 < A ≤ B such that for all f ∈ H we have

A ||f||^2 ≤ Σ_{i∈I} v_i^2 ||P_{W_i} f||^2 ≤ B ||f||^2,

where P_{W_i} denotes the orthogonal projection onto the subspace W_i.
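As a toy example (our illustration, assuming numpy), the three coordinate planes of R^3 with unit weights form a tight fusion frame with A = B = 2, since every coordinate of f appears in exactly two of the projections:

```python
import numpy as np

# Subspaces W1 = span(e1,e2), W2 = span(e2,e3), W3 = span(e3,e1) of R^3,
# all with weight v_i = 1; P_i are the orthogonal projections onto W_i.
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 1.0, 1.0])
P3 = np.diag([1.0, 0.0, 1.0])

rng = np.random.default_rng(0)
for _ in range(5):
    f = rng.normal(size=3)
    total = sum(np.linalg.norm(P @ f)**2 for P in (P1, P2, P3))
    # Tight fusion frame: the sum equals 2 * ||f||^2 for every f.
    assert np.isclose(total, 2 * np.linalg.norm(f)**2)
```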

9.2 Local frame representation

Let W ⊆ H be a closed subspace, and let {x_n} be an orthonormal basis of W. Then for all f ∈ H, the orthogonal projection of f onto W is given by

P_W f = Σ_n ⟨f, x_n⟩ x_n.[2]

9.3 Fusion frame operator

For finite frames (i.e., dim H =: N < ∞ and |I| < ∞), the fusion frame operator can be constructed with a matrix.[1] Let {W_i, v_i}_{i∈I} be a fusion frame for H^N, and let {f_ij}_{j∈J_i} be a frame for the subspace W_i and J_i an index set for each i ∈ I. With

F_i = [ f_i1  f_i2  ...  f_i|J_i| ],  an N × |J_i| matrix whose columns are the frame vectors of W_i,

and

F̃_i = [ f̃_i1  f̃_i2  ...  f̃_i|J_i| ],  likewise N × |J_i|,



where f̃_ij is the canonical dual frame of f_ij, the fusion frame operator S : H → H is given by

S = Σ_{i∈I} v_i^2 F_i F̃_i^T.

The fusion frame operator S is then given by an N × N matrix.
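When each subspace carries an orthonormal frame, the canonical dual frame coincides with the frame itself, so S reduces to Σ_i v_i^2 P_{W_i}. A small numpy sketch (our illustration) using the three coordinate planes of R^3 with unit weights:

```python
import numpy as np

# Orthonormal frames for W1 = span(e1,e2), W2 = span(e2,e3), W3 = span(e3,e1):
# columns of each F_i are the frame vectors; dual frame = frame itself here.
F1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
F2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
F3 = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]])

weights = [1.0, 1.0, 1.0]

# Fusion frame operator S = sum_i v_i^2 F_i F~_i^T (here F~_i = F_i).
S = sum(v**2 * F @ F.T for v, F in zip(weights, (F1, F2, F3)))

# Each F_i F_i^T is the orthogonal projection onto W_i, so S = 2 * I,
# matching the tight fusion frame bounds A = B = 2.
assert np.allclose(S, 2 * np.eye(3))
```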

9.4 References

[1] Casazza, Peter G.; Kutyniok, Gitta; Li, Shidong (2008). "Fusion frames and distributed processing". Applied and Computational Harmonic Analysis 25 (1): 114-132. doi:10.1016/j.acha.2007.10.001.

[2] Christensen, Ole (2003). An Introduction to Frames and Riesz Bases. Boston: Birkhäuser. p. 8. ISBN 0817642951.

9.5 External links

Fusion Frames

9.6 See also

Hilbert space
