
Properties


Two vectors u and v in a Hilbert space H are orthogonal when ⟨u, v⟩ = 0. The notation for this is u ⊥ v. More generally, when S is a subset of H, the notation u ⊥ S means that u is orthogonal to every element of S. When u and v are orthogonal,

$$\|u+v\|^{2}=\langle u+v,u+v\rangle =\langle u,u\rangle +2\,\operatorname {Re} \langle u,v\rangle +\langle v,v\rangle =\|u\|^{2}+\|v\|^{2}.$$

By induction on n, this extends to any family u_{1}, ..., u_{n} of n orthogonal vectors:

$$\left\|u_{1}+\cdots +u_{n}\right\|^{2}=\left\|u_{1}\right\|^{2}+\cdots +\left\|u_{n}\right\|^{2}.$$

Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required to extend it to series. A series Σu_{k} of orthogonal vectors converges in H if and only if the series of squares of norms converges, and

$$\Biggl\|\sum _{k=0}^{\infty }u_{k}\Biggr\|^{2}=\sum _{k=0}^{\infty }\left\|u_{k}\right\|^{2}.$$

Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken.

By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds:

$$\|u+v\|^{2}+\|u-v\|^{2}=2{\bigl (}\|u\|^{2}+\|v\|^{2}{\bigr )}.$$

Geometrically, for a parallelogram ABCD this asserts that AC^{2} + BD^{2} = 2(AB^{2} + AD^{2}): the sum of the squares of the diagonals is twice the sum of the squares of any two adjacent sides. Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm through the polarization identity. For real Hilbert spaces, the polarization identity is

$$\langle u,v\rangle ={\tfrac {1}{4}}{\bigl (}\|u+v\|^{2}-\|u-v\|^{2}{\bigr )}.$$
For complex Hilbert spaces, it is

$$\langle u,v\rangle ={\tfrac {1}{4}}{\bigl (}\|u+v\|^{2}-\|u-v\|^{2}+i\|u+iv\|^{2}-i\|u-iv\|^{2}{\bigr )}.$$

The parallelogram law implies that every Hilbert space is a uniformly convex Banach space.

The best approximation results below rely on the Hilbert projection theorem. If C is a non-empty closed convex subset of a Hilbert space H and x a point in H, there exists a unique point y ∈ C that minimizes the distance between x and points in C:

$$y\in C\,,\quad \|x-y\|=\operatorname {dist} (x,C)=\min {\bigl \{}\|x-z\|\mathrel {\big |} z\in C{\bigr \}}.$$

This is equivalent to saying that there is a point of minimal norm in the translated convex set D = C − x. The proof consists in showing that every minimizing sequence (d_{n}) ⊂ D is Cauchy (using the parallelogram identity), hence converges (using completeness) to a point of D with minimal norm. More generally, this holds in any uniformly convex Banach space.

When this result is applied to a closed subspace F of H, the point y ∈ F closest to x is characterized by

$$y\in F\,,\quad x-y\perp F.$$

This point y is the orthogonal projection of x onto F, and the mapping P_{F} : x ↦ y is linear (see § Orthogonal complements and projections). This result is especially significant in applied mathematics, notably numerical analysis, where it forms the basis of least squares methods. In particular, when F is not equal to H, one can find a nonzero vector v orthogonal to F (select x ∉ F and set v = x − y). A very useful criterion is obtained by applying this observation to the closed subspace F generated by a subset S of H: S spans a dense vector subspace if (and only if) the vector 0 is the sole vector v ∈ H orthogonal to S.

The dual space H* is the space of all continuous linear functions from the space H into the base field.
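The characterization of the closest point in a closed subspace, y ∈ F with x − y ⊥ F, is exactly what underlies least squares methods. The following is a minimal numerical sketch in a finite-dimensional space, assuming NumPy; the dimensions, basis, and random seed are arbitrary choices for illustration:

```python
import numpy as np

# Projection of x onto the subspace F spanned by the columns of A (in R^5).
# The least-squares minimizer of ||x - A c|| gives the closest point of F.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))   # basis vectors of F, as columns
x = rng.standard_normal(5)

c, *_ = np.linalg.lstsq(A, x, rcond=None)  # coefficients of the closest point
y = A @ c                                  # y = P_F x, the closest point in F

# Characterization of the projection: the residual x - y is orthogonal to F,
# i.e. orthogonal to every basis vector (every column of A).
assert np.allclose(A.T @ (x - y), 0.0)

# y minimizes the distance: any other point of F is at least as far from x.
z = A @ rng.standard_normal(2)
assert np.linalg.norm(x - y) <= np.linalg.norm(x - z)
print("projection verified")
```

Here `np.linalg.lstsq` plays the role of the projection P_F, and the orthogonality of the residual to every basis vector of F is the finite-dimensional form of x − y ⊥ F.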
The dual space carries a natural norm, defined by

$$\|\varphi \|=\sup _{\|x\|=1,\,x\in H}|\varphi (x)|.$$

This norm satisfies the parallelogram law, so the dual space is also an inner product space, where the inner product can be defined in terms of the dual norm by the polarization identity. The dual space is also complete, so it is a Hilbert space in its own right. If e_{•} = (e_{i})_{i ∈ I} is a complete orthonormal basis for H, then the inner product of any two f, g ∈ H* is

$$\langle f,g\rangle _{H^{*}}=\sum _{i\in I}f(e_{i}){\overline {g(e_{i})}},$$

where all but countably many of the terms in this series are zero.

The Riesz representation theorem affords a convenient description of the dual space. To every element u of H there corresponds a unique element φ_{u} of H*, defined by

$$\varphi _{u}(x)=\langle x,u\rangle,$$

and moreover ‖φ_{u}‖ = ‖u‖. The Riesz representation theorem states that the map from H to H* defined by u ↦ φ_{u} is surjective, which makes this map an isometric antilinear isomorphism. So to every element φ of the dual H* there exists one and only one u_{φ} in H such that ⟨x, u_{φ}⟩ = φ(x) for all x ∈ H. The inner product on the dual space H* satisfies

$$\langle \varphi ,\psi \rangle =\langle u_{\psi },u_{\varphi }\rangle.$$

The reversal of order on the right-hand side restores linearity in φ from the antilinearity of u_{φ}. In the real case, the antilinear isomorphism from H to its dual is actually an isomorphism, so real Hilbert spaces are naturally isomorphic to their own duals.

The representing vector u_{φ} is obtained in the following way. When φ ≠ 0, the kernel F = Ker(φ) is a closed vector subspace of H, not equal to H, hence there exists a nonzero vector v orthogonal to F. The vector u is a suitable scalar multiple λv of v.
The requirement that φ(v) = ⟨v, u⟩ yields

$$u=\langle v,v\rangle ^{-1}\,{\overline {\varphi (v)}}\,v.$$

This correspondence φ ↔ u is exploited by the bra–ket notation popular in physics, where it is common to assume that the inner product, denoted ⟨x|y⟩, is linear on the right:

$$\langle x|y\rangle =\langle y,x\rangle.$$

The result ⟨x|y⟩ can then be seen as the action of the linear functional ⟨x| (the bra) on the vector |y⟩ (the ket).

The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space H is reflexive, meaning that the natural map from H into its double dual space is an isomorphism.

In a Hilbert space H, a sequence {x_{n}} is weakly convergent to a vector x ∈ H when

$$\lim _{n}\langle x_{n},v\rangle =\langle x,v\rangle$$

for every v ∈ H. For example, any orthonormal sequence {f_{n}} converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence {x_{n}} is bounded, by the uniform boundedness principle. Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem). This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on R^{d}. Among several variants, one simple statement is as follows: if f : H → R is a convex continuous function such that f(x) tends to +∞ when ‖x‖ tends to ∞, then f admits a minimum at some point x_{0} ∈ H. This fact (and its various generalizations) is fundamental for direct methods in the calculus of variations.
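The construction of the representing vector u_{φ} described above (choose v orthogonal to the kernel of φ, then rescale) can be carried out concretely in finite dimensions. A sketch assuming NumPy, in the real case R^4 so the complex conjugate in the formula is vacuous; the functional, dimension, and seed are arbitrary illustrative choices:

```python
import numpy as np

# Pick v orthogonal to F = Ker(phi), then set u = <v, v>^{-1} * phi(v) * v.
rng = np.random.default_rng(3)
a = rng.standard_normal(4)
phi = lambda x: np.dot(a, x)            # a nonzero linear functional on R^4

# An orthonormal basis of Ker(phi): the right singular vectors of the
# 1x4 matrix [a] associated with zero singular values.
_, _, Vt = np.linalg.svd(a.reshape(1, -1))
K = Vt[1:].T                            # columns span the 3-dim kernel

# Take x0 outside the kernel and remove its kernel component, so v ⊥ F.
x0 = rng.standard_normal(4)
v = x0 - K @ (K.T @ x0)
assert not np.isclose(phi(v), 0.0)      # v is not in the kernel

u = (phi(v) / np.dot(v, v)) * v         # the representing vector

x = rng.standard_normal(4)
assert np.isclose(phi(x), np.dot(x, u))  # phi(x) = <x, u> for all x
print("representing vector constructed")
```

The final assertion is the content of the Riesz representation theorem in this finite-dimensional setting: the functional φ acts as the inner product against the single vector u.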
Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets of a Hilbert space H are weakly compact, since H is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem.

Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping, meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem: a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces. The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set. In the case of Hilbert spaces, this is basic in the study of unbounded operators (see Closed operator).

The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if y is the element of a closed convex set F closest to x, then the separating hyperplane is the plane perpendicular to the segment xy passing through its midpoint.
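The geometric separation just described (the hyperplane perpendicular to the segment xy through its midpoint) can be made concrete in the plane, taking the convex set to be the closed unit disc. A sketch assuming NumPy; the outside point and the spot-check sample are arbitrary illustrative choices:

```python
import numpy as np

# Separate the closed unit disc C from an outside point x by the hyperplane
# perpendicular to the segment [x, y] through its midpoint, where y is the
# point of C closest to x.
x = np.array([3.0, 4.0])                  # point outside the disc (||x|| = 5)
y = x / np.linalg.norm(x)                 # closest point of C: radial projection
n = x - y                                 # normal of the separating hyperplane
m = (x + y) / 2                           # midpoint of the segment [x, y]

# The hyperplane is {z : <n, z - m> = 0}.  x lies strictly on one side:
assert np.dot(n, x - m) > 0

# Every point of the disc lies strictly on the other side (spot-check a sample):
rng = np.random.default_rng(4)
pts = rng.uniform(-1, 1, size=(1000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1]   # keep points inside the disc
assert np.all((pts - m) @ n < 0)
print("hyperplane separates the disc from x")
```

The spot-check only samples the disc, but in this example the strict separation holds for every point p with ‖p‖ ≤ 1, since ⟨n, p⟩ ≤ ‖n‖ < ⟨n, m⟩.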