Function (Vector) Spaces


Vector spaces are one of the most fundamental and important algebraic structures, used far beyond math and physics. This algebraic structure arises in countless real-world problems and has therefore been studied for centuries.

In this post, we study specific vector spaces whose vectors are not tuples but functions. This raises several challenges since general function spaces are infinite dimensional, and concepts like basis and linear independence must be reconsidered. We will, however, focus on the mechanics of a function space without diving too deep into the realm of infinite-dimensional vector spaces and their specifics.

The branch of math that studies function spaces is called functional analysis. For those who have no exposure to functional analysis, the introduction series to functional analysis provided by The Bright Side of Mathematics on YouTube or one of the books from the literature might help to get you up to speed.

Recap of Vector Spaces

Let us start with the two pivotal definitions of this post.

Definition 1.1 (Vector Space)
Let \mathbb{F}\in \{ \mathbb{C}, \mathbb{R} \} be a field. An \mathbb{F}-vector space is a set V of vectors along with an operation, called addition,

    \begin{align*} +: V \times V \rightarrow V \end{align*}

and a function, called scalar multiplication,

    \begin{align*} \cdot: \mathbb{F} \times V \rightarrow V \end{align*}

such that the following axioms are fulfilled.

(i) Addition: (V, +) is an Abelian group. That is,
(a) (u+v)+w=u+(v+w) for all u, v, w\in V
(b) \exists 0\in V such that 0+v=v+0=v for all v\in V
(c) \forall v\in V \exists -v\in V such that -v+v=v+(-v)=0
(d) \forall v,w\in V: v+w=w+v

(ii) Compatibility of scalar multiplication with field multiplication:
(ab)\cdot v = a\cdot (b\cdot v) for all a,b\in \mathbb{F} and all v\in V.

(iii) Distributive property:
a \cdot (v_1+v_2) = a \cdot v_1 + a \cdot v_2 and
(a+b)\cdot v = av+bv for all a,b\in \mathbb{F} and all v,v_1, v_2\in V.

The elements of \mathbb{F} are called scalars and the elements of V are called vectors.


Every vector space consists of a field \mathbb{F} and an Abelian group (V,+). These two mathematical structures are entangled with each other via (ii) and (iii).

There are two additions defined: one on the scalar field \mathbb{F} and another on the set of vectors V. Accordingly, there are also two corresponding null elements – the null vector and the null element of the field. We nonetheless use the same symbol + for both additions since it is clear from context which one is meant. The same convention applies to similar situations: for instance, there is little danger that the zero scalar can be confused with the zero vector, so no attempt is made to distinguish them.

Let us first consider classical vector space examples before diving into function spaces.

Example 1.1 (Real finite-dimensional vector space \mathbb{R}^d)
Let V:=\mathbb{R}^d, d\in \mathbb{N}, be the set of d-tuples with real entries and \mathbb{F}:=\mathbb{R}. For a\in \mathbb{R} as well as x,y\in \mathbb{R}^d with x=(x_1, \ldots, x_d) and y=(y_1, \ldots, y_d), let us define the addition and scalar multiplication as follows.

    \begin{align*} x +  y &:= (x_1+y_1, \ldots, x_d+y_d),\\ a\cdot x &:= (ax_1,  \ldots, ax_d). \end{align*}

The axioms follow almost immediately from the properties of \mathbb{R} and the definitions of vector addition and scalar multiplication. The additive inverse of a vector x is simply -x since x+(-x)=(x_1-x_1, \ldots, x_d-x_d)=(0, \ldots, 0), and the null vector is 0:=\underbrace{(0, \ldots, 0)}_{d \text{ entries}}.

Requirements (ii) and (iii) can also be shown using the properties of \mathbb{R} together with the definitions of the operations. Let us prove (ii), (ab)\cdot v=a \cdot (b \cdot v), for instance. By definition, the following holds true:

    \begin{align*} (ab) \cdot v &= (ab v_1, \ldots, ab v_d) \\              &= a \cdot (b v_1, \ldots, b v_d) \\              &= a \cdot (b  \cdot (v_1, \ldots, v_d))\\              &= a \cdot (b  \cdot v) \end{align*}

for all a,b\in \mathbb{R} and v\in V.
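The componentwise definitions above translate directly into code. The following Python sketch (the language and the helper names vec_add and scalar_mul are illustrative choices, not part of the example) checks axiom (ii) for sample values:

```python
def vec_add(x, y):
    """Componentwise addition in R^d, as defined in Example 1.1."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scalar_mul(a, x):
    """Scalar multiplication a * x in R^d."""
    return tuple(a * xi for xi in x)

# Check compatibility (ii): (ab) * v == a * (b * v)
a, b = 2.0, -3.5
v = (1.0, 4.0, 0.5)
assert scalar_mul(a * b, v) == scalar_mul(a, scalar_mul(b, v))
```

As in the proof, the assertion holds because multiplication in \mathbb{R} is associative in each component.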


The last example is the prototype to keep in mind for finite-dimensional vector spaces. The focus of this post is, however, on function spaces and their algebraic structure. We are particularly interested in function spaces that are (infinite-dimensional) vector spaces.

Functions are employed everywhere in mathematics. Let us also recap their definition.

Definition 1.2 (Function)
A function from a set X to a set Y is an assignment of an element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain or range of the function.


The important feature of a function is that every element of X needs to be assigned to exactly one element of Y.

Example 1.2: (Homomorphism Function Space)
If V is a vector space, then so is the set of functions V^X:= \{f:X \rightarrow V\} for any set X with

    \begin{align*} (f+g)(x) &:= f(x)+g(x),\\ (\lambda f)(x) &:= \lambda \cdot f(x). \end{align*}

The zero of V^X is 0(x) :=0, and the negatives are (-f)(x):= - f(x). The sum of two functions f, g\in V^X is defined by (f+g)(x)=f(x)+g(x); since f(x) and g(x) are two vectors in V, their sum is again a vector in V. Hence, by using the properties of the given objects it is not too hard to show that the set V^X is indeed a vector space. Also refer to Function space in Linear Algebra as well as to Dualraum (in German). In the latter source, the more general function space Hom(V, W) is introduced.
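The pointwise operations can be mimicked with plain callables. The following Python sketch (names such as f_add, f_scale, zero and neg are my own, and X = V = \mathbb{R} is a simplifying choice) illustrates the structure of V^X:

```python
def f_add(f, g):
    """Pointwise sum: (f+g)(x) := f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(lam, f):
    """Pointwise scalar multiple: (lam*f)(x) := lam * f(x)."""
    return lambda x: lam * f(x)

zero = lambda x: 0.0                # the zero vector of V^X
neg = lambda f: (lambda x: -f(x))   # additive inverse (-f)(x) := -f(x)

f = lambda x: x ** 2
g = lambda x: 3.0 * x
h = f_add(f, g)                     # h(x) = x^2 + 3x
assert h(2.0) == 10.0
assert f_add(f, neg(f))(5.0) == zero(5.0)
```

Note that the result of f_add and f_scale is again a callable, mirroring the closure of V^X under the two operations.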

Let us now have another glimpse at a (finite-dimensional) function vector space.

Example 1.3: (Polynomials of finite degree)
The space of all polynomials with degree less than or equal to n\in \mathbb{N} is a finite-dimensional vector space as outlined by Dr. Trefor Bazett. Also check out the following video.

Very illustrative and well-explained introduction to vector space incl. polynomial spaces

Another video outlines how the concepts of linear independence, span, etc. work in finite-dimensional spaces of polynomials.

How to think about polynomials as vectors?


Infinite Dimensional Function Spaces

Let us start with the most general case.

Consider the set F[a,b] of all functions f:[a,b] \rightarrow \mathbb{F}, where a,b\in \mathbb{R} \cup \{\pm \infty\} and \mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}. We define the following operations on F[a,b].

(1)   \begin{align*}  f+g:[a,b] &\rightarrow \mathbb{F}, \text{ defined by } \\ (f+g)(x) &:= f(x)+g(x) \quad \forall x\in [a,b] \\ \lambda f:[a,b] &\rightarrow \mathbb{F}, \text{ defined by } \\ (\lambda f)(x) &:= \lambda f(x) \quad \forall x\in [a,b]  \end{align*}

We can check that all the axioms of a vector space are fulfilled. Note that a could also be set to -\infty and b to +\infty.

Example 2.1 (Set of all \mathbb{R}-valued functions on [a,b])
Let us check that the operations as defined in (1) are closed. If \lambda_1, \lambda_2\in \mathbb{R} and f_1,f_2\in F[a,b], then \lambda_1 f_1(x)+\lambda_2 f_2(x) \in F[a,b] is still a function on the same interval.

Since the associative property holds true in \mathbb{R}, we have (f_1(x)+f_2(x))+f_3(x) = f_1(x)+(f_2(x)+f_3(x)) for every x\in[a,b]. Applying definition (1) to both sides then yields ((f_1+f_2)+f_3)(x)=(f_1+(f_2+f_3))(x).

Hence, (f_1 + f_2) + f_3 = f_1 + (f_2+f_3) for all f_1, f_2, f_3\in F[a,b] holds true.

The null vector is the null function 0:[a,b] \rightarrow \mathbb{R} mapping every element to the zero element of \mathbb{R}. Again, just apply definition (1) together with the properties of \mathbb{R}.

The additive inverse of f is the function -f defined by (-f)(x):=-f(x), such that f+(-f) = 0 = (-f)+f for all f\in F[a,b].

Furthermore, (f_1+f_2)(x)=f_1(x)+f_2(x)=f_2(x)+f_1(x)=(f_2+f_1)(x) holds true for all f_1, f_2\in F[a,b], so the addition is commutative.

The set of all functions F[a,b] on the interval [a,b] is therefore an Abelian group. The other axioms can be proved similarly by just using the properties of the field \mathbb{R} and the corresponding definition (1).

Thus, the set of all functions F[a,b] is a vector space.


Due to its generality, the last example usually plays only a minor role. However, if we restrict the set F[a,b] and thereby add more structure and corresponding properties, the situation changes drastically.

Each of the following sets of functions together with the operations (1) forms an interesting and useful vector space. These function spaces are used heavily, for instance, in Approximation Theory and Functional Analysis:

  • Set of all continuous real functions C[a,b] defined on an interval [a,b];
  • Set of all polynomials \mathbb{R}[X] defined on the reals;
  • Set of all differentiable functions D[a,b] defined on an interval [a,b];
  • Set of all integrable functions Int[a,b] defined on an interval [a,b];
  • Set of all differentiable functions that are solutions of a differential equation of the form af''(x)+bf'(x)+cf(x)=0.

Let us consider two concrete examples in a bit more detail.

Example 2.2 (Polynomial & Continuous Function Space)
The space of all polynomials with one variable over the rationals \mathbb{Q}[X] or the reals \mathbb{R}[X] as well as the space of all continuous real functions C[a, b] defined on [a, b], a, b\in \mathbb{R}, are infinite dimensional vector spaces.

Note that all arguments used in Example 2.1 can be re-used except the closure argument. That is, we only need to show that the sum of two elements as well as scalar multiples lie again in the corresponding function space.

The sum of two continuous real functions defined on [a,b] is again a continuous real function on the same domain. This can be proved by using, for instance, the characterization of continuity via limits. The closure under scalar multiplication can be shown in a similar way.
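The limit argument can be sketched as follows: for f, g\in C[a,b] and x_0\in [a,b], the limit laws give

    \begin{align*} \lim_{x\to x_0}{(f+g)(x)} = \lim_{x\to x_0}{f(x)} + \lim_{x\to x_0}{g(x)} = f(x_0)+g(x_0) = (f+g)(x_0), \end{align*}

so f+g is continuous at every x_0\in [a,b]. The same reasoning applied to \lim_{x\to x_0}{(\lambda f)(x)} = \lambda f(x_0) yields closure under scalar multiplication.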

The addition of two polynomials P_1 and P_2 is again a polynomial, of degree at most \max\{ \text{deg}(P_1), \text{deg}(P_2) \} (leading coefficients might cancel). This becomes obvious if you consider a polynomial as the sequence of its coefficients.

The vector space is infinite dimensional since \mathbb{R}[X] contains polynomials of arbitrary degree. That is, you can find a set of polynomials such as \{1, X, X^2, X^3, \ldots\} that is linearly independent and generates the entire vector space \mathbb{R}[X] (i.e. it is an infinite basis).
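Viewing a polynomial as the sequence of its coefficients makes the closure argument concrete. The Python sketch below (the helper names poly_add and degree are my own) adds coefficient lists and shows that the degree of a sum never exceeds the maximum degree of the summands:

```python
from itertools import zip_longest

def poly_add(p, q):
    """Add two polynomials given as coefficient lists [a0, a1, a2, ...]."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def degree(p):
    """Degree of a polynomial; -1 is used here for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

p1 = [1, 0, 2]        # 1 + 2x^2
p2 = [0, 3]           # 3x
s = poly_add(p1, p2)  # 1 + 3x + 2x^2
assert degree(s) <= max(degree(p1), degree(p2))

# The degree can drop: leading coefficients may cancel.
assert degree(poly_add([0, 0, 1], [0, 0, -1])) == -1
```

The second assertion is the reason the claim above says "at most": x^2 + (-x^2) is the zero polynomial.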


Let us have an illustrated overview of a collection of the most important facts about the beautiful interplay between linear algebra and analysis, provided by Dr. Trefor Bazett.

Good outline how to think about function spaces.

It is usually helpful to have an intuitive understanding of a mathematical object such as a function space.

Norm / Length of a Function

Inner products and norms enable us to define and apply geometrical terms such as length, distance and angle. These concepts are very intuitive in Euclidean space; but what does the length of a function mean?

Actually, the definition of a norm on a function space is quite similar to the one in Euclidean space.

Definition 3.1 (L_p norms on function spaces)
The L_p norm, p\in [1, \infty), for functions f:[a,b] \rightarrow \mathbb{R} is defined by

(2)   \begin{align*}  ||f||_p := \sqrt[p]{\int_{a}^{b}{|f(x)|^p \ dx}}. \end{align*}


An L_p norm can be thought of as the length of f. Let us illustrate the L_1 norm for functions in the following example.

Example 3.1 (L_1 norm for functions)
The L_1 norm on function spaces is defined by

(3)   \begin{align*}  ||f||_1 := \int_{a}^{b}{|f(x)| \ dx}. \end{align*}

for any continuous function f:[a, b] \rightarrow \mathbb{R}; a continuous and bounded function is guaranteed to be integrable. Note, however, that for general integrable functions ||f||_1=0 does not imply f=0 but only f=0 almost everywhere. The failure of this norm axiom can be overcome by defining equivalence classes [f]. Nonetheless, we are going to use the notation f while keeping in mind that for the L_1 norm it actually stands for an equivalence class. Refer to Remark 2.23 in [1] for further details.

Let us now look at a simple example to better understand the heuristic. To this end, we set [a,b]:=[0,1] and consider the constant function x\mapsto c with c\in \mathbb{R}.

    \begin{align*} ||c||_1 = \int_{0}^{1}{|c(x)| \ dx} = |c| \in \mathbb{R}. \end{align*}

So the length of a constant function on [0,1] using the L_1 norm is exactly |c|. The length of the function x\mapsto x^n with n\in \mathbb{N} is

    \begin{align*} ||x^n||_1 = \int_{0}^{1}{|x^n| \ dx} = \int_{0}^{1}{x^n \ dx} = \frac{1}{n+1} x^{n+1} {\Big\vert}_{0}^{1} = \frac{1}{n+1}. \end{align*}

Evidently, the length of the function can be associated with the area under the graph of |f|. Hence, if we extend the integral to [a,b]:=[-1,1], we get for the identity function x\mapsto x the following:

    \begin{align*} ||x||_1 = \int_{-1}^{1}{|x| \ dx} = 2\int_{0}^{1}{x \ dx} = x^2 {\Big\vert}_{0}^{1} = 1. \end{align*}
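These integrals can also be sanity-checked numerically. The following Python sketch (a midpoint Riemann sum; the helper name l1_norm and the partition size are my choices) approximates the L_1 norms computed above:

```python
def l1_norm(f, a, b, n=100_000):
    """Approximate the L_1 norm  int_a^b |f(x)| dx  by a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h)) for i in range(n)) * h

# ||x^n||_1 on [0,1] should be close to 1/(n+1), e.g. 1/4 for n = 3
assert abs(l1_norm(lambda x: x ** 3, 0.0, 1.0) - 0.25) < 1e-6
# ||x||_1 on [-1,1] should be close to 1
assert abs(l1_norm(lambda x: x, -1.0, 1.0) - 1.0) < 1e-6
```

The approximation error shrinks with the partition size n, matching the closed-form values from the example.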


The approximation of functions is a crucial technique used, for instance, in Analysis, Numerical Mathematics and Computer Science. Just think about how continuous functions such as \log are evaluated on the computer you are using.

If we need to approximate a function f by a function g, it is therefore of great importance to have a measure of “closeness” between these two functions. We can simply take definition (2) and slightly modify it to obtain a suitable measure of closeness between two suitable functions.

(4)   \begin{align*}  ||f-g||_p := \sqrt[p]{\int_{a}^{b}{|(f-g)(x)|^p \ dx}}. \end{align*}
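Formula (4) can be turned into a rough numerical closeness measure. The Python sketch below (the helper name lp_distance and the sample functions are illustrative choices) compares \sin with its linear approximation x\mapsto x on a small and on a larger interval:

```python
import math

def lp_distance(f, g, a, b, p=2, n=100_000):
    """Approximate ||f - g||_p = (int_a^b |f(x)-g(x)|^p dx)^(1/p)
    with a midpoint Riemann sum."""
    h = (b - a) / n
    s = sum(abs(f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h)) ** p
            for i in range(n)) * h
    return s ** (1.0 / p)

# sin(x) is close to x near 0, so the L_2 distance on [-0.1, 0.1] is tiny ...
near = lp_distance(math.sin, lambda x: x, -0.1, 0.1)
# ... and considerably larger on [-2, 2]
far = lp_distance(math.sin, lambda x: x, -2.0, 2.0)
assert near < far
```

This is exactly the sense in which an approximation g is judged "good": the smaller ||f-g||_p, the closer g is to f in the L_p sense.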

Please also refer to the following video for a nice illustration of these concepts.

Illustration of L_p norms for function spaces

In functional analysis, where Banach spaces are studied, the properties of metrics and norms are of utmost importance. For instance, the completeness of these function spaces plays a crucial role, as outlined in Banach and Fréchet spaces of functions.



Literature

Muscat, J. (2014) Functional Analysis: An Introduction to Metric Spaces, Hilbert Spaces, and Banach Algebras. 1st ed. Cham: Springer International Publishing.

Bauer, H. (1992) Maß und Integrationstheorie. 2nd rev. ed. Berlin: W. de Gruyter (De Gruyter Lehrbuch).

Kadets, V. (2018) A Course in Functional Analysis and Measure Theory. 1st ed. Cham: Springer International Publishing (Universitext).

Farenick, D. (2016) Fundamentals of Functional Analysis. New York, NY: Springer Science+Business Media.