5
$\begingroup$

If you pick a random vector in $\mathbb{R}^n$ with some fixed basis, there is no special relationship between components: the relationship between the $1^{st}$ component and the $5^{th}$ component is the same as the relationship between the $82^{nd}$ component and the $1001^{st}$ component.

On the other hand, if the space $\mathbb{R}^n$ is viewed as a discretization of a function space (e.g., $n$ nodal values for a piecewise linear basis of hat functions), then there is a special relationship between components based on nearness in the underlying domain. If two nodes are close in physical space, then the basis vectors corresponding to those nodes are more strongly related in the function space.
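To make this intuition concrete, here is a minimal numerical sketch; the squared-exponential covariance and the length scale are illustrative assumptions, not part of the question:

```python
import numpy as np

# n nodes of a uniform 1-D grid on [0, 1], as in a hat-function discretization
n = 101
x = np.linspace(0.0, 1.0, n)

# Squared-exponential covariance (an assumed choice): components at nearby
# nodes are strongly correlated, distant ones nearly independent
ell = 0.1  # length scale, chosen for illustration
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))

# Correlation decays with physical distance between nodes
print(K[0, 1])   # neighbouring nodes: correlation near 1
print(K[0, 50])  # nodes half the domain apart: correlation near 0
```

For a generic random vector in $\mathbb{R}^n$ the analogous covariance would be the identity, with no off-diagonal structure at all.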

So, somehow $\mathbb{R}^n$ as a function space carries more structure than $\mathbb{R}^n$ generically. What is this difference, and how can it be made precise?

My thoughts so far are as follows: this seems similar to the idea of function space regularity (the more regular the space, the more nearby points are "related" to each other). However, I don't think this is the whole picture, since one could also imagine defining additional structure on the space of functions on an $n$-node graph, $\{f:G\rightarrow\mathbb{R}\}$, where there is no native notion of continuity, differentiability, etc.

  • 3
    "If 2 nodes are close in physical space, then the basis vectors corresponding to those nodes are more highly related in the function space." Er. Why? – 2012-08-06
  • 0
    I don't really understand your example, but I think the additional structure you're looking at is a probability measure (so it's not just $\mathbb{R}^n$, but $\mathbb{R}^n$ with a given probability measure). – 2012-08-06
  • 0
    Anyway, one thing you can do is talk about Lipschitz functions (http://en.wikipedia.org/wiki/Lipschitz_continuity) with a _fixed_ Lipschitz constant. These don't form a vector space, but they capture the intuitive behavior you seem to want (although I don't know what you're using this for). – 2012-08-06
  • 0
    @JoelCohen, Yeah, the most natural choice is perhaps $\mu \sim \exp(-\|x\|^2)$, where the norm is the function space norm (a Sobolev or Besov norm, for example, or a Lipschitz norm if you want). Not sure how to extend the idea to function spaces over a graph, though. – 2012-08-06
  • 1
    These kinds of ideas are made precise in the theory of RKHS (https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space), in particular Gaussian processes, which would coincide with the suggestion of @JoelCohen. – 2015-09-24
  • 0
    @JuanPi Thanks, this is exactly what I'm looking for. Can you make that an answer rather than a comment so I can accept it? If I understand correctly, I could apply this to a Sobolev space by taking an inverse power of the Laplacian as the reproducing kernel, and for a graph, an inverse power of the graph Laplacian. – 2015-09-26

1 Answer

1

As you said, functions define relations between the different "dimensions" of the space. This idea is made explicit via correlation functions in Gaussian processes (this is an excellent non-engineering introduction), which can be framed in the theory of Reproducing Kernel Hilbert Spaces (RKHS).
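A minimal sketch of the Gaussian process view, assuming a squared-exponential (RBF) kernel and an arbitrary length scale: a random vector whose components are tied together by the kernel looks like a smooth function, unlike an i.i.d. Gaussian vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nodes and a squared-exponential kernel (kernel and scale are assumptions)
n = 200
x = np.linspace(0.0, 1.0, n)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05**2))

# Draw a sample u ~ N(0, K): a random vector in R^n whose components are
# correlated by the kernel, so plotted against x it looks like a smooth
# function (the small jitter keeps the Cholesky factorization stable)
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))
u = L @ rng.standard_normal(n)

# Neighbouring components differ little; i.i.d. components would not
print(np.max(np.abs(np.diff(u))))
```

The kernel plays exactly the role of the "special relationship between components" asked about: it specifies the covariance between the nodal values at any two points of the domain.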

Based on your comment, I think you might want to look at the article,

Schaback, R., & Wendland, H. (2006). Kernel techniques: from machine learning to meshless methods. Acta Numerica, 1–97. http://doi.org/10.1017/S0962492904000077

There, Sobolev spaces are studied in relation to their RKHS. It is an engineering article rather than a mathematics one; nevertheless, it provides many references for further reading.
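Following the suggestion in the comments about graphs, here is a minimal sketch; the path graph and the choice of $(I+L)^{-1}$ as a stand-in for an inverse power of the graph Laplacian are assumptions for illustration:

```python
import numpy as np

# Path graph on n nodes: combinatorial graph Laplacian L = D - A
n = 50
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Kernel K = (I + L)^{-1}, a discrete Sobolev-type reproducing kernel
# (the identity shift makes the matrix invertible)
K = np.linalg.inv(np.eye(n) + L)

# Normalize to correlations: relatedness decays with graph distance
c = K / np.sqrt(np.outer(np.diag(K), np.diag(K)))
print(c[0, 1] > c[0, 10] > c[0, 40])  # True: nearer nodes are more related
```

This recovers the questioner's intuition in a setting with no continuity or differentiability at all: the graph structure alone, through the Laplacian, induces the "nearness" relation between components.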