I am reading an article about automatic scale selection, as used in image feature extraction, but what I need is the mathematical part of it. I have a problem with understanding the following:
Given an image: $f: \mathbb{R}^D \rightarrow \mathbb{R}$
And its scale-space representation: $L: \mathbb{R}^D \times \mathbb{R}_+ \rightarrow \mathbb{R}$
$L(x;t) = \int_{\xi\in\mathbb{R}^N}f(x - \xi)g(\xi)d \xi$ (convolution)
where $g: \mathbb{R}^N \times \mathbb{R}_+ \rightarrow \mathbb{R}$ denotes the Gaussian kernel.
$g(x;t) = \frac{1}{(2 \pi t)^{\frac{D}{2}}} \exp\left(-\frac{x_1^2+\cdots+x_D^2}{2t}\right)$
$t$ is referred to as the scale parameter (the variance of the Gaussian, $t = \sigma^2$).
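To check my understanding of the definition, here is a small numerical sketch (my own, not from the article) using SciPy; I am assuming the identification $t = \sigma^2$ above, so `gaussian_filter` gets `sigma = sqrt(t)`:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(f, t):
    """L(x; t): convolution of f with a Gaussian of variance t.
    gaussian_filter expects a standard deviation, hence sigma = sqrt(t)."""
    return gaussian_filter(f, sigma=np.sqrt(t), mode='nearest')

# toy image: a noisy vertical step edge
rng = np.random.default_rng(0)
f = (np.arange(64)[None, :] > 32).astype(float) + 0.1 * rng.standard_normal((64, 64))
L = scale_space(f, t=4.0)  # smoothed copy at scale t = sigma^2 = 4
```

As I understand it, increasing $t$ suppresses progressively more fine-scale structure, which is what "scale parameter" means here.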
So, edge detection: at each scale level, edges are defined as the points at which the gradient magnitude assumes a local maximum in the gradient direction.
If $\partial_v$ denotes the directional derivative in the gradient direction $v$, this edge definition can be written as:
$\tilde{L}_{vv} = L_v^2 L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} = 0$

$\tilde{L}_{vvv} = L_v^3 L_{vvv} = L_x^3 L_{xxx} + 3 L_x^2 L_y L_{xxy} + 3 L_x L_y^2 L_{xyy} + L_y^3 L_{yyy} < 0$
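For what it's worth, I tried to verify these two conditions numerically on a smooth step edge extended along $y$ (again my own sketch, assuming SciPy's Gaussian-derivative filters for $L_x, L_{xx}, \dots$):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def d(L, order_y, order_x, sigma):
    # Gaussian derivative of the image; order = (derivs along y, derivs along x)
    return gaussian_filter(L, sigma=sigma, order=(order_y, order_x), mode='nearest')

# smooth vertical edge: a sigmoid ramp along x, constant along y
x = np.linspace(-5, 5, 101)
f = np.tile(1.0 / (1.0 + np.exp(-x)), (101, 1))

s = 2.0  # sigma = sqrt(t), in pixels
Lx,  Ly  = d(f, 0, 1, s), d(f, 1, 0, s)
Lxx, Lxy, Lyy = d(f, 0, 2, s), d(f, 1, 1, s), d(f, 2, 0, s)
Lxxx, Lxxy, Lxyy, Lyyy = d(f, 0, 3, s), d(f, 1, 2, s), d(f, 2, 1, s), d(f, 3, 0, s)

# the two edge conditions, written out in x/y derivatives
Lvv  = Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy
Lvvv = Lx**3 * Lxxx + 3 * Lx**2 * Ly * Lxxy + 3 * Lx * Ly**2 * Lxyy + Ly**3 * Lyyy
```

The conditions predict that $\tilde L_{vv}$ has a zero crossing at the edge centre (column 50 here) while $\tilde L_{vvv}$ is negative there, and the sketch reproduces that.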
What I need to understand is: how is the directional derivative taken here? And why do they need the second- and third-order directional derivatives (rather than the first and second order)?
EDIT 1: By the way, how is it possible that the definition of $g$ says $g: \mathbb{R}^N \times \mathbb{R}_+ \rightarrow \mathbb{R}$,

but then: $g(x;t) = \frac{1}{(2 \pi t)^{\frac{D}{2}}} \exp\left(-\frac{x_1^2+\cdots+x_D^2}{2t}\right)$?

So we have an $(N+1)$-dimensional function in the first case, but then it looks like we have $(D+1)$ dimensions in the second case. Isn't that a mistake? This is an official article, however..