3

I am trying to calculate the intersection point (if any) of two line segments for a 2D computer game. I am trying to use this method, but I want to make sure I understand what is going on as I do it. The method talks about using cross products, calculated as the determinant of a 2×2 matrix: $x_1 y_2 - x_2 y_1$.

My confusion comes from my remembering that a cross product gives a vector perpendicular to the other two... but it seems to me that calculating that determinant should just give a scalar: we end up with a single number at the end of it.

Where is my misunderstanding?
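(For concreteness, here is the scalar computation I mean, as a quick Python sketch; `cross2d` is just my own name for it:)

```python
def cross2d(ax, ay, bx, by):
    """Scalar '2D cross product' of (ax, ay) and (bx, by):
    the determinant | ax ay ; bx by | = ax*by - ay*bx."""
    return ax * by - ay * bx

print(cross2d(1, 0, 0, 1))  # unit x and y vectors -> 1
print(cross2d(2, 3, 4, 6))  # parallel vectors -> 0
```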

  • 1
Smashery, to expand on Srivatsan's last comment: note that if you take the cross product of two vectors $(a,b,0)$ and $(c,d,0)$ lying in the x-y plane, the result is $\left(0,0,\begin{vmatrix}a&b\\c&d\end{vmatrix}\right)$. – 2011-09-03

5 Answers

3

Perhaps understanding the following definition of the cross product will eliminate your confusion. For two vectors $a$ and $b$ in $\mathbb{R}^3$, the function from $\mathbb{R}^3$ to $\mathbb{R}$ determined by the rule $c \mapsto \det[a, b, c]$ is a linear form on $\mathbb{R}^3$, that is, a real-valued linear function on $\mathbb{R}^3$. As such, it can be shown that there is a unique vector in $\mathbb{R}^3$, called the cross product of $a$ and $b$ and denoted by $a \times b$, such that $\langle a \times b, c \rangle = \det[a, b, c]$, where $\langle \cdot, \cdot \rangle$ denotes the standard Euclidean inner product on $\mathbb{R}^3$.

Not only does this definition elucidate precisely how the cross product is related to the determinant, but determining the "orientation" of the normal vector doesn't depend on the rather bizarre notion of curling your fingers around something to see which way your thumb is pointing. Also, presuming the basic properties of the determinant, many well-known properties of the cross product, which are often proved by various geometric arguments, are immediate.
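As a concrete numerical check of the defining identity $\langle a \times b, c \rangle = \det[a, b, c]$, here is a quick Python sketch (the helper names are my own):

```python
def cross(a, b):
    # classical 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    # determinant of the 3x3 matrix with rows a, b, c
    # (cofactor expansion along the first row; rows vs. columns
    # makes no difference, since det is transpose-invariant)
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)
assert dot(cross(a, b), c) == det3(a, b, c)  # both equal -3 here
```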

0

I think there's an interesting parallel in multivariable calculus that may somehow justify the terminology (very loosely).

If $\mathbf{G} = P\mathbf{i} + Q\mathbf{j}$, and $C$ is the boundary of a simply-connected region $D$ in the plane, then Green's Theorem gives:

$ \int_C \mathbf{G} \cdot \mathrm d\mathbf{r} = \int_C P \mathrm dx + Q \mathrm dy = \iint_D \left( \frac{\partial}{\partial x} Q - \frac{\partial}{\partial y}P \right)\mathrm dA$.

Then we also have an identity for $\mathbf{F} = P\mathbf{i} + Q\mathbf{j} + R\mathbf{k}$ involving the $\mathbf{curl}$ of the field:

$ \int_C \mathbf{F} \cdot \mathrm d\mathbf{r} = \int_C P \mathrm dx + Q \mathrm dy + R \mathrm dz = \iint_D \mathbf{curl}(\mathbf{F}) \cdot \mathbf{k} \mathrm dA$.

Observe that $\dfrac{\partial}{\partial x}Q - \dfrac{\partial}{\partial y}P$ can be regarded as the determinant of $\begin{bmatrix}\frac{\partial}{\partial x} & \frac{\partial}{\partial y} \\ P & Q \end{bmatrix}$, while $\mathbf{curl}(\mathbf{F})$ is the determinant of

$\begin{bmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ P & Q & R\end{bmatrix}$,

which is conveniently notated $\mathbf{\nabla} \times \mathbf{F}$. This gives some (very stretched) credence to the "2-dimensional" cross-product, as defined in your question above, as we could then write:

$\mathbf{\nabla} \times \mathbf{G} = \begin{vmatrix} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \\ P & Q \end{vmatrix} = \frac{\partial}{\partial x} Q - \frac{\partial}{\partial y}P $.

OK, I realize that in the 3-dimensional case there was also the dot product with $\mathbf{k}$, but like I said, this is a loose connection.

0

The determinant just gives the (signed) magnitude of the cross product; the right-hand rule gives the direction.

Since you are talking about 2D vectors, let them be $\bar a= a_{x}\hat x+a_{y}\hat y$ and $\bar b=b_{x}\hat x+b_{y}\hat y$.

Now $\bar a\times \bar b = (a_{x}b_{y}-a_{y}b_{x})\hat z$.

The term $(a_{x}b_{y}-a_{y}b_{x})$ is the determinant you are calculating.
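A quick Python sketch of that, embedding the 2D vectors in the x-y plane and taking the full 3D cross product (function names are mine):

```python
def cross3(a, b):
    """3D cross product of two 3-tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# embed the 2D vectors (2, 3) and (4, 7) in the x-y plane (z = 0)
a = (2.0, 3.0, 0.0)
b = (4.0, 7.0, 0.0)
print(cross3(a, b))  # (0.0, 0.0, 2.0): only the z-component survives,
                     # and it equals the 2x2 determinant 2*7 - 3*4
```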

0

In modern mathematics, the cross product of a sequence of vectors is defined to be the dual of the exterior product of those vectors. The standard situation is when we are given two three-dimensional vectors $v$ and $w$. Then $v \times w = (v \wedge w)^* = (v_2 w_3 - w_2 v_3, v_3 w_1 - v_1 w_3, v_1 w_2 - v_2 w_1)$.

On the other hand, if our vectors live in a two-dimensional space, then the dual of the wedge $v \wedge w$ is just a scalar: $\begin{align}(v \wedge w)^* &= \bigl((v_1 e_1 + v_2 e_2) \wedge (w_1 e_1 + w_2 e_2)\bigr)^* \\&= (v_1 e_1 \wedge w_2 e_2 + v_2 e_2 \wedge w_1 e_1)^* \\&= v_1 w_2(e_1 \wedge e_2)^* + v_2 w_1 (e_2 \wedge e_1)^* \\&= v_1 w_2 - v_2 w_1.\end{align}$
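The antisymmetry used in the last step ($e_2 \wedge e_1 = -\,e_1 \wedge e_2$) can be sanity-checked numerically; a Python sketch, with `wedge2` a name I made up:

```python
def wedge2(v, w):
    # scalar dual of the 2D wedge product: v1*w2 - v2*w1
    return v[0]*w[1] - v[1]*w[0]

v, w = (3, 1), (2, 5)
assert wedge2(v, w) == -wedge2(w, v)  # antisymmetry: v^w = -(w^v)
assert wedge2(v, v) == 0              # any vector wedged with itself is 0
print(wedge2(v, w))  # 3*5 - 1*2 = 13
```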

0

You are dealing with a completely two-dimensional situation; therefore the cross product of two vectors $x:=(x_1,x_2, x_3)$, $y:=(y_1,y_2,y_3)\in{\mathbb R}^3$ plays no rôle in the game.

In two dimensions we have (a) a "sharp measure" of orthogonality of two vectors $x:=(x_1,x_2)$, $y:=(y_1,y_2)$, namely their scalar product $\langle x,y\rangle:=x_1 y_1+x_2y_2=|x|\>|y|\>\cos(\theta)\ ,$ where $\theta\in[0,\pi]$ is the angle enclosed by $x$ and $y$. The "sharpness" of this measure stems from the fact that $\cos'\bigl({\pi\over2}\bigr)\ne0$.

Now when it comes to intersecting two given lines, orthogonality is not the problem; possible near-parallelism is. This means that we need (b) a "sharp measure" of linear independence of $x$ and $y$. Such a measure is provided by the determinant $x\>\wedge\>y:=x_1 y_2-x_2 y_1=|x|\>|y|\>\sin(\phi)\ ,$ where $\phi$ denotes the (signed) angle by which you have to turn the arrow $x$ into the direction of the arrow $y$. If $\phi$ is near $0$ or $\pi$ we are in a bad situation, because $x\>\wedge\>y$ appears in the denominator of the formula for the intersection point of the two lines $\ell_1: \quad t\mapsto t\> x +a\>,\qquad \ell_2: \quad s\mapsto s\> y+b\ .$ Here $x$ and $y$ are the direction vectors of $\ell_1$ and $\ell_2$, and $a$, $b$ are constant vectors in ${\mathbb R}^2$. The "sharpness" of this measure of linear independence stems from $\sin'(0)=1$ and $\sin'(\pi)=-1$.
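A Python sketch of this intersection formula, with the determinant $x\wedge y$ in the denominator guarding against near-parallel directions (function names and the tolerance `eps` are my own choices):

```python
def wedge(u, v):
    # scalar "2D cross product": u1*v2 - u2*v1
    return u[0]*v[1] - u[1]*v[0]

def intersect(a, x, b, y, eps=1e-12):
    """Intersection of the lines t -> t*x + a and s -> s*y + b.
    Returns None when the direction vectors x and y are (near-)parallel,
    i.e. when the determinant x ^ y in the denominator is tiny."""
    d = wedge(x, y)
    if abs(d) < eps:
        return None  # near-parallel: the formula blows up
    # solving t*x - s*y = b - a by wedging both sides with y
    # gives t = ((b - a) ^ y) / (x ^ y), since y ^ y = 0
    diff = (b[0] - a[0], b[1] - a[1])
    t = wedge(diff, y) / d
    return (a[0] + t * x[0], a[1] + t * x[1])

# horizontal line through (0, 1) meets vertical line through (2, 0)
print(intersect((0, 1), (1, 0), (2, 0), (0, 1)))  # (2.0, 1.0)
print(intersect((0, 0), (1, 0), (0, 1), (2, 0)))  # None (parallel)
```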