The geometric multiplicity of $\lambda$ tells you how big a subspace of $V$ you can find on which $T$ acts simply as "multiplication by $\lambda$" (that is, it is the dimension of the subspace spanned by the eigenvectors associated to $\lambda$).
The analytic/algebraic multiplicity of $\lambda$ tells you how big that space "should" be for $V$ and $T$ to have a "nice" decomposition of the following kind: you can express $V$ as a direct sum of subspaces $E_{\lambda_1}$, $E_{\lambda_2},\ldots,E_{\lambda_k}$ (where $\lambda_1,\ldots,\lambda_k$ are the distinct roots of the characteristic polynomial) so that on each $E_{\lambda_i}$, $T$ acts just by "multiplication by $\lambda_i$". For $V$ to really equal the direct sum of these subspaces, you need $\dim(E_{\lambda_i})$, which is the geometric multiplicity of $\lambda_i$, to equal the algebraic multiplicity of $\lambda_i$ for every $i$.
It has other properties, but I think that's a good place to start.
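If you want to experiment with this, here is a minimal sketch using Python's `sympy` library (the $3\times 3$ matrix is just an illustrative example of mine, not one from the question) that compares the two multiplicities for each eigenvalue and checks diagonalizability:

```python
# Illustrative sketch (sympy; the example matrix is mine, not from the question).
from sympy import Matrix

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])

# eigenvects() returns tuples (eigenvalue, algebraic multiplicity, eigenspace basis)
for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, len(basis))   # geometric multiplicity = len(basis)

# Diagonalizable exactly when the two multiplicities agree for every eigenvalue;
# here lambda = 2 has algebraic multiplicity 2 but geometric multiplicity 1.
print(A.is_diagonalizable())           # False
```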
Yes, the geometric multiplicity is the largest possible number of linearly independent eigenvectors of $T$ associated to $\lambda$ (vectors $\mathbf{v}$, $\mathbf{v}\neq\mathbf{0}$, such that $T(\mathbf{v}) = \lambda\mathbf{v}$; that is, vectors on which $T$ acts just by "multiplication by $\lambda$").
No, not all eigenvalues have the same geometric multiplicity; for example, in the matrix $\left(\begin{array}{cccc} 2 & 1 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{array}\right),$ the characteristic polynomial is $(2-\lambda)^3(1-\lambda)$, so the two eigenvalues are $\lambda_1=1$ and $\lambda_2=2$. The eigenvalue $\lambda_1=1$ has algebraic and geometric multiplicities both equal to $1$; $\lambda_2=2$ has algebraic multiplicity $3$ and geometric multiplicity $2$. (You can check the geometric multiplicity by finding the nullity of $A-2I$.)
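If you want to verify those numbers, here is a short `sympy` sketch (using the $4\times 4$ matrix above) that computes the relevant nullities directly:

```python
# Quick check of the 4x4 example above (sympy sketch).
from sympy import Matrix, eye

A = Matrix([[2, 1, 0, 0],
            [0, 2, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 1]])

# geometric multiplicity of lambda = 2: nullity(A - 2I) = 4 - rank(A - 2I)
print(4 - (A - 2 * eye(4)).rank())        # 2
# geometric multiplicity of lambda = 1: nullity(A - I)
print(len((A - 1 * eye(4)).nullspace()))  # 1
```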
Added. Since the geometric multiplicity of an eigenvalue $\lambda_i$ is the dimension of the subspace $E_{\lambda_i}$, your first task in finding that dimension is to identify the vectors $\mathbf{v}$ for which $T(\mathbf{v})=\lambda_i\mathbf{v}$. This is equivalent to finding the vectors for which $(T-\lambda_i I)(\mathbf{v})=\mathbf{0}$. The reason this is a better problem to tackle is that it is easier to solve a system that looks like $B\mathbf{v}=\mathbf{0}$, than one that looks like $A\mathbf{v}=\lambda\mathbf{v}$.
So, you find the nullspace of $T-\lambda_iI$, that is, the collection of all vectors $\mathbf{v}$ for which $(T-\lambda_iI)(\mathbf{v})=\mathbf{0}$. Its dimension is precisely the geometric multiplicity of $\lambda_i$, so the geometric multiplicity of $\lambda_i$ is found by computing $\mathrm{nullity}(T-\lambda_iI)$.
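Packaged as code, this recipe is just rank–nullity; the helper below is my own illustration (the name `geometric_multiplicity` is not standard), sketched with `sympy`:

```python
# Hypothetical helper (name and signature are mine) packaging the recipe:
# geometric multiplicity of lam = nullity(A - lam*I) = n - rank(A - lam*I).
from sympy import Matrix, eye

def geometric_multiplicity(A: Matrix, lam) -> int:
    n = A.rows
    return n - (A - lam * eye(n)).rank()
```

Applied to the $4\times 4$ matrix from the earlier example, `geometric_multiplicity(A, 2)` returns $2$ and `geometric_multiplicity(A, 1)` returns $1$.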
For your specific example: we begin with the matrix of $T$ with respect to the standard basis, $A =\left(\begin{array}{cccc} 1 & 1 & \cdots & 1\\ 0 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{array}\right).$ The characteristic polynomial is $\det(A-tI) = (1-t)^n$, so the only eigenvalue is $\lambda=1$, with algebraic multiplicity $n$.
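As a sanity check, here is an illustrative `sympy` sketch (with a sample size $n=5$, my choice) that builds this upper-triangular all-ones matrix and confirms the characteristic polynomial:

```python
# Illustrative sympy sketch for a sample size n = 5: the upper-triangular
# all-ones matrix has characteristic polynomial (1 - t)^n, up to sign,
# since sympy's charpoly uses det(t*I - A).
from sympy import Matrix, symbols, factor

n = 5
t = symbols('t')
A = Matrix(n, n, lambda i, j: 1 if j >= i else 0)  # ones on and above the diagonal

print(factor(A.charpoly(t).as_expr()))  # (t - 1)**5
```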
To find the geometric multiplicity, take $A-1I$ ("$1I$" because we are taking $\lambda = 1$) and find its nullspace. Since $A - I = \left(\begin{array}{ccccc} 0 & 1 & 1 &\cdots & 1\\ 0 & 0 & 1 & \cdots & 1\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{array}\right),$ finding the reduced row-echelon form will give you the solutions to $(A-I)\mathbf{x}=\mathbf{0}$. The reduced row-echelon form of $A-I$ is $\left(\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & 0 \end{array}\right),$ so $\mathbf{x}=(x_1,x_2,\ldots,x_n)$ is in the nullspace if and only if $x_2=x_3=\cdots=x_n=0$. So the eigenvectors of $\lambda=1$ are all vectors of the form $(a,0,0,\ldots,0)$ with $a$ nonzero; strictly speaking, $\mathbf{0}$ is always a solution to $A\mathbf{x}=\lambda \mathbf{x}$ for any $\lambda$, but we declare by fiat that an eigenvector has to be nonzero (this has no bearing on the geometric multiplicity of $\lambda$, because $\mathbf{0}$ can never be in a linearly independent set). A basis for this nullspace is given by $(1,0,\ldots,0)$, so the nullspace has dimension $1$. This dimension is the geometric multiplicity of $\lambda=1$.
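Continuing the same sketch (again with the illustrative choice $n=5$), `sympy`'s `rref` and `nullspace` reproduce this computation, with pivots in columns $2$ through $n$ and a one-dimensional nullspace:

```python
# Continuing the sketch above: row-reduce A - I and read off the nullspace.
from sympy import Matrix, eye

n = 5
A = Matrix(n, n, lambda i, j: 1 if j >= i else 0)

rref_form, pivot_cols = (A - eye(n)).rref()
print(rref_form)                 # pivots in columns 2..n, last row all zeros
print((A - eye(n)).nullspace())  # single basis vector (1, 0, ..., 0) -> dimension 1
```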
So, in summary: $\lambda=1$ is the only eigenvalue; it has algebraic multiplicity $n$, and geometric multiplicity $1$. The eigenvectors are all nonzero multiples of $(1,0,0,\ldots,0)$.