(1) Your calculation is fine. This should be easier to read with summation notation:
$$ \begin{array}{ll} \displaystyle f(xy) & \displaystyle = f\left(\left(\sum_{g\in G} a_g g\right)\left(\sum_{h\in G}b_hh\right)\right) \\ & \displaystyle =f\left(\sum_{g,h\in G}a_g b_h gh\right) \\ & \displaystyle =f\left(\sum_{\sigma\in G}\left(\sum_{gh=\sigma }a_gb_h\right)\sigma\right) \\ & \displaystyle =\sum_{\sigma\in G}\left(\sum_{gh=\sigma} a_gb_h\right) \\ & \displaystyle = \left(\sum_{g\in G}a_g\right)\left(\sum_{h\in G}b_h\right) \\ & \displaystyle = f\left(\sum_{g\in G}a_gg\right)f\left(\sum_{h\in G}b_hh\right) \\ & =f(x)f(y). \end{array}$$
Notice there's no reason to enumerate the elements of $G$; we can instead use the elements of $G$ themselves to index a set of coefficients $\{a_g:g\in G\}$ (so $a_g$ is the coefficient of $g$). We then apply the distributive property and collect like terms as usual. When $g,h\in G$ appears under a summation sign, it means we are summing over all possible values of $g$ and $h$ in $G$. To collect like terms, we consider all pairs $g,h$ for which $gh$ equals a given element $\sigma\in G$; all of the coefficients $a_gb_h$ with $gh=\sigma$ combine to become the coefficient of $\sigma$.
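If you want to check this computation concretely, here is a minimal Python sketch (names are mine, not part of the problem) representing elements of $\mathbb{Z}[\mathbb{Z}_n]$ as dicts of group element $\mapsto$ coefficient, and verifying that the coefficient-sum map $f$ is multiplicative:

```python
from collections import defaultdict

def multiply(x, y, n):
    """Multiply two elements of Z[Z_n], stored as dicts g -> coefficient.
    Collect like terms: the coefficient of sigma is the sum of a_g*b_h over gh = sigma."""
    prod = defaultdict(int)
    for g, a_g in x.items():
        for h, b_h in y.items():
            prod[(g + h) % n] += a_g * b_h  # the group operation is addition mod n
    return dict(prod)

def f(x):
    """The coefficient-sum (augmentation) map Z[Z_n] -> Z."""
    return sum(x.values())

n = 5
x = {0: 2, 1: -3, 4: 7}   # 2[0] - 3[1] + 7[4]
y = {2: 1, 3: 4}          # [2] + 4[3]
assert f(multiply(x, y, n)) == f(x) * f(y)
```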
In the future, there is actually a quicker way to check this:
Lemma ("extending linearly"). Suppose $R$ is a ring, $S$ is a central subring (meaning a subring for which $sr=rs$ for all $s\in S$ and $r\in R$), and every element of $R$ is uniquely expressible as a sum of terms of the form $sx$ where $s\in S$ and $x\in X$ for some "basis" set $X$. If $T$ is another ring with $S$ as a central subring, and $f:R\to T$ a function which is a homomorphism of their underlying additive groups (i.e. preserves addition), then $f$ is a ring homomorphism if and only if $f(xy)=f(x)f(y)$ for all $x,y\in X$ and $f(sx)=sf(x)$ for all $s\in S,x\in X$.
Proving this would be a nice exercise. It's easier to prove if $X$ is closed under multiplication, but that hypothesis isn't necessary for the conclusion to follow.
Here's how it applies: $\mathbb{Z}$ is a central subring of the integral group ring $\mathbb{Z}[G]$; the group $G$ is a "$\mathbb{Z}$-basis" for $\mathbb{Z}[G]$ (meaning every element of $\mathbb{Z}[G]$ is uniquely expressible as a $\mathbb{Z}$-linear combination of group elements $g\in G$); $\mathbb{Z}$ is a central subring of $\mathbb{Z}$ too; and the map $\mathbb{Z}[G]\to\mathbb{Z}$ we've defined preserves addition. So it suffices to verify $f(gh)=f(g)f(h)$ for all $g,h\in G$, in other words $1=1\cdot 1$, and $f(ng)=nf(g)$ for all $n\in\mathbb{Z},g\in G$, in other words $n=n\cdot 1$.
Sometimes authors will even define a ring homomorphism $f:R\to T$ without saying where $f$ sends every element of $R$: instead they'll just say where $f$ sends elements of $X$, and then "extend linearly" to define a function on all of $R$ via $f(\sum sx):=\sum sf(x)$.
This can be used to, for example, take any group homomorphism $G\to H$ and extend linearly to a ring homomorphism $\mathbb{Z}[G]\to\mathbb{Z}[H]$.
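For instance, here is a minimal sketch (function names are mine) extending the reduction homomorphism $\mathbb{Z}_6\to\mathbb{Z}_3$ linearly to a map $\mathbb{Z}[\mathbb{Z}_6]\to\mathbb{Z}[\mathbb{Z}_3]$, with group-ring elements stored as dicts of group element $\mapsto$ coefficient:

```python
from collections import defaultdict

def extend_linearly(phi, x):
    """Extend a group homomorphism phi: G -> H to a map Z[G] -> Z[H]
    by applying phi to each basis element and keeping its coefficient."""
    out = defaultdict(int)
    for g, a_g in x.items():
        out[phi(g)] += a_g  # coefficients merge when phi(g) == phi(g')
    return dict(out)

phi = lambda k: k % 3                     # the reduction homomorphism Z_6 -> Z_3
x = {0: 1, 1: 2, 4: 5}                    # [0] + 2[1] + 5[4] in Z[Z_6]
print(extend_linearly(phi, x))            # [1] and [4] both land on [1] in Z_3
```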
(2) This is a classic ring homomorphism, called the Frobenius map. Call it $\wp(u)=u^p$. It will be a ring endomorphism in any commutative ring of characteristic $p$, and in particular will be a field automorphism of any finite field of characteristic $p$.
Multiplicativity is obvious: $\wp(xy)=(xy)^p=x^py^p=\wp(x)\wp(y)$. Checking additivity requires the binomial theorem, $\wp(x+y)=(x+y)^p=\sum_{k=0}^p \binom{p}{k} x^{p-k}y^k$. The only indices $k$ for which the binomial coefficient $\binom{p}{k}$ is not a multiple of $p$ (and hence not equal to $0$ in characteristic $p$) are $k=0$ and $k=p$, so the sum simplifies to $x^py^0+x^0y^p=x^p+y^p=\wp(x)+\wp(y)$. (The binomial theorem works in other commutative rings too, because binomial coefficients are whole numbers, and multiplying by a whole number is just repeated addition.)
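Both ingredients of this argument are easy to check numerically (a quick sketch, not part of the problem):

```python
from math import comb

p = 7
# In the binomial expansion of (x+y)^p, every middle coefficient C(p,k)
# with 0 < k < p is divisible by p, so those terms vanish in characteristic p.
for k in range(1, p):
    assert comb(p, k) % p == 0

# Consequently (x+y)^p = x^p + y^p holds in Z_p:
for x in range(p):
    for y in range(p):
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
```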
In general, the Frobenius map is not surjective. For example, in the polynomial ring $\mathbb{Z}_p[T]$ (where now $T$ is a variable), $\wp$ sends any polynomial to a polynomial involving only powers of $T$ whose exponents are multiples of $p$. Indeed, we have $\wp(f(T))=f(T)^p=f(T^p)$ for any $f\in\mathbb{Z}_p[T]$ (each coefficient satisfies $a^p=a$ by Fermat's little theorem), so the image $\wp(\mathbb{Z}_p[T])=\mathbb{Z}_p[T^p]$ does not contain, for instance, $T$ itself.
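A sketch of this in code (the function name is mine), storing a polynomial as a dict of exponent $\mapsto$ coefficient and using the identity $f(T)^p=f(T^p)$ in $\mathbb{Z}_p[T]$:

```python
def frob_poly(f_coeffs, p):
    """Frobenius on Z_p[T]: (sum a_i T^i)^p = sum a_i T^(i*p) mod p,
    by the freshman's-dream identity plus Fermat's little theorem."""
    return {i * p: a % p for i, a in f_coeffs.items() if a % p}

# Every polynomial in the image has only exponents divisible by p,
# so e.g. T itself (exponent 1) is never hit.
p = 3
image_example = frob_poly({0: 1, 1: 2, 2: 1}, p)   # wp(1 + 2T + T^2)
assert all(e % p == 0 for e in image_example)
```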
Nor, in general, is the Frobenius map injective. For example, in the truncated polynomial ring $\mathbb{Z}_p[\varepsilon]/(\varepsilon^p)$, we have the equality $\wp(0)=\wp(\varepsilon)$ even though $0\ne\varepsilon$. (I am abusing notation by writing $\varepsilon$ for the coset $\varepsilon+(\varepsilon^p)$.)
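A minimal sketch of this non-injectivity (the function name is mine; elements of $\mathbb{Z}_p[\varepsilon]/(\varepsilon^p)$ are stored as dicts of exponent $\mapsto$ coefficient, with exponents below $p$):

```python
def frob_trunc(f_coeffs, p):
    """Frobenius in Z_p[eps]/(eps^p): (sum a_i eps^i)^p = sum a_i eps^(i*p) mod p,
    reducing coefficients mod p and discarding exponents >= p."""
    # Every eps^(i*p) with i >= 1 dies, since i*p >= p.
    return {i * p: a % p for i, a in f_coeffs.items() if a % p and i * p < p}

p = 5
assert frob_trunc({1: 1}, p) == {}   # wp(eps) = eps^5 = 0
assert frob_trunc({}, p) == {}       # wp(0) = 0, so wp(0) = wp(eps): not injective
```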
However, for rings of characteristic $p$ with no nilpotent elements (nonzero elements some power of which is $0$, like $\varepsilon$ in the previous example), such as finite fields including $\mathbb{Z}_p$, it is easy to verify the map is injective. A ring homomorphism is injective if and only if it has trivial kernel. What is the kernel of $\wp(u)=u^p$? Well, $u^p=0$ implies $u=0$ when there are no nonzero nilpotents, so the kernel is trivial, $(0)$.
Surjectivity of $\wp$ for finite fields follows from injectivity. (If $Z$ is any finite set and $f:Z\to Z$ is an injective function, then $f$ is a bijection.)
For $\mathbb{Z}_p$, the Frobenius map $\wp$ is actually very simple: Fermat's little theorem says that $\wp(x)=x$ for all $x\in\mathbb{Z}_p$, so it is literally just the identity function.
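This is easy to confirm numerically (a sketch, representing $\mathbb{Z}_p$ as $\{0,1,\dots,p-1\}$ with arithmetic mod $p$):

```python
def frob(u, p):
    """The Frobenius map u -> u^p in Z_p."""
    return pow(u, p, p)

# On Z_p the Frobenius map is the identity, by Fermat's little theorem.
for p in (2, 3, 5, 7, 11):
    assert all(frob(x, p) == x for x in range(p))
```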
(3) The map $\mathbb{Z}[\mathbb{Z}_n]\to\mathbb{Z}[x]$ given by extending $[k]\mapsto x^k$ linearly is actually not by itself well-defined, so calling it a map at all is a lie: we can have $[k]=[r]$ with $x^k\ne x^r$, and then the rule doesn't tell us whether to send the element $[k]$ to $x^k$ or to $x^r$. This is why it's important that the directions specify $[k]\mapsto x^k$ only for integers $0\le k\le n-1$. Then it's just
$$ f(c_0[0]+c_1[1]+\cdots+c_{n-1}[n-1])=c_0x^0+c_1x^1+\cdots+c_{n-1}x^{n-1}. $$
Is this a ring homomorphism? No. It does not preserve multiplication. As in the lemma I gave, it suffices to check that $f([k][r])=f([k])f([r])$ for all $0\le k,r\le n-1$.
Inside $\mathbb{Z}[\mathbb{Z}_n]$, all of the elements $[k]$ are just powers of $[1]$ (i.e. $[k]=[1]^k$), and these powers eventually "cycle" back around to the multiplicative identity $[0]$; so $[1]$ is a "root of unity." That is, the powers $[1],[1]^2,[1]^3,\cdots$, or in other words $[1],[2],[3],\cdots$, eventually go $[n-2],[n-1],[0]$. However, this doesn't happen with the powers of $x$ inside $\mathbb{Z}[x]$: the sequence $x^1,x^2,x^3,\cdots$ just keeps going on forever without ever cycling back around to $x^0$. Therefore, for example,
$$ \begin{array}{l} f([n-1][1])=f([0])=x^0 \\ f([n-1])f([1])=x^{n-1}x^1=x^n \end{array}$$
but $x^0\ne x^n$, so $f([n-1][1])\ne f([n-1])f([1])$, meaning $f$ does not preserve multiplication.
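Concretely (a sketch, with both rings stored as dicts of exponent $\mapsto$ coefficient, so that $f$ is literally the identity on this representation; the function names are mine):

```python
def gr_mul(x, y, n):
    """Multiply in Z[Z_n]: group elements add mod n."""
    prod = {}
    for g, a in x.items():
        for h, b in y.items():
            s = (g + h) % n
            prod[s] = prod.get(s, 0) + a * b
    return prod

def poly_mul(x, y):
    """Multiply in Z[x]: exponents add with no wraparound."""
    prod = {}
    for g, a in x.items():
        for h, b in y.items():
            prod[g + h] = prod.get(g + h, 0) + a * b
    return prod

# f sends [k] -> x^k for 0 <= k <= n-1, so f preserves multiplication
# iff the two products agree, and on [n-1]*[1] they don't:
n = 4
a, b = {n - 1: 1}, {1: 1}             # [n-1] and [1]
assert gr_mul(a, b, n) == {0: 1}      # [n-1][1] = [0], so f gives x^0
assert poly_mul(a, b) == {n: 1}       # x^(n-1) * x^1 = x^n
```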
Indeed, the image of $f$ is not a subring of $\mathbb{Z}[x]$. The image is the $\mathbb{Z}$-span of $\{x^0,x^1,\cdots,x^{n-1}\}$, i.e. all polynomials of degree less than $n$, and this subset is not closed under multiplication, so the image cannot be a subring. If a function's image is not a subring, the function itself cannot be a ring homomorphism.