This is a simple Lagrange multiplier problem. Let's look at the gradient (where it is defined: i.e. not at the origin)
\begin{align}
\nabla_a f(a) = \mathbb{E}\left[k\|a+X\|^{k-2}(a+X)\right]
\end{align}
This (sub)gradient is never $0$ except at the origin. The solution must therefore either be at the origin (which you can obviously eliminate) or on the boundary at a KKT point. The constraint can be phrased as $g(a) = \|a\|^2-r^2 \leq 0$. KKT conditions:
\begin{align}
\nabla_a f(a) &= \lambda \nabla g(a)\\
\mathbb{E}\left[k\|a+X\|^{k-2}(a+X)\right] &= 2\lambda a
\end{align}
Since the distribution of $X$ is radially symmetric, any rotation fixing $a$ leaves the expectation unchanged, so the left-hand side is parallel to $a$: write it as $c(\|a\|)\,a$ for a scalar $c(\|a\|)$. The condition is then satisfied for any $\|a\|=r$, using
\begin{align}
\lambda = \frac{1}{2}c(r)
\end{align}
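As a numerical sanity check of the parallelism claim, here is a small Monte Carlo sketch; the choices $X \sim \mathcal{N}(0, I_2)$, $k=3$, and $r=1$ are illustrative assumptions (the argument only needs radial symmetry of $X$):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3.0, 200_000
a = np.array([0.6, 0.8])             # a point with ||a|| = r = 1

# Radially symmetric X (standard normal is one such choice).
X = rng.standard_normal((n, 2))
Y = a + X                            # samples of a + X
norms = np.linalg.norm(Y, axis=1)

# Monte Carlo estimate of E[k * ||a+X||^{k-2} * (a+X)].
grad = ((k * norms ** (k - 2))[:, None] * Y).mean(axis=0)

# The estimated gradient should be (numerically) parallel to a.
cos = grad @ a / (np.linalg.norm(grad) * np.linalg.norm(a))
print(cos)                           # close to 1
```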
Hence, by radial symmetry and the fact that the origin is not a maximizer, any point on the boundary is a solution to the maximization problem.
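The two facts used here, that $f$ is constant on spheres and grows with the radius, can also be checked numerically; again $X \sim \mathcal{N}(0, I_2)$, $k=3$, $r=1$ are illustrative assumptions, and the same samples are reused across evaluations to reduce Monte Carlo noise:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, r = 3.0, 200_000, 1.0
X = rng.standard_normal((n, 2))      # one shared set of radially symmetric samples

def f(a):
    """Monte Carlo estimate of E[||a + X||^k]."""
    return (np.linalg.norm(a + X, axis=1) ** k).mean()

# f depends only on ||a||: different directions, same radius, near-equal values...
on_boundary = [f(r * np.array([np.cos(t), np.sin(t)])) for t in (0.0, 1.0, 2.5)]

# ...and f grows with the radius, so the constrained max sits on ||a|| = r.
radial = [f(s * np.array([1.0, 0.0])) for s in (0.0, 0.5 * r, r)]
print(on_boundary, radial)
```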
Edit: In response to comments. You don't really need to prove convexity to solve this problem. $f$ is not differentiable everywhere, but we can use a version of the KKT conditions for nondifferentiable functions.
It can be shown that this function is convex for $k\geq 1$. By Proposition 8.18 of Convex Analysis and Monotone Operator Theory in Hilbert Spaces (Bauschke and Combettes), a radially symmetric function (i.e. one that depends only on the norm) is convex whenever its 1D version is convex. The 1D version of $f$ is convex for $k\geq 1$, hence $f$ is convex for $k\geq 1$.
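The convexity of the 1D profile can be illustrated with a midpoint-convexity check on an empirical version of $f$; the choices $X \sim \mathcal{N}(0, I_2)$ and $k = 1.5$ are assumptions for the sketch. (Since each sample function $t \mapsto \|t e_1 + x\|^k$ is itself convex for $k \geq 1$, the empirical average is convex as well, so the gaps below are nonnegative up to floating-point error.)

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 1.5, 50_000
X = rng.standard_normal((n, 2))      # shared samples (common random numbers)

def phi(t):
    """1D profile of f: Monte Carlo estimate of E[||t*e1 + X||^k]."""
    return (np.linalg.norm(np.array([t, 0.0]) + X, axis=1) ** k).mean()

# Midpoint convexity on a grid: phi((s+t)/2) <= (phi(s) + phi(t))/2.
ts = np.linspace(-2.0, 2.0, 9)
gaps = [0.5 * (phi(s) + phi(t)) - phi(0.5 * (s + t))
        for s in ts for t in ts]
print(min(gaps))                     # nonnegative (up to float error)
```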
For $0< k < 1$, the function is quasiconvex; see Bauschke and Combettes, Example 10.28 for this result. This is similar in that the function is still maximized at extreme points (points on the boundary). See "Maximizing and minimizing quasiconvex functions: related properties, existence and optimality conditions via radial epiderivatives" by Fabian Flores-Bazan, Fernando Flores-Bazan, and Cristian Vera.
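For a radially symmetric function, quasiconvexity amounts to the 1D profile being nondecreasing in $|t|$, which again pushes the maximum to the boundary. A sketch of that monotonicity for the subunit exponent, under the same illustrative assumptions ($X \sim \mathcal{N}(0, I_2)$, here with $k = 0.5$):

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 0.5, 200_000
X = rng.standard_normal((n, 2))      # shared radially symmetric samples

def phi(t):
    """Radial profile: Monte Carlo estimate of E[||t*e1 + X||^k]."""
    return (np.linalg.norm(np.array([t, 0.0]) + X, axis=1) ** k).mean()

# For 0 < k < 1 the profile is no longer convex, but it is still
# nondecreasing in |t|; a radial function with a nondecreasing profile
# is quasiconvex and attains its maximum on the boundary.
ts = np.linspace(0.0, 2.0, 9)
vals = np.array([phi(t) for t in ts])
print(vals)
```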