a) Consider a nested sequence $J_1 \subset J_2 \subset \cdots$ of finite subsets of $I$. (For example, let $J_1$ be a single point and add points one at a time to form the $J_n$.) If it is impossible to construct such a sequence with every inclusion strict ($\subsetneq$), then $I$ is finite and we are done. Otherwise, we may assume each inclusion is strict.
Then $\left(\sum_{i \in J_n} a(i)\right)_{n \ge 1}$ is an increasing sequence bounded above by the finite number $\sup_{J \subset I,\, J \text{ finite}} \sum_{i \in J} a(i)$, so the monotone convergence theorem implies that $\lim_{n \to \infty} \sum_{i \in J_n} a(i) = \sup_{J \subset I,\, J \text{ finite}} \sum_{i\in J} a(i)$. Why does this imply $I = \bigcup_{n \ge 1} J_n$?
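The role of exhaustion can be seen in a numeric sketch. The index set and weights below are illustrative assumptions, not part of the problem: $I = \{0, 1, 2, \ldots\}$ with $a(i) = 2^{-(i+1)}$, so the supremum over finite subsets is $1$. A chain that fails to exhaust $I$ still has increasing, bounded partial sums, but they converge to a strictly smaller value:

```python
# Assumed example: I = {0, 1, 2, ...} with positive weights a(i) = 2**-(i+1),
# so sup over finite J of sum_{i in J} a(i) = 1.

def a(i):
    return 2.0 ** -(i + 1)

# Exhausting chain J_n = {0, ..., n-1}: partial sums increase to the sup 1.
s_exhaust = sum(a(n) for n in range(50))

# Non-exhausting chain J_n = first n even indices: still increasing and
# bounded, but the limit is sum_k 2**-(2k+1) = 2/3 < 1.
s_even = sum(a(2 * k) for k in range(50))

print(s_exhaust)  # close to 1.0
print(s_even)     # close to 2/3
```

So the monotone convergence theorem alone only identifies the limit with the supremum over the particular chain $(J_n)$; matching the supremum over all finite subsets is exactly where the question about $\bigcup_{n} J_n$ enters.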
b)
A useful fact here is that absolute convergence of $\sum_{q \in \mathbb{Q}} a(q)$ (and of any sub-series) implies the sum is unchanged by rearrangement; we may take the terms in any order.
Suppose $x \in \mathbb{Q}$. Then for any $\delta>0$,
\begin{align}
|f(x)-f(x-\delta)|
&= f(x)-f(x-\delta)\\
&= \sum_{q \in \mathbb{Q}, q \le x} a(q) - \sum_{q \in \mathbb{Q}, q \le x-\delta} a(q)\\
&= a(x) + \sum_{q \in \mathbb{Q}, q < x} a(q) - \sum_{q \in \mathbb{Q}, q \le x-\delta} a(q)\\
& \ge a(x) > 0,
\end{align}
so $f(x-\delta) \not\to f(x)$ as $\delta \downarrow 0$. (The last inequality uses $\sum_{q \in \mathbb{Q},\, q < x} a(q) \ge \sum_{q \in \mathbb{Q},\, q \le x-\delta} a(q)$, which holds because the right-hand side is a sub-series of the left-hand side with positive terms.) Hence $f$ fails to be left-continuous at every rational $x$.
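This jump can be checked numerically in a truncated model. The enumeration of the rationals in $(0,1)$ and the weights $a(q_n) = 2^{-n}$ below are illustrative assumptions, not part of the problem; truncating after $N$ terms perturbs $f$ by less than $2^{-N}$.

```python
from fractions import Fraction
from itertools import count

def rationals():
    # Enumerate the distinct rationals in (0, 1) by increasing denominator.
    seen = set()
    for d in count(2):
        for p in range(1, d):
            q = Fraction(p, d)
            if q not in seen:
                seen.add(q)
                yield q

gen = rationals()
weights = {next(gen): 2.0 ** -(n + 1) for n in range(200)}  # a(q_n) = 2**-n

def f(x):
    # Truncation of f(x) = sum over rational q <= x of a(q).
    return sum(w for q, w in weights.items() if q <= x)

x = Fraction(1, 2)  # a rational point; it is enumerated first, so a(x) = 1/2
for delta in (1e-2, 1e-4, 1e-6):
    jump = f(x) - f(x - delta)
    assert jump >= weights[x] - 1e-9  # the jump never drops below a(x)
```

Shrinking $\delta$ never brings $f(x-\delta)$ within $a(x)$ of $f(x)$, matching the inequality above.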
If $x \notin \mathbb{Q}$, then $\sum_{q \in \mathbb{Q},\, q \le x} a(q) = \sum_{q \in \mathbb{Q},\, q < x} a(q)$, so the jump term above disappears.
For a sequence $\delta_n \downarrow 0$, one can show that $f(x-\delta_n)$ is increasing with supremum $\sum_{q \in \mathbb{Q},\, q < x} a(q) = f(x)$, and apply the monotone convergence theorem again to conclude that $f$ is left-continuous at every irrational $x$.
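The irrational case can be illustrated with the same kind of truncated model (again with assumed weights $a(q_n) = 2^{-n}$ over an enumeration of the rationals in $(0,1)$): as $\delta \downarrow 0$, the values $f(x-\delta)$ increase and reach $f(x)$.

```python
import math
from fractions import Fraction
from itertools import count

def rationals():
    # Enumerate the distinct rationals in (0, 1) by increasing denominator.
    seen = set()
    for d in count(2):
        for p in range(1, d):
            q = Fraction(p, d)
            if q not in seen:
                seen.add(q)
                yield q

gen = rationals()
weights = {next(gen): 2.0 ** -(n + 1) for n in range(200)}  # a(q_n) = 2**-n

def f(x):
    # Truncation of f(x) = sum over rational q <= x of a(q).
    return sum(w for q, w in weights.items() if q <= x)

x = math.sqrt(2) / 2  # irrational, so no weight sits exactly at x
vals = [f(x - d) for d in (1e-2, 1e-4, 1e-6, 1e-8)]
assert vals == sorted(vals)          # f(x - delta) increases as delta shrinks
assert abs(f(x) - vals[-1]) < 1e-12  # and it reaches f(x): no jump at x
```

In the truncation only finitely many rationals carry weight, so $f$ is locally constant just below the irrational $x$; in the full model the increasing values $f(x-\delta_n)$ converge to $f(x)$ by the monotone convergence argument above.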