The following problem is from a senior undergraduate analysis course:
We are given a continuous function $g:\mathbb{R}\rightarrow \mathbb{R}$. Assume that $\mathbb{R}$ contains a countably infinite subset $G$ such that $\int_{a}^{b}g(x)\,dx=0$ whenever $a$ and $b$ are not in $G$. Prove that $g$ is the zero function.
My attempt: Without loss of generality, I assume that $g$ has only one positive part and one negative part over $\mathbb{R}$. I split $g$ into two functions: $g_{+}(x)$, which equals $g(x)$ where $g(x)$ is positive and $0$ elsewhere, and $g_{-}(x)$, which equals $g(x)$ where $g(x)$ is negative and $0$ elsewhere. I write $A_{+}$ for the subset of $\mathbb{R}$ where $g$ is positive and $A_{-}$ for the subset where $g$ is negative. I am now trying to prove that $\int_{a}^{b}g_{+}(x)\,dx=0$ for any $a<b$ in $A_{+}$ (this is clear when $a$ and $b$ are not in $G$, but the problem is when one or both of them lie in $G$); since $g_{+}$ is continuous and $g_{+}(x)\geq 0$ for all $x\in\left[a,b\right]$, a vanishing integral would force $g_{+}(x)=0$ for all $x\in\left[a,b\right]$. The same argument applies to the negative part. So, I am stuck at this point.
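One direction I have considered for the endpoints that lie in $G$, though I am not sure it is justified: since $G$ is countable, $\mathbb{R}\setminus G$ is dense in $\mathbb{R}$ (every interval is uncountable, so it must contain points outside $G$), and hence any $a,b\in G$ can be approached by sequences $a_{n}\rightarrow a$ and $b_{n}\rightarrow b$ with $a_{n},b_{n}\notin G$. Setting $F(x)=\int_{0}^{x}g(t)\,dt$, which is continuous because $g$ is continuous, I would expect

$$\int_{a}^{b}g(x)\,dx=F(b)-F(a)=\lim_{n\rightarrow\infty}\bigl(F(b_{n})-F(a_{n})\bigr)=\lim_{n\rightarrow\infty}\int_{a_{n}}^{b_{n}}g(x)\,dx=0,$$

but I am not sure whether this limiting step is valid, or how it connects with the decomposition above.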
Can anyone tell me how to move forward and solve the problem? Also, if anyone has an easier way to solve it, please share.