It can be shown (using the law of total probability)* that $$ \frac{1}{{2\pi \sigma ^2 }}\int_{ - \infty }^\infty {\int_{ y-d }^{y+d} {\exp \bigg( - \frac{{x^2 }}{{2\sigma ^2 }}\bigg)\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg) {\rm d}x} \,{\rm d}y} = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg), $$ where $\Phi$ is the distribution function of the ${\rm N}(0,1)$ distribution. The right-hand side is maximized at $\mu = 0$: it is the integral of the ${\rm N}(0,1)$ pdf over the fixed-length interval $[\frac{{\mu - d}}{{\sqrt {2\sigma ^2 } }},\frac{{\mu + d}}{{\sqrt {2\sigma ^2 } }}]$, and such an integral is largest when the interval is centered at the origin. Hence a necessary condition for your inequality to hold is $$ \Phi \bigg(\frac{{d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{-d}}{{\sqrt {2\sigma^ 2} }}\bigg) > 1 - p. $$ Conversely, if this condition is satisfied, then your inequality holds with $\mu=0$.
To summarize: The inequality holds for some $\mu \in \mathbb{R}$ if and only if it holds for $\mu=0$; the inequality for $\mu = 0$ is equivalent to $$ \Phi \bigg(\frac{{d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{-d}}{{\sqrt {2\sigma^ 2} }}\bigg) > 1 - p. $$
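As a sanity check, the displayed identity can be verified numerically. In the sketch below (plain Python; the values of $\sigma$, $\mu$, $d$ are arbitrary choices, not from the question), the inner $x$-integral is evaluated exactly in terms of $\Phi$ and the outer $y$-integral by the midpoint rule on a truncated range:

```python
import math

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Arbitrary illustrative values (assumptions, not from the question).
sigma, mu, d = 1.3, 0.7, 0.9

def lhs(n=200_000, half_width=10.0):
    """Left-hand side: outer y-integral by the midpoint rule;
    the inner x-integral of the N(0, sigma^2) pdf over [y-d, y+d]
    is written exactly as a difference of Phi values."""
    lo, hi = mu - half_width * sigma, mu + half_width * sigma
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * h
        inner = Phi((y + d) / sigma) - Phi((y - d) / sigma)
        outer_pdf = math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) \
                    / math.sqrt(2 * math.pi * sigma ** 2)
        total += inner * outer_pdf * h
    return total

rhs = Phi((mu + d) / math.sqrt(2 * sigma ** 2)) \
    - Phi((mu - d) / math.sqrt(2 * sigma ** 2))

print(lhs(), rhs)  # the two values agree to high precision
```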
EDIT (in view of your comment below Sasha's answer): Assume that the necessary condition above is satisfied. The function $f$ defined by $$ f(\mu ) = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg) $$ is strictly decreasing on $[0,\infty)$, with $f(\mu) \to 0$ as $\mu \to \infty$, and by our assumption $f(0) > 1-p$. So if you want a $\mu > 0$ with $f(\mu) \approx 1-p$, find $\mu_1, \mu_2 > 0$ such that $f(\mu_1) > 1-p > f(\mu_2)$ and $f(\mu_1) - f(\mu_2) \approx 0$ (for example, by repeatedly halving a bracketing interval). Then $f(\mu) \approx 1-p$ for every $\mu \in (\mu_1,\mu_2)$.
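Since $f$ is strictly decreasing on $[0,\infty)$, the bracketing procedure just described is ordinary bisection. A minimal sketch in Python (the values of $\sigma$, $d$, $p$ are hypothetical; any choice satisfying the necessary condition $f(0) > 1-p$ works):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical values; they satisfy f(0) > 1 - p below.
sigma, d, p = 1.0, 1.0, 0.6
target = 1.0 - p

def f(mu):
    s = math.sqrt(2.0 * sigma ** 2)
    return Phi((mu + d) / s) - Phi((mu - d) / s)

assert f(0.0) > target  # the necessary condition from the answer

# Bracket the root: f(mu1) > target > f(mu2), f decreasing on [0, inf).
mu1, mu2 = 0.0, 1.0
while f(mu2) >= target:
    mu2 *= 2.0

# Bisection: halve the bracket until f(mu1) - f(mu2) is negligible.
for _ in range(100):
    mid = 0.5 * (mu1 + mu2)
    if f(mid) > target:
        mu1 = mid
    else:
        mu2 = mid

mu_star = 0.5 * (mu1 + mu2)
print(mu_star, f(mu_star))  # f(mu_star) is essentially 1 - p
```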
* EDIT: Derivation of the first equation above. Denote the left-hand side of that equation by $I$. First write $I$ as $$ I = \int_{ - \infty }^\infty {\bigg[\int_{y - d}^{y + d} {\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{x^2 }}{{2\sigma ^2 }}\bigg){\rm d}x} \bigg]\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg){\rm d}y} . $$ Then $$ I = \int_{ - \infty }^\infty {{\rm P}( - d \le X - y \le d)\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg){\rm d}y} , $$ where $X$ is a ${\rm N}(0,\sigma^2)$ random variable. If $Y$ is a ${\rm N}(\mu,\sigma^2)$ random variable independent of $X$, then, by the law of total probability, $$ {\rm P}( - d \le X - Y \le d) = \int_{ - \infty }^\infty {{\rm P}( - d \le X - Y \le d|Y = y)f_Y (y)\,{\rm d}y} = I, $$ where $f_Y$ is the pdf of $Y$, given by $$ f_Y (y) = \frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg), $$ and where for the last equality ($\int_{ - \infty }^\infty \cdot =I$) we also used the independence of $X$ and $Y$. Now, $X-Y \sim {\rm N}(-\mu,2\sigma^2)$; hence $$ \frac{{(X - Y) - ( - \mu )}}{{\sqrt {2\sigma ^2 } }} \sim {\rm N}(0,1), $$ and, in turn, $$ I = {\rm P}\bigg(\frac{{ - d - ( - \mu )}}{{\sqrt {2\sigma ^2 } }} \le Z \le \frac{{d - ( - \mu )}}{{\sqrt {2\sigma ^2 } }}\bigg) = {\rm P}\bigg(\frac{{\mu - d}}{{\sqrt {2\sigma ^2 } }} \le Z \le \frac{{\mu + d}}{{\sqrt {2\sigma ^2 } }}\bigg), $$ where $Z \sim {\rm N}(0,1)$. Thus, finally, $$ I = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg). $$
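The key step of this derivation, $I = {\rm P}(-d \le X - Y \le d)$ with $X \sim {\rm N}(0,\sigma^2)$ and $Y \sim {\rm N}(\mu,\sigma^2)$ independent, is easy to check by simulation. A Monte Carlo sketch (illustrative parameter values, not from the question):

```python
import math
import random

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative values (assumptions, not from the question).
sigma, mu, d = 2.0, -0.4, 1.5
random.seed(0)

# Simulate independent X ~ N(0, sigma^2) and Y ~ N(mu, sigma^2)
# and estimate P(-d <= X - Y <= d) by the empirical frequency.
n = 400_000
hits = 0
for _ in range(n):
    x = random.gauss(0.0, sigma)
    y = random.gauss(mu, sigma)
    if -d <= x - y <= d:
        hits += 1
estimate = hits / n

# Closed form derived above: X - Y ~ N(-mu, 2 sigma^2).
closed_form = Phi((mu + d) / math.sqrt(2 * sigma ** 2)) \
            - Phi((mu - d) / math.sqrt(2 * sigma ** 2))

print(estimate, closed_form)  # agree up to Monte Carlo error (~0.001)
```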