Yes, your guess is correct, but we can say a lot more than that. If $\sigma,\mu$ are continuous, $X_0=x$ and $\sigma > 0$ on some connected neighbourhood $U$ of $x$, then $\mathbb{P}(X_t\in V) > 0$ for all positive times $t$ and nonempty open sets $V\subseteq U$. We can go even further than this. If $\gamma\colon[0,t^\prime]\to\mathbb{R}$ is continuous with $\gamma(0)=x$ and $\sigma(\gamma(t)) > 0$ for all $t\le t^\prime$, then $\mathbb{P}(\sup_{t\le t^\prime}\vert X_t-\gamma(t)\vert < \epsilon) > 0$ for every positive $\epsilon$. Stated another way, the support of the paths of $X$ (over a finite time interval) contains all continuous paths starting from $x$ along which $\sigma$ is positive. These statements also hold in the more general case of diffusions in $\mathbb{R}^n$.
In fact, it is not necessary to assume that $X$ is a diffusion at all, only that it can be expressed as a stochastic integral. That is, $\sigma,\mu$ do not have to be specified as functions of $X$. In the $n$-dimensional case, we can write
$$dX^i_t=\sum_{j=1}^m\sigma^{ij}_t\,dW^j_t+\mu^i_t\,dt.$$
Here, $X=(X^1,\ldots,X^n)$ is an $n$-dimensional process with $X_0=x\in\mathbb{R}^n$ and $W=(W^1,\ldots,W^m)$ is an $m$-dimensional Brownian motion. You can consider $\sigma^{ij}_t$ and $\mu^i_t$ to be functions of $X_t$ if you like, but that is not necessary. All that matters is that they are predictable processes (which includes all continuous and adapted processes).
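For anyone who wants to experiment numerically, here is a minimal Euler–Maruyama sketch of such a process (the function names and the particular coefficients below are my own illustrative choices, not part of the statement; for convenience they are taken as functions of $(t,X_t)$, although the results do not require this):

```python
import numpy as np

def euler_maruyama(x0, sigma, mu, t_prime, n_steps, rng):
    """Simulate dX^i_t = sum_j sigma^{ij}_t dW^j_t + mu^i_t dt by Euler-Maruyama."""
    n = len(x0)
    dt = t_prime / n_steps
    X = np.empty((n_steps + 1, n))
    X[0] = x0
    for k in range(n_steps):
        t = k * dt
        S = sigma(t, X[k])                                  # (n, m) volatility matrix
        dW = rng.normal(0.0, np.sqrt(dt), size=S.shape[1])  # m Brownian increments
        X[k + 1] = X[k] + S @ dW + mu(t, X[k]) * dt
    return X

# Illustrative coefficients: sigma is nonsingular everywhere, mu is a constant drift.
rng = np.random.default_rng(0)
sigma = lambda t, x: (1.0 + 0.5 * np.sin(x[0])) * np.eye(2)
mu = lambda t, x: np.array([0.1, -0.1])
path = euler_maruyama(np.zeros(2), sigma, mu, t_prime=1.0, n_steps=1000, rng=rng)
```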
First, if $\mu,\sigma$ satisfy suitable boundedness conditions whenever $X$ is close to $x$, then there is a positive probability of $X$ remaining arbitrarily close to $x$. I will use $\Vert\cdot\Vert$ to denote the Euclidean norms on $\mathbb{R}^n$ and on the $n\times n$ matrices.
1) Suppose there exists $K > 0$ such that $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert\le K$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert\le K$ and $\Vert\sigma_t\sigma^T_t\Vert\le K$ whenever $\Vert X_t - x\Vert < \delta$ (for some positive $\delta$). Then, $\mathbb{P}(\sup_{t\le t^\prime}\Vert X_t-x\Vert < \epsilon) > 0$ for all positive $\epsilon$.
In the one-dimensional case, we need only suppose that $\vert\sigma^{-2}\mu\vert\le K$ and $\sigma^2\le K$ (there is no need to assume that $\sigma$ is bounded away from zero). I'll prove (1) in a moment. First, it has the following consequence.
2) Let $\gamma\colon[0,t^\prime]\to\mathbb{R}^n$ be continuous such that $\gamma(0)=x$ and there is $K > 0$ with $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert\le K$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert\le K$ and $\Vert\sigma_t\sigma^T_t\Vert\le K$ whenever $\Vert X_t-\gamma(t)\Vert < \delta$ (for some positive $\delta$). Then, $\mathbb{P}(\sup_{t\le t^\prime}\Vert X_t-\gamma(t)\Vert < \epsilon) > 0$ for all positive $\epsilon$.
In particular, these conditions are satisfied whenever $\sigma_t=\sigma(X_t)$, $\mu_t=\mu(X_t)$ are continuous functions of $X_t$ and $\sigma(\gamma(t))\sigma(\gamma(t))^T$ is nonsingular for each $t$, which implies the statements in the first paragraph of this post.
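As a quick sanity check on (2), here is a hedged Monte Carlo sketch in one dimension (the coefficients and the path $\gamma$ are illustrative choices of mine, with $\sigma$ bounded and bounded away from zero); the estimated tube probability comes out strictly positive, as the statement predicts:

```python
import numpy as np

# Monte Carlo estimate of P(sup_{t <= t'} |X_t - gamma(t)| < eps).
rng = np.random.default_rng(1)
t_prime, n_steps, n_paths, eps = 1.0, 500, 50000, 0.75
dt = t_prime / n_steps
ts = np.linspace(0.0, t_prime, n_steps + 1)
gamma = 0.5 * np.sin(np.pi * ts)          # target path with gamma(0) = 0 = X_0

X = np.zeros(n_paths)
in_tube = np.ones(n_paths, dtype=bool)
for k in range(n_steps):
    sig = 1.0 + 0.2 * np.cos(X)           # sigma(X): lies in [0.8, 1.2], never zero
    mu = -0.5 * X                         # mu(X): a mild mean-reverting drift
    X = X + sig * rng.normal(0.0, np.sqrt(dt), n_paths) + mu * dt
    in_tube &= np.abs(X - gamma[k + 1]) < eps

print("estimated tube probability:", in_tube.mean())  # strictly positive
```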
To see that (2) follows from (1), consider the case where $\gamma$ is continuously differentiable (this is enough, as all continuous functions can be uniformly approximated by smooth ones). Supposing that the requirements of (2) are met, look at $\tilde X_t=X_t-\gamma(t)$. This satisfies the requirements of (1) with $\mu$ replaced by $\mu-\gamma^\prime$. So, by (1), $\tilde X$ has positive probability of remaining arbitrarily close to 0, and $X$ has positive probability of remaining arbitrarily close to $\gamma$.
Now, let's prove (1). We can suppose that $\epsilon < \delta$ and, by stopping $X$ as soon as $\Vert X-x\Vert$ hits $\delta$, we can suppose that $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert$ and $\Vert\sigma_t\sigma_t^T\Vert$ are always bounded by $K$. Then, $\nu_t\equiv(\sigma_t\sigma_t^T)^{-1}\mu_t$ is a predictable process with $\Vert\nu\Vert\le K$ and $\mu_t=\sigma_t\sigma_t^T\nu_t$. Define a new measure $\mathbb{Q}$ by the Girsanov transform
$$\frac{d\mathbb{Q}}{d\mathbb{P}}=\exp\left(-\sum_{j=1}^m\int_0^{t^\prime}(\sigma^T_t\nu_t)^j\,dW^j_t-\frac12\int_0^{t^\prime}\nu_t^T\sigma_t\sigma^T_t\nu_t\,dt\right).$$
As $\nu_t^T\sigma_t\sigma^T_t\nu_t$ is bounded, Novikov's condition holds, so $\mathbb{Q}$ is a measure equivalent to $\mathbb{P}$ and, by the theory of Girsanov transforms, $\tilde W_t=W_t+\int_0^t\sigma^T_s\nu_s\,ds$ is a $\mathbb{Q}$-Brownian motion. As we have
$$dX^i_t=\sum_{j=1}^m\sigma^{ij}_t\,d\tilde W^j_t,$$
this reduces the problem to the case where $\mu$ is zero. So, let's suppose that $\mu=0$ from now on.
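The effect of the measure change can be checked numerically in the simplest one-dimensional setting with constant $\sigma,\mu$ (a toy example of mine, where $\nu=\mu/\sigma^2$ and $d\mathbb{P}/d\mathbb{Q}=\exp((\mu/\sigma)\tilde W_{t^\prime}-\frac12(\mu/\sigma)^2t^\prime)$): expectations under $\mathbb{P}$ agree with reweighted expectations of the driftless process under $\mathbb{Q}$.

```python
import numpy as np

rng = np.random.default_rng(2)
x, sigma, mu, t_prime, n = 0.0, 0.8, 1.5, 1.0, 10**6
f = lambda y: np.maximum(y, 0.0)     # any test function of the terminal value

# Under P: X_{t'} = x + sigma*W_{t'} + mu*t'.
W = rng.normal(0.0, np.sqrt(t_prime), n)
est_P = f(x + sigma * W + mu * t_prime).mean()

# Under Q: X_{t'} = x + sigma*Wtilde_{t'} is driftless; reweight by dP/dQ.
Wt = rng.normal(0.0, np.sqrt(t_prime), n)
dP_dQ = np.exp((mu / sigma) * Wt - 0.5 * (mu / sigma) ** 2 * t_prime)
est_Q = (dP_dQ * f(x + sigma * Wt)).mean()

print(est_P, est_Q)                  # the two estimates should agree closely
```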
In the one-dimensional case, where $dX_t=\sigma_t\,dW_t$, it is enough to suppose that $\sigma_t^2\le K$. This is because a stochastic time change (Dambis–Dubins–Schwarz) can be used to write the local martingale $X$ as $X_t=x+B_{A_t}$, where $B$ is a Brownian motion with respect to its natural filtration and $A_t=\int_0^t\sigma_s^2\,ds\le Kt$. Then, $\sup_{t\le t^\prime}\vert X_t-x\vert < \epsilon$ whenever $\sup_{t\le Kt^\prime}\vert B_t\vert < \epsilon$. However, standard Brownian motion has positive probability of remaining within any given distance $\epsilon$ of the origin over a bounded time interval (see the answers to this math.SE question), so $\{\sup_{t\le t^\prime}\vert X_t-x\vert < \epsilon\}$ has positive probability.
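For what it's worth, this confinement probability is easy to estimate by simulation and to compare against the classical series $\mathbb{P}(\sup_{t\le T}\vert B_t\vert<\epsilon)=\frac{4}{\pi}\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}e^{-(2k+1)^2\pi^2T/(8\epsilon^2)}$ (the particular numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T, eps, n_steps, n_paths = 1.0, 1.0, 1000, 10000
dt = T / n_steps
increments = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(increments, axis=1)              # discretized Brownian paths
mc = (np.abs(B).max(axis=1) < eps).mean()      # Monte Carlo estimate

k = np.arange(50)                              # classical series expansion
series = (4 / np.pi) * np.sum((-1) ** k / (2 * k + 1)
                              * np.exp(-(2 * k + 1) ** 2 * np.pi ** 2 * T
                                       / (8 * eps ** 2)))
print(mc, series)                              # both approximately 0.37
```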
In the multidimensional case, we also need to assume that $\Vert(\sigma\sigma^T)^{-1}\Vert\le K$. We can then reduce to the one-dimensional case. Setting $Y=\Vert X-x\Vert^2$, integration by parts gives
$$\begin{align}
dY_t&=\tilde\sigma_t\,d\tilde W_t+\tilde\mu_t\,dt,\\
\tilde\sigma_t&=2\sqrt{(X_t-x)^T\sigma_t\sigma_t^T(X_t-x)},\\
\tilde\mu_t&={\rm Tr}(\sigma_t\sigma^T_t),\\
\tilde W_t&=2\sum_{i,j}\int_0^t1_{\{X_s\not=x\}}\tilde\sigma_s^{-1}(X^i_s-x^i)\sigma^{ij}_s\,dW^j_s.
\end{align}$$
You can check that $\tilde W$ has quadratic variation $[\tilde W]_t=t$ so that, by Lévy's characterization of Brownian motion, $\tilde W$ is a standard Brownian motion. Also, $\tilde\sigma_t$, $\tilde\sigma_t^{-1}$ and $\tilde\mu_t$ are bounded while $Y_t$ lies in any given compact subset of $(0,\infty)$. Let $\tau$ be the first time at which $Y$ hits $\epsilon^2/2$. If $\tau > t^\prime$ then $Y$ remains below $\epsilon^2/2$ up to time $t^\prime$ and we are done. Otherwise, applying (1) in the one-dimensional case to $Y_{\tau+t}$, there is a positive probability that $\sup_{t\le t^\prime}\vert Y_{\tau+t}-\epsilon^2/2\vert < \epsilon^2/2$. On this event, $Y$ remains below $\epsilon^2$ up to time $\tau+t^\prime\ge t^\prime$, so $\sup_{t\le t^\prime}\Vert X_t-x\Vert < \epsilon$.
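The Lévy characterization step can also be verified numerically: the row vector $2\tilde\sigma_t^{-1}(X_t-x)^T\sigma_t$ defining the increments of $\tilde W$ has unit Euclidean norm, so the computed quadratic variation comes out equal to $t$. A minimal sketch, with an arbitrary nonsingular $\sigma$ of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps = 1.0, 20000
dt = T / n_steps
x = np.array([1.0, 0.0])
X = x + np.array([1.0, 0.0])            # start away from x, so sigma_tilde > 0

def sigma_of(y):                        # an arbitrary nonsingular volatility matrix
    return np.array([[1.0, 0.2], [0.0, 0.8 + 0.1 * np.sin(y[0])]])

qv = 0.0                                # running quadratic variation of W_tilde
for _ in range(n_steps):
    S = sigma_of(X)
    dW = rng.normal(0.0, np.sqrt(dt), 2)
    v = 2.0 * (X - x)                   # the factor 2(X_t - x)
    sig_tilde = np.sqrt(v @ S @ S.T @ v)    # = 2 sqrt((X-x)^T sigma sigma^T (X-x))
    qv += ((v @ S @ dW) / sig_tilde) ** 2   # increment of W_tilde, squared
    X = X + S @ dW                      # driftless Euler step for X
print(qv, "vs", T)                      # [W_tilde]_T = T, matching Levy's criterion
```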
Finally, in the $n$-dimensional case, I'll give an example showing that $X$ need not have a positive probability of remaining close to its starting point, even when $\mu=0$ and $\sigma$ is bounded. Consider the two-dimensional case, $n=2$, with $m=1$ and $\sigma_t=R\hat X_t$, where $R$ is the linear map giving a rotation by 90 degrees and $\hat x\equiv1_{\{x\not=0\}}x/\Vert x\Vert$. So,
$$dX_t=R\hat X_t\,dW_t$$
for a Brownian motion $W$. Then, $X^T\,dX=0$ and integration by parts shows that if $X_0\not=0$ then $\Vert X\Vert$ increases deterministically,
$$\begin{align}
\Vert X_t\Vert^2&=\Vert X_0\Vert^2+\sum_{i=1}^2[X^i]_t=\Vert X_0\Vert^2+\int_0^t\hat X_s^TR^TR\hat X_s\,ds\\
&=\Vert X_0\Vert^2+t,
\end{align}$$
using that $R^TR$ is the identity and $\Vert\hat X_s\Vert=1$ (which holds for all $s$, as $\Vert X\Vert$ is nondecreasing and $X_0\not=0$).
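This is easy to see in simulation too (a minimal sketch; in the Euler discretization, $X^T\,dX=0$ holds exactly at each step, so $\Vert X\Vert^2$ grows by $\sum_k\Delta W_k^2\approx t$):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_steps = 2.0, 100000
dt = T / n_steps
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
X = np.array([1.0, 0.0])                  # X_0 nonzero

for _ in range(n_steps):
    Xhat = X / np.linalg.norm(X)          # ||X|| never decreases here, so this is safe
    X = X + R @ Xhat * rng.normal(0.0, np.sqrt(dt))
print(X @ X, "vs", 1.0 + T)               # ||X_T||^2 = ||X_0||^2 + T
```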