OK, so let us put the equation on one line: $ f(x) = 1_{A^c}(x)+1_{A^c}(x)\sum\limits_{y\in \Omega}P(x,y)f(y).\quad (1) $ Here $1_{A^c}(x)$ is the indicator function of $A^c = \Omega\setminus A$, i.e. $1_{A^c}(x) = 1$ for $x\in A^c$ and $1_{A^c}(x) = 0$ for $x\in A$. Let us now consider the homogeneous version of the very same equation (which I happened to investigate in my first post): $ g(x) = 1_{A^c}(x)\sum\limits_{y\in \Omega}P(x,y)g(y). \quad(2) $
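To see both equations concretely on a finite state space: restricted to $A^c$, (1) reads $f = \mathbf 1 + P'f$, i.e. $(I-P')f = \mathbf 1$, and (2) reads $(I-P')g = 0$, where $P'$ keeps only the rows and columns of $P$ indexed by $A^c$. Here is a minimal numerical sketch (Python with NumPy; the chain $P$ and the set $A$ are made up for illustration):

```python
import numpy as np

# Made-up 4-state chain (states 0..3) with target set A = {0}.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
A = {0}
Ac = [x for x in range(len(P)) if x not in A]

# P' = submatrix of P with rows and columns indexed by A^c.
Pp = P[np.ix_(Ac, Ac)]
f = np.zeros(len(P))                 # f = 0 on A, by the indicator in (1)
f[Ac] = np.linalg.solve(np.eye(len(Ac)) - Pp, np.ones(len(Ac)))
print(f)                             # [0., 6., 10., 12.] = E_x[T_A]
```

Here the solve succeeds because $I - P'$ happens to be invertible; when it is not is exactly the uniqueness question discussed below.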
Note again that the equation is linear, so the solutions of the homogeneous equation (2) form a linear space. Of course, $g^* = 0$ is a solution of (2). In the simplest case, when $A^c$ is finite, equation (2) has a unique bounded solution iff $A^c$ has no absorbing subsets, since the maximal solution $g(x)$ in $[0,1]$ is the probability that the chain started at $x$ never leaves $A^c$.
Equivalently, this holds iff the determinant of $I - P'$ is non-zero, where $P'$ is the submatrix of $P$ built from the rows and columns indexed by $A^c$ (restricted to $A^c$, (2) reads $g = P'g$). This led me to the idea that non-uniqueness for (1) in this case depends only on the absorbing subsets of $A^c$. About the existence of a solution of (1) we 'may not care', since there is at least one function satisfying this equation, namely $f(x)=\mathsf E_x[T_A]$. On the other hand, such a function takes values in the extended real line, and in that case uniqueness of the solution is unclear to me.
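Under the same made-up setup as above, the determinant test is a one-liner; a sketch (the helper name is mine, and note the test is on $I - P'$, not on $P'$ itself):

```python
import numpy as np

def homogeneous_is_trivial(P, Ac):
    """True iff (2) restricted to A^c, i.e. g = P'g, forces g = 0.

    Non-zero solutions exist iff 1 is an eigenvalue of P',
    i.e. iff det(I - P') = 0, which (for finite A^c) happens
    exactly when A^c contains an absorbing subset.
    """
    Pp = np.asarray(P)[np.ix_(Ac, Ac)]
    return abs(np.linalg.det(np.eye(len(Ac)) - Pp)) > 1e-12
```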
Consider again the previous example. Let $P$ be the identity matrix of dimension two, so $p(1,1) = 1$, $p(1,2) = 0$ and $p(2,1) = 0$, $p(2,2) = 1$, and let us solve (1) for $A = \{1\}$. Writing out the equation for this case, you get $f(1) = 0$ and $f(2) = 1+f(2)$, so the solution of the latter equation is either $-\infty$ or $\infty$, though I am not very experienced in solving such equations. Of course, you can say that the solution is unique if you are looking only for non-negative solutions.
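A quick numerical check of this example (zero-based indices, so state $2$ becomes index $1$):

```python
import numpy as np

P = np.eye(2)          # p(1,1) = p(2,2) = 1
Ac = [1]               # A = {state 1} is index 0, so A^c = {state 2} = index 1
Pp = P[np.ix_(Ac, Ac)]                  # P' = [[1.0]]
print(np.linalg.det(np.eye(1) - Pp))    # 0.0: I - P' is singular,
                                        # so f(2) = 1 + f(2) has no finite solution
```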
The point is the following: the theory of uniqueness for such equations is based on the fact that the solutions of the homogeneous equation form a vector space. Hence, for any solution $f^*$ of (1) and any non-zero solution $g^*$ of (2), you can claim that $f^*+\alpha g^*$ is a solution of (1) different from $f^*$. On the other hand, if $f^*$ takes infinite values, then $f^*+\alpha g^*$ can coincide (at least in symbols) with $f^*$.
Let us consider an arbitrary $A^c$. What does it mean that (2) has a non-zero solution? The probability of never leaving $A^c$ is the maximal solution of (2) which lies in $[0,1]$; let us denote it by $g^*$. So, at each point where $g^*>0$ it holds that $\mathsf E_x[T_A] = \infty$ (is it clear? with positive probability the chain stays in $A^c$ forever). For such points, $f^* = \infty$ and hence $f^*(x)+\alpha g^*(x) =\infty = f^*(x)$, though this statement is quite informal for me.
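Numerically, $g^*$ can be approximated by iterating, since $(P'^{\,n}\mathbf 1)(x) = \mathsf P_x(T_A > n)$ decreases monotonically to $g^*(x)$; a sketch, with the step count an arbitrary truncation rather than a rigorous error bound:

```python
import numpy as np

def survival_probability(P, Ac, n_steps=1000):
    """Approximate g*(x) = P_x(T_A = infinity) for x in A^c.

    (P'^n 1)(x) = P_x(T_A > n) decreases monotonically to g*(x),
    the maximal solution of (2) lying in [0, 1].
    """
    Pp = np.asarray(P)[np.ix_(Ac, Ac)]
    g = np.ones(len(Ac))
    for _ in range(n_steps):
        g = Pp @ g
    return g

# Identity-matrix example from above: state 2 never leaves A^c,
# so g* = 1 there and E_2[T_A] = infinity.
print(survival_probability(np.eye(2), [1]))   # [1.]
```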
I cannot give you a complete formal answer to your question, since I have always looked only for bounded solutions of such equations. The previous paragraph suggests that the solution is indeed unique, since non-uniqueness in (2) appears only at points where $f^*$ is infinite and hence does not influence $f^*$. On the other hand, I did not consider the case when (2) may have unbounded solutions. E.g., by Lebesgue's convention $0\cdot\infty = 0$, so one can say that $g = 1_{A^c}\cdot\infty$ satisfies (2) (provided that from any $x\in A^c$ there is a transition to $A^c$), as well as $g=0$.
Anyway, I think you need a person more experienced in solving linear equations with unbounded solutions. The other advice is the following: if you want to calculate the solution, you will certainly need to cut off the points where $\mathsf E_x[T_A] = \infty$. Denoting this set of points by $B$, you will have something like $ f(x) = \begin{cases}0,&\text{ if }x\in A, \\ \infty,&\text{ if }x\in B, \\ 1+\sum\limits_{y\in\Omega}P(x,y)f(y), &\text{ if }x\in (A\cup B)^c. \end{cases} $
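Here is a sketch of this cut-off for a finite chain (Python with NumPy, function names mine). It relies on the fact that, for finite $\Omega$, $B$ is exactly the set of states in $A^c$ that can reach some state from which $A$ is unreachable; from every other state the hitting time has a geometric tail, so $I - P'$ restricted to $(A\cup B)^c$ is invertible:

```python
import numpy as np

def _can_reach(P, targets, tol=1e-12):
    """States from which some state in `targets` is reachable (targets included)."""
    n = len(P)
    reach = set(targets)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            if x not in reach and any(P[x, y] > tol and y in reach for y in range(n)):
                reach.add(x)
                changed = True
    return reach

def expected_hitting_times(P, A):
    """Solve the cut-off system: f = 0 on A, f = inf on B, and
    (I - P')f = 1 on (A u B)^c, where B = {x in A^c : E_x[T_A] = inf}."""
    P = np.asarray(P)
    n = len(P)
    C = [x for x in range(n) if x not in _can_reach(P, A)]   # A unreachable from C
    B = [x for x in _can_reach(P, C) if x not in A]          # can drift into C
    good = [x for x in range(n) if x not in A and x not in B]
    f = np.zeros(n)
    f[B] = np.inf
    if good:
        Pp = P[np.ix_(good, good)]   # no transitions lead from good states into B
        f[good] = np.linalg.solve(np.eye(len(good)) - Pp, np.ones(len(good)))
    return f

print(expected_hitting_times(np.eye(2), {0}))   # [0., inf], the example above
```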
I haven't dealt with the problem of the average hitting time myself, so maybe there are smarter ways to solve it. Regards.