Consider the following two-player game: the action set of player 1 (P1) is $U_1$ and the action set of player 2 (P2) is $U_2$; their utility functions are $g_1(u_1,u_2)$ and $g_2(u_1,u_2)$.
First, P1 decides on his action and announces it to P2, so P2 can make his decision using this knowledge. Whenever P1 plays an action $u_1\in U_1$, it is optimal for P2 to choose some $u_2\in U_2^*(u_1)$, where the subset $U_2^*(u_1)\subset U_2$ is defined by $ U_2^*(u_1) = Arg\max_{u_2\in U_2}g_2(u_1,u_2). $ This is clear, since P2 wants to maximize his utility $g_2$ given the action $u_1$ of P1.
P1 knows that whenever he plays an action $u_1$, P2 will respond with some $u_2\in U_2^*(u_1)$. If for some $u_1$ the set $U_2^*(u_1)$ contains more than one element, P1 does not know exactly which action P2 will choose. Suppose that P1 considers the worst-case scenario; then it is optimal for him to choose an action $ u^*_1 \in U^*_1 = Arg\max_{u_1\in U_1}\left(\min\limits_{u_2\in U_2^*(u_1)}g_1(u_1,u_2)\right). $
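Since we may take $U_1$ and $U_2$ finite (see below), the whole construction can be computed by brute force. Here is a minimal sketch in Python, assuming the payoffs are stored as dictionaries `g1[(u1, u2)]` and `g2[(u1, u2)]`; all names are my own and purely illustrative.

```python
def follower_best_responses(U1, U2, g2):
    """For each leader action u1, the set U_2^*(u1) of follower actions maximizing g2(u1, .)."""
    best = {}
    for u1 in U1:
        m = max(g2[(u1, u2)] for u2 in U2)
        best[u1] = {u2 for u2 in U2 if g2[(u1, u2)] == m}
    return best

def leader_optimal_actions(U1, U2, g1, g2):
    """The set U_1^* of leader actions maximizing the worst-case payoff
    min over u2 in U_2^*(u1) of g1(u1, u2)."""
    best2 = follower_best_responses(U1, U2, g2)
    worst = {u1: min(g1[(u1, u2)] for u2 in best2[u1]) for u1 in U1}
    m = max(worst.values())
    return {u1 for u1 in U1 if worst[u1] == m}
```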
This gives an optimal strategy for both players. My question is the following: for every $u^*_1\in U^*_1$ and $u^*_2\in U^*_2(u^*_1)$, is the pair $(u^*_1,u^*_2)$ a Nash equilibrium?
For simplicity we can assume that $U_1$ and $U_2$ are finite.
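For a concrete finite game, the question can be tested directly via the standard no-unilateral-deviation condition. Below is a small self-contained sketch of that check; in practice one would feed in a pair $(u^*_1,u^*_2)$ produced by the construction above, and the payoff numbers here are arbitrary, purely for illustration.

```python
def is_nash_equilibrium(U1, U2, g1, g2, u1_star, u2_star):
    """True iff neither player gains from a unilateral deviation away from (u1_star, u2_star)."""
    p1_ok = all(g1[(u1_star, u2_star)] >= g1[(u1, u2_star)] for u1 in U1)
    p2_ok = all(g2[(u1_star, u2_star)] >= g2[(u1_star, u2)] for u2 in U2)
    return p1_ok and p2_ok

# Arbitrary 2x2 payoff tables, only to show the interface.
U1, U2 = {"a", "b"}, {"x", "y"}
g1 = {("a", "x"): 3, ("a", "y"): 1, ("b", "x"): 2, ("b", "y"): 0}
g2 = {("a", "x"): 1, ("a", "y"): 2, ("b", "x"): 0, ("b", "y"): 3}
print(is_nash_equilibrium(U1, U2, g1, g2, "a", "y"))
```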
P.S. I am not very experienced in game theory, so this reasoning is open to criticism, both regarding the argument itself and the terminology.