This is a very fun question, I hope you'll mention where it came from, or what inspired you to ask it.
The answer turns out to be $|n|_2$, the $2$-part of $n$: the largest power of $2$ that divides $n$ (the power of $2$ itself, not just the exponent). For example, $|12|_2 = 4$, so in $\Bbb R^{12}$, the largest such set will have size $4$.
Let me share a bit of notation/background that helps (me, at least) to be more precise. We'll let $B_n$ be the symmetry group of the $n$-dimensional cube (the hyperoctahedral group). This group acts on $\Bbb R^n$ in exactly the way you describe: each element permutes the entries of a vector $\vec{x} = (x_1, x_2, \ldots, x_n)$ and possibly multiplies each entry (independently) by $-1$. We'll denote the set of such vectors by $B_n(\vec{x})$. (Sidenote: the convex hull of $B_n(\vec{x})$ is known in some circles as the type $B$ permutohedron.)
Theorem: The maximum size of a subset of $B_n(\vec{x})$ consisting of mutually orthogonal vectors is $|n|_2$, the $2$-part of $n$. In particular, when $n$ is a power of $2$, such a set forms an orthogonal basis for $\Bbb R^n$.
I don't know if this theorem appears in any literature, so I'll lay out the main ideas at the end of this answer. But first, examples!
Let us take your $n = 4$ example, but make it maximal: The rows of
$$\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & -1 & 4 & -3 \\
3 & -4 & -1 & 2 \\
4 & 3 & -2 & -1
\end{pmatrix},$$
where we list only the indices (so $-2$ really means $-x_2$), form a set of $4$ "generic" orthogonal vectors in $\Bbb R^4$. How was it obtained? I'm glad you asked!
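Before moving on, here is a quick sanity check in Python (my own sketch, not part of the construction itself): each row is a signed index pattern, and the pairwise dot products cancel exactly, term by term, for any choice of $x$.

```python
# Sanity check of the 4x4 example above: each row is a signed index
# pattern, where e.g. -2 stands for -x_2.
rows = [
    [1, 2, 3, 4],
    [2, -1, 4, -3],
    [3, -4, -1, 2],
    [4, 3, -2, -1],
]

def realize(pattern, x):
    """Turn a signed index pattern into an actual vector built from x."""
    return [(1 if i > 0 else -1) * x[abs(i) - 1] for i in pattern]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [3, 5, 7, 11]  # a "generic" vector; the cancellation is exact, term by term
vectors = [realize(r, x) for r in rows]
for i in range(4):
    for j in range(i + 1, 4):
        assert dot(vectors[i], vectors[j]) == 0
print("all 6 pairwise dot products are zero")
```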
First, start with a $2 \times 2$ Hadamard matrix (for all intents and purposes, a matrix with entries $\pm 1$ whose rows/columns are orthogonal). We'll use this one: $\pmatrix{+&+\\+&-}$. For the top quadrants, just copy it. But for the bottom left, flip it upside down, and for the bottom right, flip it upside down and reverse all the signs (it's still a Hadamard matrix, but not a "famous" one, as far as I know):
$$\begin{array}{cc|cc}
+ & + & + & + \\
+ & - & + & - \\\hline
+ & - & - & + \\
+ & + & - & -
\end{array}$$
As you can see, this determines the signs in the "generic" matrix above. But how to arrange the various $x_i$? Well, by applying permutations of the form $(ij)(k \ell)$ to the first row! Luckily for us, there happen to be $3$ of these, precisely as many as we need. This seems like a good idea, because for rows $R_i$ and $R_j$ to be perpendicular, the various $x_k$ must pair up ($+$ with $-$) in all the dot products.
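The flip-and-negate recipe above is easy to express in code (a small sketch of my own): copy the $2 \times 2$ Hadamard matrix into the top half, and build the bottom half from its upside-down flip.

```python
# Build the 4x4 sign matrix from the 2x2 Hadamard matrix H:
# top half = [H | H], bottom half = [flip(H) | -flip(H)].
H = [[1, 1],
     [1, -1]]

flipped = H[::-1]                      # flip upside down (reverse the rows)
top = [row + row for row in H]
bottom = [row + [-s for s in row] for row in flipped]
S4 = top + bottom

# This is exactly the 4x4 sign matrix displayed above...
expected = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, -1, -1, 1],
    [1, 1, -1, -1],
]
assert S4 == expected

# ...and it is still a Hadamard matrix: rows are mutually orthogonal.
for i in range(4):
    for j in range(i + 1, 4):
        assert sum(a * b for a, b in zip(S4[i], S4[j])) == 0
```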
So our generic matrix is really a fusion of two matrices:
$$\begin{array}{cc|cc}
1 & 2 & 3 & 4 \\
2 & 1 & 4 & 3 \\\hline
3 & 4 & 1 & 2 \\
4 & 3 & 2 & 1
\end{array} \qquad \text{and} \qquad
\begin{array}{cc|cc}
+ & + & + & + \\
+ & - & + & - \\\hline
+ & - & - & + \\
+ & + & - & -
\end{array}
$$
I bet we can make a bigger one, in $\Bbb R^8$! One warning before the display: the index rows and the sign rows have to be matched up in the right order, and the order that works is not the most obvious one.
$$\begin{array}{cccc|cccc}
1&2&3&4 & 5&6&7&8 \\
2&1&4&3 & 6&5&8&7 \\
8&7&6&5 & 4&3&2&1 \\
3&4&1&2 & 7&8&5&6 \\ \hline
%
5&6&7&8 & 1&2&3&4 \\
4&3&2&1 & 8&7&6&5 \\
7&8&5&6 & 3&4&1&2 \\
6&5&8&7 & 2&1&4&3 \\
\end{array} \quad \text{and} \quad
\begin{array}{cccc|cccc}
+&+&+&+ & +&+&+&+ \\
+&-&+&- & +&-&+&- \\
+&+&-&- & +&+&-&- \\
+&-&-&+ & +&-&-&+ \\ \hline
%
+&-&-&+ & -&+&+&- \\
+&+&-&- & -&-&+&+ \\
+&-&+&- & -&+&-&+ \\
+&+&+&+ & -&-&-&-
\end{array}$$
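Checking $28$ dot products by hand is error-prone, so here is a verification sketch in Python (my own check, not part of the construction). One caveat worth flagging: orthogonality depends on how the index rows are paired with the sign rows, and the pairing hard-coded below is one that works.

```python
# Verify the R^8 example: index rows (P8) fused with sign rows (S8).
P8 = [
    [1, 2, 3, 4, 5, 6, 7, 8],
    [2, 1, 4, 3, 6, 5, 8, 7],
    [8, 7, 6, 5, 4, 3, 2, 1],
    [3, 4, 1, 2, 7, 8, 5, 6],
    [5, 6, 7, 8, 1, 2, 3, 4],
    [4, 3, 2, 1, 8, 7, 6, 5],
    [7, 8, 5, 6, 3, 4, 1, 2],
    [6, 5, 8, 7, 2, 1, 4, 3],
]
S8 = [
    [+1, +1, +1, +1, +1, +1, +1, +1],
    [+1, -1, +1, -1, +1, -1, +1, -1],
    [+1, +1, -1, -1, +1, +1, -1, -1],
    [+1, -1, -1, +1, +1, -1, -1, +1],
    [+1, -1, -1, +1, -1, +1, +1, -1],
    [+1, +1, -1, -1, -1, -1, +1, +1],
    [+1, -1, +1, -1, -1, +1, -1, +1],
    [+1, +1, +1, +1, -1, -1, -1, -1],
]

x = [2, 3, 5, 7, 11, 13, 17, 19]       # generic vector; cancellation is exact
vecs = [[s * x[p - 1] for s, p in zip(srow, prow)]
        for srow, prow in zip(S8, P8)]
for i in range(8):
    for j in range(i + 1, 8):
        assert sum(a * b for a, b in zip(vecs[i], vecs[j])) == 0
print("all 28 pairwise dot products are zero")
```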
(I've checked both of the above very carefully!) Let's look at $\Bbb R^{12}$ for one final example. I'll just write down the indices.
$$\begin{array}{cccc|cccc|cccc}
1&2&3&4 & 5&6&7&8 &9 &10&11&12\\
2&1&4&3 & 6&5&8&7 &10&9 &12&11\\
3&4&1&2 & 7&8&5&6 &11&12&9 &10\\
4&3&2&1 & 8&7&6&5 &12&11&10& 9
\end{array}$$
For the signs, just set three copies of the $4 \times 4$ sign matrix we used in $\Bbb R^4$ side by side.
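This one is pleasant to verify (again, my own sketch): each block of four coordinates contributes exactly the same cancellation as the $\Bbb R^4$ example, just with shifted variables.

```python
# Verify the R^12 example: the 4x4 index pattern repeated with offsets
# 0, 4, 8, and the 4x4 sign matrix repeated three times.
P4 = [[1, 2, 3, 4], [2, 1, 4, 3], [3, 4, 1, 2], [4, 3, 2, 1]]
S4 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, -1, -1, 1], [1, 1, -1, -1]]

P12 = [[p + off for off in (0, 4, 8) for p in row] for row in P4]
S12 = [row * 3 for row in S4]

x = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]   # generic vector
vecs = [[s * x[p - 1] for s, p in zip(srow, prow)]
        for srow, prow in zip(S12, P12)]
for i in range(4):
    for j in range(i + 1, 4):
        assert sum(a * b for a, b in zip(vecs[i], vecs[j])) == 0
print("all 6 pairwise dot products are zero")
```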
Writing down all the details is admittedly tedious. It's not too hard to show that a set of size $|n|_2$ is achievable (the constructions work like the examples above), but I haven't found an appealing way to package it all. The hard part is really showing that $|n|_2$ is the largest size possible. Here are the main ingredients (without proof).
I used to think the signs were easy enough to take care of; Sylvester's construction guarantees the existence of $2^n \times 2^n$ Hadamard matrices. But as the OP pointed out in a comment, in the $\Bbb R^{2^n}$ case, just using Sylvester's matrices isn't enough. Instead, we let $M_n$ be the usual $2^n \times 2^n$ Hadamard matrix given by Sylvester's construction (defined recursively by $M_{n+1} = \begin{array}{c|c}M_n & M_n\\\hline M_n & -M_n\end{array}$). Then for our signs, we use the matrix
$$S_n =
\begin{array}{c|c}
M_{n-1} & M_{n-1} \\\hline
W_{n-1} & -W_{n-1}
\end{array}
$$
where $W_n$ is the matrix obtained by flipping $M_n$ upside down (that is, swapping rows $k$ and $2^n + 1 - k$). The index rows then have to be paired with the rows of $S_n$ in a compatible order, which takes some care. I'm still working on a condensed explanation for why this works.
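The recursion is short in code (my own sketch). One thing that is easy to confirm: $S_n$ is still a Hadamard matrix, since each of its rows is a row of a Sylvester matrix up to a sign pattern, and no two coincide.

```python
# Sylvester's Hadamard matrices M_n, the flipped matrix W_n, and the
# sign matrix S_n = [M | M; W | -W] from the text.
def sylvester(n):
    M = [[1]]
    for _ in range(n):
        M = ([row + row for row in M] +
             [row + [-s for s in row] for row in M])
    return M

def s_matrix(n):
    M = sylvester(n - 1)
    W = M[::-1]                        # flip upside down
    return ([row + row for row in M] +
            [row + [-s for s in row] for row in W])

# S_3 is the 8x8 sign matrix displayed earlier, and it is Hadamard:
# its rows are mutually orthogonal.
S = s_matrix(3)
assert S[4] == [1, -1, -1, 1, -1, 1, 1, -1]
for i in range(8):
    for j in range(i + 1, 8):
        assert sum(a * b for a, b in zip(S[i], S[j])) == 0
```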
The indices in a given row are permutations of $\{1, 2, \ldots, n\}$ -- but not just any old permutations. Because the various $x_i$ need to pair up, each permutation other than the identity must have no fixed points and be a product of disjoint $2$-cycles (in the $4 \times 4$ example, the index permutations form exactly the normal Klein four subgroup of $A_4$).
Not only are the (signless) rows permutations of $[n]$; a much more significant phenomenon occurs: if $S = \{\vec{x}, \pi_1(\vec{x}), \pi_2(\vec{x})\} \subseteq B_n(\vec{x})$ (with appropriate signs) is a set of mutually orthogonal vectors, then $(\pi_1 \circ \pi_2)(\vec{x})$ (again, with appropriate signs) is orthogonal to everything in $S$. The consequence is that the index permutations of a maximal set must in fact form a group $G$, since the set is closed under composition. Two observations then finish the count. First, every non-identity element of $G$ is an involution, so $G$ is an elementary abelian $2$-group and $|G|$ is a power of $2$. Second, every non-identity element of $G$ is fixed-point-free, so $G$ acts freely on $\{1, 2, \ldots, n\}$; every orbit then has size exactly $|G|$, and therefore $|G|$ divides $n$. A power of $2$ dividing $n$ is at most $|n|_2$, so a maximal set has size at most $|n|_2$.
Interestingly enough, if you pay attention to the (signless) permutations only, you get a nice elementary abelian $2$-group (so something commutative). But if you think of the rows as elements of $B_n$, signs included, most rows have order $4$, and most pairs of rows anticommute: $\pi_i \circ \pi_j = - \pi_j \circ \pi_i$.
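Both claims are easy to check in the $4 \times 4$ example (my own sketch): composing two signed index patterns gives another signed pattern, and the non-identity rows behave exactly like the quaternions $i, j, k$.

```python
# The 4x4 rows, viewed as signed permutations acting on vectors.
rows = [
    [1, 2, 3, 4],
    [2, -1, 4, -3],
    [3, -4, -1, 2],
    [4, 3, -2, -1],
]

def compose(r1, r2):
    """Signed index pattern of the composite map x -> r1(r2(x))."""
    return [(1 if i > 0 else -1) * r2[abs(i) - 1] for i in r1]

def neg(r):
    return [-i for i in r]

identity = rows[0]
# Each non-identity row squares to minus the identity: order 4.
for r in rows[1:]:
    assert compose(r, r) == neg(identity)
# Distinct non-identity rows anticommute.
for a in rows[1:]:
    for b in rows[1:]:
        if a != b:
            assert compose(a, b) == neg(compose(b, a))
# Forgetting signs, the index permutations form a commutative group
# (the Klein four-group): composites land back in the same set.
unsigned = [tuple(abs(i) for i in r) for r in rows]
for r1 in rows:
    for r2 in rows:
        assert tuple(abs(i) for i in compose(r1, r2)) in unsigned
```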