From Sample 1, you have $X_1 = 7$ failures among $n_1 = 11$ items, and
from Sample 2, you have $X_2 = 8$ failures among $n_2 = 18$ items.
You wish to test the null hypothesis $H_0: p_1 = p_2$ against the one-sided
alternative $H_a: p_1 > p_2$ at the 5% level of significance.
While it is true that the estimated failure fraction $\hat p_1 = X_1/n_1 = 0.6364$
from the first sample is greater than the estimated failure fraction
$\hat p_2 = X_2/n_2 = 0.4444$ from the second sample, your question is whether the first estimate is larger to a statistically significant degree.
Because the alternative is one-sided, this could be
decided by looking at a one-sided 95% confidence interval, but not
directly by looking at a two-sided confidence interval. It is simpler, however, to carry out the one-sided test itself.
The test statistic is
$$Z = \frac{\hat p_1 - \hat p_2}{\sqrt{\hat p(1- \hat p)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}},$$
where $\hat p = (X_1 + X_2)/(n_1 + n_2)$ is the pooled proportion of failures.
You will reject $H_0$ at the 5% level, if $Z > 1.645.$
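If you want to check the arithmetic yourself, here is a short Python sketch of the pooled two-proportion $z$ statistic (Python is my choice of tool here, not something the question assumed):

```python
from math import sqrt

# Observed failures and sample sizes from the question
x1, n1 = 7, 11
x2, n2 = 8, 18

p1_hat = x1 / n1                      # 0.6364
p2_hat = x2 / n2                      # 0.4444
p_hat = (x1 + x2) / (n1 + n2)         # pooled proportion, 15/29

# Pooled two-proportion z statistic
z = (p1_hat - p2_hat) / sqrt(p_hat * (1 - p_hat) * (1/n1 + 1/n2))
print(round(z, 2))                    # 1.0
```

Because $1.00 < 1.645$, the statistic does not reach the 5% critical value.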
[You can
find this formula in statistics texts and online in the NIST Engineering
Statistics Handbook.]
Here is the printout from Minitab 17 statistical software, which shows the
computation and provides some additional information.
    Test for Two Proportions

    Sample   X   N  Sample p
    1        7  11  0.636364
    2        8  18  0.444444

    Difference = p (1) - p (2)
    Estimate for difference:  0.191919

    Test for difference = 0 (vs > 0):  Z = 1.00  P-Value = 0.158
Notice that $Z = 1.00 < 1.645,$ so you do not reject $H_0.$ Your
data are consistent with equal failure proportions in the two populations.
The P-value is $P(Z > 1.00) = 0.158.$ You would have rejected $H_0$ only if
the P-value had been smaller than 0.05.
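The P-value is just the upper-tail area of the standard normal distribution beyond the observed statistic. A self-contained Python sketch (again my addition, using only the standard library's error function for the normal CDF):

```python
from math import erf, sqrt

# Recompute the test statistic at full precision from the data
x1, n1, x2, n2 = 7, 11, 8, 18
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)             # pooled proportion
z = (p1 - p2) / sqrt(p * (1 - p) * (1/n1 + 1/n2))

# One-sided P-value: P(Z > z) = 1 - Phi(z), with Phi(z) = (1 + erf(z/sqrt(2)))/2
p_value = 0.5 * (1 - erf(z / sqrt(2)))
print(round(z, 2), round(p_value, 3))   # 1.0 0.158
```

This reproduces Minitab's reported Z = 1.00 and P-Value = 0.158.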
This test uses a normal approximation to the binomial, which may not be exactly
accurate for sample sizes as small as $n_1 = 11$ and $n_2 = 18.$ However,
the result is nowhere near the borderline between rejecting and not rejecting, so
an error in the normal approximation is unlikely to have changed
your decision not to reject.
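As a check on that last point, you can compare with an exact small-sample test. Conditioning on the margins, under $H_0$ the count $X_1$ follows a hypergeometric distribution, and its upper-tail probability is the one-sided Fisher exact P-value. A Python sketch (my addition, not part of the Minitab output):

```python
from math import comb

# Data: failures and sample sizes
x1, n1, x2, n2 = 7, 11, 8, 18
N, K = n1 + n2, x1 + x2               # total items (29), total failures (15)

# Fisher exact one-sided P-value: P(X >= x1) where, under H0 and fixed
# margins, X ~ Hypergeometric(N, K, n1)
p_exact = sum(
    comb(K, k) * comb(N - K, n1 - k)
    for k in range(x1, min(K, n1) + 1)
) / comb(N, n1)
print(round(p_exact, 3))              # 0.268
```

The exact P-value of about 0.27 is even further from 0.05 than the approximate 0.158, so the conclusion (do not reject $H_0$) is unchanged.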