I hope this isn't off topic - apologies if it is.
In 1976, Kenneth Appel and Wolfgang Haken proved the four colour conjecture: the regions of any planar map can always be coloured with at most four colours so that no two adjacent regions receive the same colour. The proof was controversial because it relied on a computer to check a very large number of cases.
From what I've read, the controversy had two parts: (1) no human could reasonably verify the proof, and (2) being a computer proof, all it delivered was a yes/no answer to the question - it didn't contribute to our understanding of the problem.
The first issue seems to be a non-issue now - the proof has been replicated with independent software, and as I understand it a version has even been formally verified in the Coq proof assistant. But the second issue seems to stand, and I don't really understand it.
Quite a few proofs require a number of particular cases of the problem to be checked separately. Eliminating every possible case in which the answer might have been "no" is a routine way of proving that the answer is "yes". Does it really make any difference in principle whether the number of cases is 2 or 2 million? Does the scale of that number make any difference to the degree of human understanding of the problem?
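To make that concrete with a toy example of my own (nothing to do with the actual four colour argument - just a minimal sketch of exhaustive case checking): to show that n² mod 8 is always 0, 1 or 4, it's enough to observe that n² mod 8 depends only on n mod 8, and then rule out all eight residue cases:

```python
# Claim: for every integer n, n*n mod 8 is 0, 1 or 4.
# Since n*n mod 8 depends only on n mod 8, checking the eight
# residues r = 0..7 eliminates every case in which the answer
# could have been "no".
assert all((r * r) % 8 in {0, 1, 4} for r in range(8))
print("claim holds for all integers: 8 cases checked")
```

The understanding lives in the reduction to finitely many cases; whether the check then covers 8 cases or 2 million looks like a difference of degree, not of kind.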
And, given that people wrote the software that evaluated all the cases, all the ideas underlying the proof seem to be understandable by, and to have been understood by, people.
To me, evaluating a thousand different variations on the same kind of problem shows no more understanding than evaluating one. Solving a thousand quadratic equations, for instance, is much the same as solving one - each repetition just plugs different numbers into the same formula, or repeats the same completing-the-square procedure, or whatever.
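A rough sketch of what I mean (my own illustration in Python - the function name is just something I made up):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt, so negative discriminants work too
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# One solution embodies the whole idea; a thousand are just repetition.
one = solve_quadratic(1, -3, 2)                     # roots 2 and 1
many = [solve_quadratic(1, b, 1) for b in range(1, 1001)]
```

All the understanding went into writing solve_quadratic once; the loop at the end adds nothing to it.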
Consequently, I am very much impressed that Appel and Haken understood the problem in sufficient depth to write a program that derives all the special cases and checks them. Writing software that reliably enumerates every case often shows even deeper understanding than deriving the cases by hand, since manual derivation can be carried, to a degree, by trial and error.
Getting the computer to run the program, once written, seems beside the point to me. The program presumably could have been executed (eventually) by a person, just as it could have been executed by a Turing machine, but carrying out the mechanical follow-the-steps work seems irrelevant to depth of understanding.
Is there something I'm missing?