Yes: every proof in propositional logic is a finite sequence of formulas.
And yes, there are proofs that can be "found" (described, shown to exist) but that can never actually be carried out with real-world computing power or by hand. Ultimately we have only a finite amount of computing power before all of our computers cease to exist, and we as a species do too. A proof is a finite sequence of formulas (wffs) ending with the theorem, where each formula is either an axiom or follows from earlier formulas by a rule of inference. Nothing in that definition forbids a proof from pointlessly repeating subsequences that take it nowhere it hasn't already been. For example, here's a proof (in Polish notation, in a natural deduction system) of the weak law of identity:
```
1 | p      assumption
2 | App    1, disjunction introduction
3 | KpApp  1, 2, conjunction introduction
4 | p      3, conjunction elimination
5 | App    4, disjunction introduction
6 | KpApp  4, 5, conjunction introduction
7 | p      6, conjunction elimination
8   Cpp    1-7, conditional introduction
```
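Since each line's validity is a purely syntactic check against earlier lines, the proof can be verified mechanically. Here is a minimal sketch of such a check; the nested-tuple encoding of Polish-notation formulas, the rule names, and the function `check` are all my own illustrative choices, not any standard library:

```python
# Formulas as nested tuples: 'p' is atomic; ('A', x, y) is the disjunction
# Axy, ('K', x, y) the conjunction Kxy, ('C', x, y) the conditional Cxy.
P = 'p'
APP = ('A', P, P)        # App
KPAPP = ('K', P, APP)    # KpApp
CPP = ('C', P, P)        # Cpp

# Each step: (formula, rule, list of earlier line numbers it cites).
proof = [
    (P,     "assumption", []),
    (APP,   "disj_intro", [1]),
    (KPAPP, "conj_intro", [1, 2]),
    (P,     "conj_elim",  [3]),
    (APP,   "disj_intro", [4]),
    (KPAPP, "conj_intro", [4, 5]),
    (P,     "conj_elim",  [6]),
    (CPP,   "cond_intro", [1, 7]),
]

def check(proof):
    """Verify each step cites earlier lines and matches its rule's shape."""
    for i, (formula, rule, premises) in enumerate(proof, start=1):
        assert all(j < i for j in premises), f"step {i} cites a later line"
        prev = [proof[j - 1][0] for j in premises]
        if rule == "assumption":
            ok = True
        elif rule == "disj_intro":   # from X, infer AXY (here Y happens to be X)
            ok = formula[0] == 'A' and formula[1] == prev[0]
        elif rule == "conj_intro":   # from X and Y, infer KXY
            ok = formula == ('K', prev[0], prev[1])
        elif rule == "conj_elim":    # from KXY, infer X
            ok = prev[0][0] == 'K' and prev[0][1] == formula
        elif rule == "cond_intro":   # discharge assumption X, infer CXY
            ok = formula == ('C', prev[0], prev[1])
        else:
            ok = False
        if not ok:
            raise ValueError(f"step {i} is not a valid {rule}")
    return proof[-1][0]

print(check(proof))  # → ('C', 'p', 'p'), i.e. Cpp
```

Note that the checker never asks whether a step is *useful*, only whether it is licensed by a rule; that is exactly why padding survives verification.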
Now, of course, steps 5-7 serve no purpose in proving the theorem (nor, for that matter, do steps 2-4, but I digress). But there "exist" (in principle) proofs that repeat the subsequence of formulas from steps 2-4 over and over again, starting immediately after step 4, and end with Cpp after the last repetition. So suppose all the computing power in the universe/multiverse could write out only $10^{1,000,000}$ copies of that subsequence. A sequence of formulas containing $10^{2,000,000}$ copies of it, followed by Cpp, still qualifies as a proof, even though by hypothesis the material resources to actually write it out don't exist (let alone any being that would ever read it!).
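The padding construction can be made completely explicit: a short recipe emits a rule-valid proof of Cpp with any number of copies of the redundant subsequence. This is a sketch; the tuple encoding of Polish-notation formulas and the function name `padded_identity_proof` are my own illustrative choices:

```python
def padded_identity_proof(n):
    """Build a proof of Cpp containing n copies of the redundant
    App / KpApp / p subsequence (steps 2-4 of the example).
    Every copy cites earlier lines correctly, so the result is a
    proof for any n, however astronomically large."""
    # Formulas as nested tuples: App, KpApp, Cpp in Polish notation.
    P = 'p'
    APP = ('A', P, P)
    KPAPP = ('K', P, APP)
    CPP = ('C', P, P)

    proof = [(P, "assumption", [])]   # line 1
    last_p = 1                        # line number of the most recent 'p'
    for _ in range(n):
        proof.append((APP,   "disj_intro", [last_p]))
        proof.append((KPAPP, "conj_intro", [last_p, len(proof)]))
        proof.append((P,     "conj_elim",  [len(proof)]))
        last_p = len(proof)
    proof.append((CPP, "cond_intro", [1, last_p]))  # discharge, conclude Cpp
    return proof

# n copies give 3n + 2 lines; n = 2 reproduces the 8-line proof above.
print(len(padded_identity_proof(2)))  # → 8
```

The recipe itself is tiny and fits in anyone's head; it is only the *output* for huge n that can never be materialized. That gap between a finite description of a proof and the resources to write it down is the whole point.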
So, yes: there exist proofs that can be "found" but never done.