We have numerous logical systems. For example, PA is a (sort of) formal definition of number theory. PA can (I guess) be interpreted within ZFC, which makes it weaker than ZFC. Further, there seem to be numerous fragments of PA which are, of course, strictly weaker than PA. This suggests there is an ordering between such systems with respect to their expressiveness.
In computability theory we usually start from a Turing-complete model and restrict it to get less expressive "fragments". For example, we can impose time constraints and deal with Turing machines that always halt within polynomial time, or we can define a programming language that guarantees that every program written in it halts, and which hence falls short of Turing machines in computational power. So we have some kind of ordering here, too.
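To make the second kind of restriction concrete, here is a minimal sketch (the names `PR`, `eval`, etc. are my own, chosen just for illustration) of such a halting-guaranteed language: the primitive recursive functions. Every well-formed term denotes a total function, so every "program" halts, yet the language is strictly weaker than a Turing machine (it cannot express the Ackermann function, for instance):

```haskell
-- A tiny total language: primitive recursive functions over naturals.
-- Terms are built only from zero, successor, projections, composition,
-- and primitive recursion, so evaluation always terminates.
data PR
  = Zero         -- the constant 0 (ignores its arguments)
  | Succ         -- successor, expects exactly one argument
  | Proj Int     -- projection: return the i-th argument
  | Comp PR [PR] -- composition: f (g1 xs) ... (gk xs)
  | Rec PR PR    -- primitive recursion on the first argument

eval :: PR -> [Integer] -> Integer
eval Zero        _   = 0
eval Succ        [x] = x + 1
eval (Proj i)    xs  = xs !! i
eval (Comp f gs) xs  = eval f (map (\g -> eval g xs) gs)
eval (Rec base _)    (0 : xs) = eval base xs
eval (Rec base step) (n : xs) =
  -- step receives: the predecessor, the recursive result, the other args
  eval step (n - 1 : eval (Rec base step) (n - 1 : xs) : xs)
eval _ _ = error "arity mismatch"

-- Addition, defined by: add 0 y = y;  add (n+1) y = succ (add n y)
add :: PR
add = Rec (Proj 0) (Comp Succ [Proj 1])

main :: IO ()
main = print (eval add [3, 4])  -- prints 7
```

The termination guarantee comes purely from the shape of the syntax: `Rec` only ever recurses on a strictly smaller first argument, so there is no way to write a looping program in this fragment.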
Now, these two orderings seem to be closely related, but I don't understand how. Is it possible to, say, use PA to define a programming language that would be a proper fragment of a Turing-complete language? Or could we use a Turing-complete model to define a formal logic that would be as expressive as a logic could ever be (I guess not)? Or are logic and computation ultimately incomparable, and they just happen to intertwine?
I'm aware of the Curry-Howard correspondence, but that seems only to deepen my confusion on the matter. I'm also aware that some logics coincide with computational complexity classes, and I have studied some finite model theory, but that hasn't really helped me understand the big picture.
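For reference, the part of Curry-Howard I think I understand is the basic dictionary: types are propositions, programs are proofs, the function arrow is implication, pairs are conjunction. A sketch of what I mean (the identifier names are mine):

```haskell
-- Under Curry-Howard, writing a total term of a type amounts to
-- proving the corresponding propositional formula.

-- Proposition: A -> (B -> A).  Its proof is the K combinator.
proofK :: a -> (b -> a)
proofK a _ = a

-- Proposition: (A and B) -> (B and A), i.e. conjunction commutes.
proofSwap :: (a, b) -> (b, a)
proofSwap (a, b) = (b, a)

-- Proposition: ((A -> B) and A) -> B, i.e. modus ponens.
modusPonens :: (a -> b, a) -> b
modusPonens (f, a) = f a

main :: IO ()
main = print (modusPonens ((+ 1), 41))  -- prints 42
```

What confuses me is how this proofs-as-programs picture relates to the expressiveness orderings above, rather than the correspondence itself.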
I know the question is a bit vague, and I would be perfectly happy with pointers to literature that explains what is going on.