It's not possible to completely reconstruct the source code from compiler output. Compilation is an information-discarding process: comments, for example, can never be recovered.
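To illustrate the information loss, here's a minimal sketch in C (the same argument applies to COBOL or any other compiled language); the names and figures are invented for illustration. The two functions differ only in comments and identifier choices, yet a compiler will typically emit identical machine code for both bodies, so nothing in the object code tells a decompiler which one the programmer actually wrote.

```c
#include <stdio.h>

/* What the programmer wrote: the names and comments document intent. */
static int net_price(int gross, int discount_pct)
{
    /* discount_pct is a whole-number percentage, e.g. 15 for 15%. */
    return gross - (gross * discount_pct) / 100;
}

/* Roughly the best a decompiler could reconstruct: the same behaviour,
   with the intent stripped away. */
static int fn_1(int a, int b)
{
    return a - (a * b) / 100;
}

int main(void)
{
    /* Both versions behave identically, and the generated code for the two
       bodies is typically the same: comments and local variable names leave
       no trace in the object code, so a decompiler cannot tell which source
       it came from. */
    printf("%d %d\n", net_price(200, 15), fn_1(200, 15));
    return 0;
}
```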
Decompilation is often described as trying to turn hamburger back into cow, but in fact it's worse than that. There's extensive academic research, and some commercial effort, into heuristic decompilation, which tries to produce plausible sources from object code, but there's no way to guarantee that the reconstructed sources would behave the way the originals would have. For example, it's quite possible that the original sources would have produced different object code on the new target platform. Object code does not embody all the semantic guarantees of the source language.
In the best case, when you have INT-code binaries and their accompanying IDY (debugging information) files, it's theoretically possible to reconstruct source files that resemble the originals in some respects. The INT format is relatively close to the structure of the procedural code in the source, and the IDY data contains information such as data-item names and compilation options. (Some compilation settings can be recovered from the INT files themselves.) However, significant information is still lost, and we don't provide tools to do this.
If the programs are GNT or native binaries, or if debugging information is not available, then recovering something resembling the source is even less plausible.
I hope the client now realizes that source code is a corporate asset, and failing to protect it has serious consequences. Not that that helps in this situation, of course.
Someone else may have some useful advice, but frankly I don't see a viable option here aside from analyzing the existing sources to determine a plausible source set, building a robust set of regression tests, and reconstructing the application under the old system before trying to move it to the new one.