I'm a bit rusty on all this, but with respect to the objective of extracting an exponential component: referring to http://mathworld.wolfram.com/LaplaceTransform.html, the table shows that the inverse transform of an exponential is a delta function, so together with linearity this seems to say that the ILT in principle does what we want. In practice, of course, getting usable results may take non-trivial effort, e.g. due to ill-conditioning.
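A quick sketch of where that ill-conditioning comes from (my own illustration, not from the post): the Gram matrix of decaying exponentials under the usual inner product on [0, inf) has entries 1/(a_i + a_j), a Cauchy/Hilbert-type matrix whose condition number explodes with size. The decay rates 1..8 here are an arbitrary choice.

```python
import numpy as np

# Decay rates a_i = 1..8 (arbitrary); basis functions are e^{-a_i t} on [0, inf).
rates = np.arange(1, 9, dtype=float)

# <e^{-a_i t}, e^{-a_j t}> = integral_0^inf e^{-(a_i + a_j) t} dt = 1/(a_i + a_j)
gram = 1.0 / (rates[:, None] + rates[None, :])

# Huge condition number: tiny data perturbations blow up in the recovered coefficients.
print(f"condition number: {np.linalg.cond(gram):.3e}")
```

So even though the inversion is well-defined in principle, solving for the exponential amplitudes from sampled data amounts to working against a nearly singular matrix.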
The real exponentials aren't orthogonal, at least under the traditional definition of the inner product. Perhaps the cleverness of Laplace was in coming up with a different inner product?
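To make the non-orthogonality concrete (a sketch, with arbitrary rates a=1, b=2): under the ordinary L2 inner product on [0, inf), <e^{-at}, e^{-bt}> = 1/(a+b), which is never zero, whereas distinct complex exponentials on [0, 2*pi] really do integrate to zero against each other.

```python
import numpy as np
from scipy.integrate import quad

# Real exponentials: <e^{-at}, e^{-bt}> = 1/(a+b) != 0, so never orthogonal.
a, b = 1.0, 2.0
inner, _ = quad(lambda t: np.exp(-a * t) * np.exp(-b * t), 0, np.inf)
print(inner, 1 / (a + b))  # both ~0.3333

# Complex exponentials e^{imt}, e^{int} with m != n: the inner product
# integral_0^{2pi} e^{i(m-n)t} dt vanishes (real part shown here).
m, n = 1, 3
ortho, _ = quad(lambda t: np.cos(m * t) * np.cos(n * t)
                        + np.sin(m * t) * np.sin(n * t), 0, 2 * np.pi)
print(abs(ortho))  # ~0
```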
As I recall, orthogonality in general is typically defined with respect to a "kernel" or "weight" function. A change of variable produces another transform and changes this function (possibly making it drop out, though it's effectively still there), along with other modifications such as distorting the path of integration. For example, a discrete sum of exponentials can be viewed as a power series with x^n --> e^nt. Cauchy's integral formula then uses the same underlying orthogonality as the Fourier and Laplace transforms to pick out the x^n coefficient in a power series; the only differences are the weight function and the path of integration, which entails an excursion out into the complex plane even though the resulting value is purely real.

Orthogonal transforms often exhibit these kinds of dualities, e.g. a function with compact support in time cannot also have compact support in frequency. So even if the input-to-output mapping is purely real-to-real, the intermediate calculations may get complicated. And if the basis functions we want to expand in are *truly* non-orthogonal, then the expansion can be non-unique: there might be only one inverse with a bounded, discrete set of frequencies, but also an infinite cloud of "neighbor" expansions with uncountably many fractional frequencies. These ghostly "off line" attractors can bollix convergence.
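Here's a sketch of that coefficient pick-out (the polynomial 1 + 2x + 3x^2 is an arbitrary example): Cauchy's formula a_n = (1/2pi) * integral f(e^{i theta}) e^{-i n theta} d theta rides the same orthogonality as a Fourier series, and the path of integration detours through the complex plane even though the extracted coefficient is real.

```python
import numpy as np

f = lambda z: 1 + 2 * z + 3 * z**2   # coefficients a_0=1, a_1=2, a_2=3
n = 2                                # which coefficient to extract
N = 64                               # quadrature points on the unit circle
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)               # path of integration: the unit circle

# Discretized Cauchy integral: mean of f(z) * z^{-n} over the circle.
# Orthogonality kills every term except a_n (exactly, since N > degree).
a_n = np.mean(f(z) * z**(-n))
print(a_n.real)                      # ~3.0, imaginary part ~0
```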