In practical terms, if the signal frequency is high enough (digital audio, video, RF and so on), or if a lower-frequency signal travels down a long enough cable (like a telephone cable between cities), the cable itself behaves as a 'transmission line', and we become concerned with the transfer of power from one end to the other.
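A common rule of thumb says a cable starts to matter as a transmission line once its length approaches about a tenth of the signal's wavelength inside the cable. Here's a minimal sketch of that calculation; the one-tenth threshold and the 0.66 velocity factor are typical textbook assumptions, not figures from the original text or any particular cable datasheet:

```python
# Rule-of-thumb check: a cable starts to behave as a transmission line
# once its length approaches a significant fraction (often taken as one
# tenth) of the signal's wavelength inside the cable.

C = 3.0e8  # speed of light in a vacuum, m/s

def critical_length_m(frequency_hz, velocity_factor=0.66, fraction=0.1):
    """Cable length (metres) above which transmission-line behaviour
    matters, for a given signal frequency.

    velocity_factor: signals travel more slowly in cable than in free
                     space; 0.66 is a typical (assumed) figure for coax.
    fraction:        the rule-of-thumb threshold of one tenth of a wavelength.
    """
    wavelength_m = (C * velocity_factor) / frequency_hz
    return wavelength_m * fraction

# 20kHz, the top of the audio band: the threshold is around a kilometre,
# so even very long studio cables fall well short of it.
print(f"20kHz audio: {critical_length_m(20e3):,.0f} m")

# 100MHz RF: the threshold is well under a metre, so almost any cable
# run has to be treated as a transmission line.
print(f"100MHz RF:   {critical_length_m(100e6):.2f} m")
```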
In that situation, some strange things happen. Most notably for us, if the end of the cable isn't correctly terminated it acts like a mirror: the signal is reflected back towards the source, which can cause all manner of problems depending on the circumstances. So it is vital that the line is terminated with a resistance equal to the cable's characteristic impedance, and that the output impedance of the source matches the characteristic impedance too... And that's why we have the notion of 'matched impedance' interfaces for high-frequency systems today.
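The strength of that 'mirror' is described by the standard voltage reflection coefficient, (ZL - Z0) / (ZL + Z0), where Z0 is the cable's characteristic impedance and ZL is the terminating load. A quick sketch; the 75 Ohm cable and the example loads are purely illustrative:

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at the end of a line of
    characteristic impedance z0, terminated in z_load.

     0 -> perfectly terminated, no reflection
    +1 -> open circuit, total reflection (same polarity)
    -1 -> short circuit, total reflection (inverted)
    """
    return (z_load - z0) / (z_load + z0)

# Illustrative cases for a 75 Ohm video cable (values assumed):
print(reflection_coefficient(75, 75))      # 0.0   -- matched: no reflection
print(reflection_coefficient(1e9, 75))     # ~1.0  -- unterminated (open) end
print(reflection_coefficient(10_000, 75))  # ~0.99 -- high-impedance load: almost total reflection
```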
The old 600 Ohm thing you see on a lot of vintage gear (and modern emulations) stems from the fact that the old telephone systems employed a 600 Ohm matched-impedance format, and early pro audio technology was developed from the world of telephony!
But at normal audio frequencies, the cables we use -- even really long studio cables -- are far too short to behave as transmission lines, so it makes no sense to maintain matched-impedance interfaces. Hence the adoption of voltage-matched interfaces in the 1970s, where the source (output) impedance is kept as low as possible and the input impedance is kept as high as practicable -- usually with a ratio between them of at least 1:5 or 1:10, or something of that order.
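Seen as a simple potential divider, a voltage-matched interface delivers almost all of the source voltage to the input, whereas a 600 Ohm matched interface gives away half of it (6dB). A minimal sketch with assumed but typical impedance figures:

```python
def voltage_transfer(z_out, z_in):
    """Fraction of the source voltage appearing across the input,
    treating the interface as a simple potential divider."""
    return z_in / (z_out + z_in)

# Typical (assumed) voltage-matched figures: a 150 Ohm output into a 10k input.
print(f"{voltage_transfer(150, 10_000):.1%}")  # 98.5% -- nearly all the voltage arrives

# Contrast with a 600 Ohm matched interface: only half the voltage (-6dB).
print(f"{voltage_transfer(600, 600):.1%}")     # 50.0%
```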