Why do time delay and the corresponding phase lag almost always cause a system to go unstable? Physically, an imbalance between the strength of the corrective action and the system's dynamic lags results in the corrective action being applied in the wrong direction. Mathematically, the system goes unstable when the denominator of the closed-loop transfer function equals zero. Since this denominator is 1 plus the open-loop transfer function, the closed-loop system is marginally stable when the open-loop transfer function equals -1, that is, when its magnitude is 1 and its phase angle is -180 degrees.
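To make this concrete, the sketch below uses an assumed first-order-plus-delay loop, L(s) = K e^{-s tau} / (s + 1) (an illustrative plant, not one drawn from the text), and solves for the delay at which the open-loop frequency response passes exactly through -1, i.e., the point of marginal closed-loop stability.

```python
import numpy as np

# Minimal sketch (assumed plant, not from the text): open loop L(s) = K e^{-s*tau} / (s + 1).
# The closed loop is marginally stable when L(j*w) = -1, i.e. |L| = 1 and angle(L) = -180 deg.

K = 2.0                                  # loop gain (assumed value)

# Gain-crossover frequency: the delay does not change the magnitude, so
# |K / (j*w + 1)| = 1  ->  w_c = sqrt(K^2 - 1)
w_c = np.sqrt(K**2 - 1.0)

# Delay that drags the phase down to -180 deg exactly at w_c:
#   -atan(w_c) - w_c * tau = -pi   ->   tau_crit = (pi - atan(w_c)) / w_c
tau_crit = (np.pi - np.arctan(w_c)) / w_c

# Check: at (w_c, tau_crit) the open-loop frequency response should sit at -1.
L = K * np.exp(-1j * w_c * tau_crit) / (1j * w_c + 1.0)
print(f"w_c = {w_c:.4f} rad/s, critical delay = {tau_crit:.4f} s")
print(f"L(j*w_c) = {L.real:+.4f} {L.imag:+.4f}j   (should be close to -1 + 0j)")
```

Any delay longer than this critical value pushes the phase past -180 degrees while the loop gain is still at or above unity, which is exactly the condition for instability described above.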
If a system is stable, how close is it to becoming unstable? Because of model uncertainties, it is not sufficient for a system merely to be stable; it must have adequate stability margins. Stable systems with low stability margins work only on paper. In classical control, uncertainty is quantified by assuming that either gain changes or phase changes occur; the tolerances to such gain or phase uncertainty are the gain and phase margins.
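As a rough illustration, the following sketch reads the gain and phase margins off a computed frequency response for the same assumed first-order-plus-delay loop as above (the gain and delay values are illustrative only), mimicking how the margins would be read from a Bode plot.

```python
import numpy as np

# Minimal sketch (assumed plant): gain and phase margins of L(s) = K e^{-s*tau} / (s + 1).
K, tau = 2.0, 0.5                        # assumed loop gain and delay

w = np.logspace(-2, 2, 20001)            # frequency grid, rad/s
L = K * np.exp(-1j * w * tau) / (1j * w + 1.0)
mag = np.abs(L)
phase = np.unwrap(np.angle(L))           # radians, unwrapped so -180 deg is crossed cleanly

# Phase crossover: frequency where the phase reaches -180 deg.
# Gain margin = how much extra gain (in dB) would bring |L| up to 1 there.
w_pc = np.interp(-np.pi, -phase, w)      # -phase is monotonically increasing for this plant
gain_margin_db = -20.0 * np.log10(np.interp(w_pc, w, mag))

# Gain crossover: frequency where |L| falls to 1 (0 dB).
# Phase margin = how far the phase still is from -180 deg at that frequency.
w_gc = np.interp(0.0, -20.0 * np.log10(mag), w)   # |L| is monotonically decreasing here
phase_margin_deg = 180.0 + np.degrees(np.interp(w_gc, w, phase))

print(f"gain margin  = {gain_margin_db:5.2f} dB  at w = {w_pc:.3f} rad/s")
print(f"phase margin = {phase_margin_deg:5.2f} deg at w = {w_gc:.3f} rad/s")
```

Dividing the phase margin by the gain-crossover frequency also gives the additional pure delay the loop can tolerate before going unstable, which is often the most useful way to state the margin for delay-dominated systems.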
A paradox is that the presence of delays may be either beneficial or detrimental to the operation of a dynamical system. Judicious introduction of a delay may stabilize an otherwise unstable system (e.g., a wait-and-act control strategy) or reduce the steady-state tracking error.
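A minimal simulation can illustrate the beneficial side of this paradox. The sketch below (gains and delay are assumed, illustrative values, not taken from the text) applies delayed position feedback to an undamped oscillator: with no delay the feedback only shifts the frequency and cannot damp the motion, while a suitably chosen delay makes the delayed term act partly like velocity feedback, so the response decays.

```python
import numpy as np

# Minimal sketch (illustrative values): delayed position feedback on an undamped
# oscillator, x'' + x = q * x(t - tau).  With tau = 0 the oscillation never decays;
# with a judicious delay the feedback acquires a velocity-like component and damps it.

def simulate(q, tau, t_end=150.0, dt=0.005):
    """Integrate x'' + x = q * x(t - tau) with semi-implicit Euler and a delay buffer."""
    n_delay = int(round(tau / dt)) if tau > 0 else 0
    steps = int(t_end / dt)
    x_hist = np.ones(n_delay + 1)        # constant history x(t) = 1 for t <= 0
    x, v = 1.0, 0.0
    amplitude = 0.0
    for k in range(steps):
        x_delayed = x_hist[0] if n_delay > 0 else x
        v += dt * (-x + q * x_delayed)   # velocity update (semi-implicit Euler)
        x += dt * v                      # position update
        if n_delay > 0:                  # shift the delay buffer by one sample
            x_hist[:-1] = x_hist[1:]
            x_hist[-1] = x
        if k * dt > t_end - 20.0:        # track the peak amplitude over the last 20 s
            amplitude = max(amplitude, abs(x))
    return amplitude

q = 0.2
print(f"final amplitude, no delay    : {simulate(q, tau=0.0):.3f}")   # stays near 1
print(f"final amplitude, tau = 0.5 s : {simulate(q, tau=0.5):.3f}")   # decays toward 0
```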
The impact of delays continues to grow in many fields, including the control of distributed systems such as energy and computing grids.