Control loops incorporating proportional, integral, and differential (P, I, and D) factors have become standard functions of motion controllers. Many of the latest controllers also offer feed-forward functions. A typical control loop might combine three PID gains and two feed forward gains. With such a system, the many gains must be juggled to make the system work properly. In hydraulic systems, for instance, separate gains should apply to each motion direction—that is, extending or retracting an actuator—which also must be set and tuned. It is easy to see why one of the first questions asked is often, "What values should I use for the gains?"
Start by setting the proportional gain high enough to make the system respond at the correct rate. Next, change the integrator gain to remove offsets, the errors between actual and target values that aren't eliminated by the proportional term. Then, set the differentiator factor to add damping to settle the system faster.
These rules of thumb do not take into account how changing one PID gain requires changing the other two. For instance, increasing the proportional gain in a critically damped system will usually require increasing the integrator and differentiator gains to keep the system critically damped. Failing to adjust the integrator and differentiator gains accordingly will result in an under-damped system that will overshoot its target position or velocity values. It takes time to get a feel for how the PID gains need to change together.
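The interplay described above is easiest to see in code. The following is a minimal sketch of the textbook discrete PID update; the gains, sample time, and target values are illustrative only and are not tuned for any particular machine.

```python
# Minimal discrete PID update (textbook form). The gains kp, ki, kd and
# the sample time dt are illustrative values, not tuned for a real axis.
def pid_step(target, actual, state, kp, ki, kd, dt):
    error = target - actual
    state["integral"] += error * dt                       # accumulates offset error
    derivative = (error - state["prev_error"]) / dt       # adds damping
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
u = pid_step(target=10.0, actual=8.0, state=state,
             kp=2.0, ki=0.5, kd=0.1, dt=0.001)
```

Raising kp alone scales only the first term; the integral and derivative terms must be rebalanced to keep the same damping, which is the coupling the rules of thumb leave out.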
Graphs Aid Tuning
The ability to graph desired (target) motion versus actual motion with fast time resolution (millisecond level) provides a sure way to tune a system through an iterative process. The plot of a poorly tuned system will show actual and target motion profiles differing by an amount equal to the error at each point in time. Conversely, the plot of a well-tuned system will show actual and target motion profiles overlapping. Even if one does not know what the different PID and feed forward gains do, it is easy to tell if a change made the error better or worse. Note that tuning the system in this manner can be very time consuming. With practice, one learns how to interpret the graphs to find the best tuning solution quickly.
Improving the Tuning Process
Many motion controllers have evolved to augment trial and error tuning with a means of automatically calculating the PID and feed forward gains after analyzing a motion profile or two. The controller might analyze a step response, the response to a swept sine wave, or an arbitrary motion using least squares system identification to make this calculation. These techniques create a model or transfer function that returns an estimated position closely following the actual position, provided the same control signal drives both the real system and the model.
Once an accurate model has been determined, the feed forward gains can be calculated immediately because they depend only on the system model. In fact, the transfer function for the feed forwards is the inverse of the actuator transfer function.
The PID gains require one more item before they can be calculated—the desired response. Ideally, one should be able to specify the desired results without caring about the means of achieving them. This removes a lot of the guess work from tuning because few people can picture how the poles and zeros of the system move with changes to the PID gains.
Calculating the Model
Calculating the system model accurately is the hardest part of calculating the gains. The least squares system identification method requires capturing the values of the control output and the actual position as a function of time. This data is used to calculate the coefficients of an assumed model expressed as a difference equation, in which the current estimated velocity is a function of the control output and velocity from the previous update.
A difference equation for a model of a simple first-order lag system is:
Est(n) = A1*Est(n-1)+ B1*u(n-1)
Est is an array of estimated velocities that is indexed by the update interval.
A1 and B1 are coefficients to be determined by the least squares identification.
u(n) is the control output at update index n.
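The first-order difference equation above can be simulated directly. In this sketch the coefficient values A1 = 0.9 and B1 = 0.1 are illustrative; in practice they come out of the system identification.

```python
# Simulate the first-order difference equation
#   Est(n) = A1*Est(n-1) + B1*u(n-1)
# a1 and b1 are illustrative coefficients; identification supplies real ones.
def simulate_first_order(a1, b1, u, est0=0.0):
    est = [est0]
    for n in range(1, len(u)):
        est.append(a1 * est[-1] + b1 * u[n - 1])
    return est

# For a unit step input, the estimate settles toward b1 / (1 - a1).
u = [1.0] * 200
est = simulate_first_order(a1=0.9, b1=0.1, u=u)
```

With these coefficients the steady-state gain is 0.1 / (1 - 0.9) = 1.0, so the estimated velocity converges to the step amplitude.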
A two-pole system may look like this:
Est(n) = A1*Est(n-1)+A2*Est(n-2)+B1*u(n-1)+B2*u(n-2)
In this case, the current estimated velocity is a function of the last two estimated velocities and the last two control outputs. As the model's complexity increases, so will the number of A and B terms. The least squares system identification calculates the A and B coefficients to minimize the integrated squared error between the actual and estimated velocity at each interval in the sample of data. To generalize, the equation for the least squares system identification applying to a complete motion profile is:
C = (X^T * X)^-1 * X^T * Y
C is the array that holds the resulting A and B coefficients.
Y is an array of actual velocities for each time period n.
X is an array whose rows contain the previous actual velocities and control outputs for each time period n.
The formula for the least squares system identification looks simple, but it actually involves inverting and multiplying some very large arrays. This is not a problem when the computation is done on a computer. The advantage of this method is that it doesn't require a subjective interpretation of the data, and the control signal can be arbitrary. The least squares method finds the coefficients A1...An and B1...Bn that best fit the data in a statistical sense. One can use closed loop or open loop moves to collect the control output versus actual position data. This is handy when trying to tune two axes that are tied together: the system can be tuned coarsely first so the two axes track fairly well, then the least squares system identification can be applied to get better values for the PID and feed forward gains.
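The least squares fit is a few lines with NumPy. This sketch synthesizes data from known first-order coefficients so the fit can be checked against the truth; on a real machine the arrays would be filled from captured control output and measured velocity.

```python
import numpy as np

# Identify v(n) = A1*v(n-1) + B1*u(n-1) by least squares.
# Data is synthesized from known coefficients purely for illustration;
# real data would be captured from the motion controller.
a1_true, b1_true = 0.9, 0.1
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 500)          # arbitrary control signal
v = np.zeros(500)
for n in range(1, 500):
    v[n] = a1_true * v[n - 1] + b1_true * u[n - 1]

# Regressor matrix X holds [v(n-1), u(n-1)] per row; Y holds v(n).
X = np.column_stack([v[:-1], u[:-1]])
Y = v[1:]
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves Y ~ X @ theta
a1_est, b1_est = theta
```

With noise-free data the recovered coefficients match the true ones to numerical precision; with measured data they are the statistically best fit. Higher-order models simply add more columns to X.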
About Feed Forwards
Feed forwards are gains that are multiplied by the target velocity and target acceleration to predict the control output required for that state. The target velocity and acceleration are generated in each time period by the target or motion profile generator functions in the motion controller. A good target generator will generate a smooth motion profile between requested positions or speeds without discontinuities in the acceleration or in the control output.
Ideally, the feed forward gains are the inverse of the system model, supporting the following relationship for acceleration, velocity, or position:
Target value * feed forwards * system transfer function = Actual value.
In a perfectly tuned system, the differences between the target and actual positions, velocities, and accelerations are zero. To achieve this, the feed forwards must be the inverse of the transfer function.
Consider a system with the transfer function G/(t*s+1). This is the transfer function for a simple velocity system with a single time constant t and a system gain of G inches per second per volt. The inverse of this transfer function is (t*s+1)/G, which makes the velocity feed forward 1/G and the acceleration feed forward t/G. Even if the model is off by 10 percent, the feed forward terms would supply most of the control output, and the PID factors of the control loop would only need to correct for the remaining 10 percent of error. The feed forwards do most of the work, and the system will be both precise and stable, tracking very accurately with relatively low PID gains.
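The inversion above reduces to two multiplies per update. In this sketch the plant values G and t are illustrative, and the target velocity and acceleration stand in for the output of the motion controller's profile generator.

```python
# Feed forwards for the plant G/(t*s + 1): inverting it gives
# Kv = 1/G and Ka = t/G. G and t are illustrative values.
G = 2.0       # system gain, in/s per volt
t = 0.05      # time constant, seconds

kv = 1.0 / G          # velocity feed forward, volts per (in/s)
ka = t / G            # acceleration feed forward, volts per (in/s^2)

def feed_forward(v_target, a_target):
    """Control output predicted from the target state alone."""
    return kv * v_target + ka * a_target

# Targets as produced each update by a profile generator (illustrative).
u = feed_forward(v_target=4.0, a_target=20.0)   # volts
```

The PID terms then only have to make up the difference between this open-loop prediction and what the real, imperfectly modeled system needs.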
Using Control System Theory
A pole is a concept from control theory that relates to an object's response to stimulus. Each pole has a cutoff or corner frequency where signal attenuation and phase delays occur. In the real world, you can imagine this as a rate of change in motion stimulus at which the system lags in response. Such frequencies should be avoided for proper operation of the machine. At a pole frequency, the effectiveness of the drive signal on the motion is attenuated by 3 dB and experiences 45 degrees of phase lag. The attenuation increases as the frequency of operation increases. As a rule, it is best if the motion system is operated at 1/10 the pole frequency. At that rate, little attenuation and phase delay occur.
Assuming the system gain and the cutoff frequency (the system pole) can be calculated, it is easy to calculate the PID and feed forward closed loop gain parameters for a positioning system. For a positioning system modeled as G*a/(s*(s + a)), placing all three closed-loop poles at -λ gives the equations for a critically damped response:
Kp = 3*λ^2/(G*a)
Ki = λ^3/(G*a)
Kd = (3*λ - a)/(G*a)
Kv = 1/G
Ka = 1/(G*a)
Kp, Ki, and Kd are the PID gains.
Kv and Ka are the velocity and acceleration feed-forward gains.
G is the system gain in velocity units/volt of stimulus. The gain is calculated as part of the system model.
a is the observed system pole frequency in radians per second. This is the inverse of the system time constant, and it is also calculated as part of the system model.
λ is the desired closed-loop pole frequency in radians per second.
As λ grows, the system responds faster. Notice that λ must be greater than 1/3 the pole frequency or the differentiator gain will be negative. To avoid this, another set of equations for an over-damped response is needed. The advantage of this mathematical approach is that it calculates the gains closely enough that all that is needed are final tweaks to adjust for any machine requirements that automated tuning can't or doesn't take into account. Deriving the equations for each model and type of response is relatively easy with a symbolic math package.
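The pole-placement arithmetic is simple enough to sketch. This example assumes the positioning system is modeled as G*a/(s*(s + a)) and that all three closed-loop poles are placed at -λ for a critically damped response; the numeric values of G, a, and λ are illustrative only.

```python
# Gains from pole placement, assuming the plant model G*a / (s*(s + a)).
# Matching the closed-loop characteristic polynomial to (s + lam)**3
# yields the formulas below. G, a, and lam are illustrative values.
def critically_damped_gains(G, a, lam):
    kp = 3.0 * lam**2 / (G * a)
    ki = lam**3 / (G * a)
    kd = (3.0 * lam - a) / (G * a)   # goes negative if lam < a/3
    kv = 1.0 / G                     # velocity feed forward
    ka = 1.0 / (G * a)               # acceleration feed forward (t/G)
    return kp, ki, kd, kv, ka

# lam = 10 rad/s is above a/3, so all gains come out positive.
kp, ki, kd, kv, ka = critically_damped_gains(G=2.0, a=20.0, lam=10.0)
```

Sliding λ upward speeds up the response, and the negative-Kd limit at λ = a/3 falls straight out of the Kd formula.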
An example of a real-world tool that puts the above theory into practice is the Tuning Wizard, recently announced by Delta Computer Systems. The Tuning Wizard provides the user with a visual interface to control the automated tuning process and a single slider bar to select the desired system response. Using the slider, the user instructs the Tuning Wizard to move the poles of the system from locations that produce a conservative response to those that produce an aggressive one. The slider bar allows the user to focus on the response rather than on the gains used to achieve it.
Auto-tuning is not Always the Answer
To work well, automated tuning techniques require a good match between the actual machine behavior and the behavior assumed by the tuning software. In addition, machine characteristics such as non-linearities, natural resonances, dead bands, and feedback delays or noise may limit the overall effectiveness of the auto-tuning process. Thus, auto-tuning techniques get us closer to a perfectly optimized loop and make tuning faster, yet they may leave us wanting to tweak things to optimize control, especially on less-than-ideal systems. Better identification of system behavior and improved modeling continue to narrow that gap as the technology matures.
Contributing writer Peter Nachtwey is president of Delta Computer Systems of Vancouver, WA.
There are many books about control system theory that can provide additional insight into mathematical modeling and control algorithms. A personal favorite is Digital Control System Analysis and Design by Charles L. Phillips and H. Troy Nagle. For more on the Tuning Wizard, visit the Delta site.