Controller performance may be measured by simply
calculating the measurement-setpoint variance.
Simple, but of limited use, as this measure depends
on the underlying disturbances of the process -
the larger the disturbances, the larger the variance,
even if the controller model fidelity and tuning
remain the same. And it's scale dependent, which
makes it difficult to scan a large number of
controllers for unacceptable values.
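As a small illustration (NumPy, with hypothetical loop data), the raw measure is just the variance of the control error, and its value carries the measurement's engineering units squared - so a flow loop in t/h and a pressure loop in kPa can't be ranked against each other directly:

```python
import numpy as np

def error_variance(pv, sp):
    """Raw measurement-setpoint variance. The result is in (engineering
    units)^2, so values from loops in different units are not comparable,
    and larger disturbances alone will inflate it."""
    return np.var(np.asarray(pv, dtype=float) - np.asarray(sp, dtype=float))
```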
Most controller performance software packages get
around these limitations by employing a performance
index, which compares the current variance
to the variance that would have been obtained
had an "optimal" controller been applied to the
process over the same time range.
Advantages? The disturbance effect is
theoretically removed, as both the actual
controller and the optimal controller are subject
to the same disturbances. And because a ratio is
taken, the number is naturally scaled to be
between zero and one.
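In code, the index is nothing more than a ratio of two variances (a minimal sketch, assuming the benchmark variance has already been estimated over the same data window):

```python
def performance_index(actual_variance, benchmark_variance):
    """Ratio of the benchmark ("optimal") variance to the observed variance.
    Near 1.0: the loop is already close to the benchmark.
    Near 0.0: most of the observed variance could be removed."""
    return benchmark_variance / actual_variance
```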
But what is this "optimal" controller? Is it an
adequate representation of what could be applied
in practice? Most software packages use a minimum
variance controller as the optimal
controller (this is the basis for the Harris
Index). This may not be a reasonable standard - a
minimum variance controller is essentially a PID
controller with deadtime compensation. But most
controllers in a plant don't have deadtime
compensation, and usually don't contain derivative
action (as it can be troublesome in practice).
So holding a PI controller to a minimum variance
benchmark is often an unfair comparison.
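For reference, the minimum variance benchmark behind the Harris Index can be estimated from routine closed-loop data alone, given the loop deadtime in samples. A minimal NumPy sketch of the standard time-series approach (function name and AR order are illustrative choices) looks like this:

```python
import numpy as np

def harris_index(error, deadtime_samples, ar_order=20):
    """Harris-style index: estimated minimum-variance benchmark divided by
    the actual error variance (1.0 = minimum-variance performance)."""
    e = np.asarray(error, dtype=float)
    e = e - e.mean()                        # work with deviations from the mean
    p = ar_order

    # Fit an AR(p) model e[t] = a1*e[t-1] + ... + ap*e[t-p] + eps[t]
    # to the closed-loop error by least squares.
    X = np.column_stack([e[p - k:-k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, e[p:], rcond=None)
    sigma2_eps = np.var(e[p:] - X @ a)      # innovation (shock) variance

    # Only the first `deadtime_samples` impulse-response coefficients are
    # feedback-invariant; they set the minimum achievable variance.
    psi = np.zeros(deadtime_samples)
    psi[0] = 1.0
    for j in range(1, deadtime_samples):
        psi[j] = sum(a[k] * psi[j - 1 - k] for k in range(min(j, p)))

    sigma2_mv = sigma2_eps * np.sum(psi ** 2)
    return sigma2_mv / np.var(e)
```

The point of the construction is that no feedback controller can influence the error within one deadtime of a disturbance entering, so that part of the fitted model fixes a floor on the variance without any plant testing.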
Rather than using a minimum variance controller,
the Control Arts ControlMonitor package determines
what the variance would be if a well-tuned PI
controller had been applied to the process
over the same time frame. And because a well-tuned
PI controller is achievable in practice, this is a
much more realistic, stable, and accurate standard
for a controller performance metric.
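One simplified way to form such a PI benchmark - not necessarily ControlMonitor's method, and assuming a first-order-plus-deadtime process model and a reconstructed disturbance sequence have already been identified from the operating data - is to simulate the loop over a grid of PI tunings and keep the smallest error variance:

```python
import numpy as np
from itertools import product

def pi_benchmark_variance(disturbance, gain, tau, deadtime, dt=1.0,
                          kc_grid=np.linspace(0.2, 2.0, 10),
                          ti_grid=np.linspace(2.0, 60.0, 10)):
    """Smallest error variance a PI controller achieves on a first-order-
    plus-deadtime process driven by the supplied disturbance sequence,
    found by a coarse grid search over gain (kc) and integral time (ti)."""
    d = int(round(deadtime / dt))           # deadtime in whole samples
    a = np.exp(-dt / tau)                   # discrete first-order pole
    b = gain * (1.0 - a)
    best = np.inf
    for kc, ti in product(kc_grid, ti_grid):
        y, integral = 0.0, 0.0
        u_hist = [0.0] * d                  # delay line of past controller moves
        errs = []
        for w in disturbance:
            e = -y                          # setpoint taken as zero
            errs.append(e)
            integral += e * dt / ti         # PI controller
            u = kc * (e + integral)
            u_hist.append(u)
            y = a * y + b * u_hist.pop(0) + w   # process update plus disturbance
        best = min(best, np.var(errs))
    return best
```

In practice a proper optimizer would replace the coarse grid, and the model and disturbance estimates would come from the same stretch of data used to compute the actual variance; the resulting benchmark variance then goes into the numerator of the performance index above.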