DN Staff

June 7, 2004

Decentralized Control

The history of centralized control in manufacturing and similar environments is, as with most technological developments, a double-edged sword. Centralization often means that one person or a small group can access an entire operation (or production line) from a single point. But such complete reliance on one or a few pieces of hardware amounts to putting all your eggs in one basket. In a pure centralized design without redundancy, if the server goes down because of a power outage, system failure, or any other cause, satellite processes go down as well. In addition, centralization may not permit a quick response to changing conditions or to equipment differences between one geographic location and another. As a result, many manufacturers have moved in the other direction, distributing control "to the provinces."

As microprocessors have become more powerful and less expensive in recent years, companies have embedded them in remote I/O devices, pushbuttons, sensors, and other components. These "smart parts" can execute control functions on a communications network, close to the processes that they control. Recent advances, such as improved human-machine interfaces (HMIs), allow engineers to carry control-room-type displays directly into the field in the form of handheld devices such as personal digital assistants (PDAs).

Look at the Application

Yet questions remain: How much decision-making capability belongs at a central point and how much belongs on the manufacturing floor? Where and when do you still need centralized control? Can distributed intelligence benefit from any functions that must remain centralized? In the resulting continuum of solutions, each approach has advantages and drawbacks, and provides the best answer under certain circumstances. Where do you draw the lines?

According to Sam Herb, an automation platform manager for Invensys Foxboro, manufacturers must consider the number of required control loops, process complexity, the need for advanced process control and optimization, the cost of downtime and the resulting need for redundancy, and the need for security. He contends that large, elaborate applications involving sophisticated fail-safe designs, fault tolerance, or advanced control strategies benefit from centralized control. Centralized systems also facilitate process validation and documentation in response to regulatory demands, which makes them well suited to applications such as pharmaceutical manufacturing and pollution control.

At the same time, centralized control requires enormous communication bandwidth to transmit process parameters and other data to the control point, and it suffers an inevitable lag between a data point's generation and the system's response. Tracy Lenz, senior product support engineer for Wago, notes that engineers must calculate anticipated data traffic and design the fieldbus to accommodate it. A decentralized design, by contrast, permits monitoring inputs and encoders locally. A decentralized controller can react much more quickly to high-speed inputs than can its centralized counterpart, communicating with the main processor only to report that a routine is complete.
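That division of labor is easy to picture in code. Below is a minimal Python sketch; the device names and values are invented stand-ins, not a real fieldbus API. The fast loop samples and reacts entirely locally, and only a one-word completion report ever crosses the network.

```python
# A minimal sketch of local reaction to a high-speed input; the device
# names and values are invented stand-ins, not a real fieldbus API.

import random

def read_encoder():
    # Stand-in for a fast local input; a real device would be read over
    # a local backplane, not polled across the network.
    return random.randint(0, 1023)

def run_positioning_routine(target, tolerance=2):
    """React to the encoder locally; no network round-trip per sample."""
    position = read_encoder()
    while abs(position - target) > tolerance:
        # The local control action (driving the axis toward the target)
        # would go here; this line just simulates convergence.
        position = (position + target) // 2
    return True   # one "routine complete" report for the main processor

if __name__ == "__main__":
    done = run_positioning_routine(target=512)
    print("report to main processor:", "DONE" if done else "FAULT")
```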

Consider, for example, the power-on self-test of a typical control system. If the central facility runs the test directly, then test instructions, measurements, and responses must travel across the network, which can bog down the test and may introduce bus contention and other errors. A built-in self-test accomplishes the same task much more quickly and easily. A single-bit instruction from the central location triggers it (or it can be triggered locally by certain boundary conditions), and the only necessary response is a single pass/fail bit. To be more useful, of course, the self-test could return one of a number of error codes on failure to permit troubleshooting, but even those codes represent only a small fraction of the data required in the centralized case.
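A rough sketch of that trigger-and-reply pattern follows, assuming hypothetical check routines; none of this reflects a specific vendor's API.

```python
# Hedged sketch of the built-in self-test idea above: the central system
# sends a one-bit trigger, the device runs its own checks, and the reply
# is a single pass/fail value, or a small error code on failure. The
# check routines are hypothetical placeholders, not a vendor API.

from enum import IntEnum

class TestResult(IntEnum):
    PASS = 0
    ADC_FAULT = 1
    MEMORY_FAULT = 2
    ACTUATOR_FAULT = 3

def check_adc():      return True   # placeholders for real hardware checks
def check_memory():   return True
def check_actuator(): return True

def built_in_self_test():
    """Run all local checks; return PASS or the first failure's code."""
    checks = [
        (check_adc, TestResult.ADC_FAULT),
        (check_memory, TestResult.MEMORY_FAULT),
        (check_actuator, TestResult.ACTUATOR_FAULT),
    ]
    for check, fault_code in checks:
        if not check():
            return fault_code   # a few bits of reply, still tiny traffic
    return TestResult.PASS      # the single pass/fail answer

if __name__ == "__main__":
    print("BIST result:", built_in_self_test().name)
```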

Interruption Intolerance

Lenz suggests that decentralization also permits a programmed response to a main processor failure or a failure of network communication. Some processes cannot tolerate interruptions. Depending on circumstances, a decentralized architecture can continue a process unabated, initiate a sequential shutdown, or execute a complete but controlled shutdown.
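One way to picture Lenz's point is a local supervisor that watches for the master's heartbeat and falls back to a pre-programmed policy when it stops. In the sketch below, the policy names, timeout, and stage names are illustrative assumptions, not a real product's configuration.

```python
# A sketch of the failure responses described above: a local supervisor
# notices that the main processor's heartbeat has stopped and applies a
# pre-programmed policy. Policy names, timeout, and stage names are
# illustrative assumptions, not a real product's configuration.

import time

POLICY = "sequential_shutdown"   # or "continue" or "controlled_shutdown"
HEARTBEAT_TIMEOUT_S = 2.0        # seconds of silence before acting

def stop_stage(name):
    print(f"stopping {name}")

def bring_process_to_safe_state():
    print("driving all outputs to safe values")

def on_master_lost():
    if POLICY == "continue":
        pass                              # keep the process running unabated
    elif POLICY == "sequential_shutdown":
        for stage in ("feed", "mixer", "heater"):
            stop_stage(stage)             # stop units one at a time, in order
    elif POLICY == "controlled_shutdown":
        bring_process_to_safe_state()     # complete but orderly stop

def supervise(last_heartbeat):
    """Called periodically; reacts if the master has gone silent."""
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
        on_master_lost()

if __name__ == "__main__":
    supervise(last_heartbeat=time.monotonic() - 5.0)  # simulate a lost master
```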

Manufacturers can apply decentralization to older systems as well. Lenz describes a multiple-batching arrangement where the main HMI downloads instructions to the decentralized controller, which controls local product batches while reporting data to the network. The main controller serves only as a global network interface.
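A minimal sketch of that arrangement, with an invented recipe format: the download happens once, the local controller runs its batches on its own, and only summary reports travel back over the network.

```python
# Sketch of the multiple-batching arrangement: the recipe arrives once
# from the main HMI, the local controller runs the batches on its own,
# and only summary reports go back out. The recipe fields are invented.

def download_recipe():
    # In the arrangement Lenz describes, this would arrive over the
    # network from the main HMI; a literal stands in for it here.
    return {"product": "A-101", "batches": 3}

def run_batch(recipe, n):
    # Local, self-contained batch control; only the summary goes upstream.
    return {"batch": n, "product": recipe["product"], "status": "complete"}

if __name__ == "__main__":
    recipe = download_recipe()
    for n in range(1, recipe["batches"] + 1):
        print("report to network:", run_batch(recipe, n))
    # The main controller needs to act only as a global network interface.
```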

Early control systems exhibited primarily master/slave relationships, recalls Gary Marrs, field application engineer for Lantronix Inc. The mainframe, programmable logic controller (PLC), or other controller executed programs and managed all I/O points and all communication with remote nodes. Marrs points to three major drawbacks to this approach:

  • Remote communications were inefficient.

  • The centralized system was fault-intolerant; a single failure could halt the entire operation.

  • Some central designs were expensive to maintain.

Distributed control addresses these issues. Peer-to-peer (multi-master) architectures permit locating controllers and their I/O points close to the devices being controlled. The system can process real-time control loops locally without burdening the data hub, communicating directly with other controllers to send or receive data. If one controller fails, the rest of the system can still function.

According to Marrs, the PC revolution has helped expand distributed control. Before PCs, most PLC and distributed-control system (DCS) vendors chose proprietary protocols to tie users to their products. PC open architectures have helped promote standardization and ease of use while lowering costs. The proliferation of true distributed control is still limited by the lack of a single network standard, but the situation is improving. Standards like OLE (Microsoft's Object Linking and Embedding) offer powerful tools to encourage interoperability. Web servers allow data access over the Internet from any computer with a browser. XML and the Simple Object Access Protocol (SOAP) permit sharing data across a distributed environment.
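As a small illustration of the XML half of that claim, here is a hedged Python sketch that publishes a few process values as an XML document any client on the network could parse; the tag names, point names, and values are invented.

```python
# A small illustration of sharing process data as XML, using Python's
# standard library; the tag names, point names, and values are invented.

import xml.etree.ElementTree as ET

def process_data_to_xml(values):
    """Build an XML document from {name: (value, units)} pairs."""
    root = ET.Element("processData")
    for name, (value, units) in values.items():
        point = ET.SubElement(root, "point", name=name, units=units)
        point.text = str(value)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    doc = process_data_to_xml({
        "tankLevel": (74.2, "percent"),
        "flowRate": (12.6, "L/min"),
    })
    print(doc)  # any XML-aware client on the network can parse this
```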


(Figure: Control can be distributed in enclosures on the plant floor.)

'To Ethernet or Not to Ethernet'

To address the issue of non-standard industrial protocols, many manufacturers are turning to architectures built around Ethernet-especially as newer versions of the standard permit higher communication speeds. Wago's Lenz points out that most industrial buildings already have Ethernet capability. Connecting decentralized controllers via Ethernet-based communications eliminates customized fieldbus wiring. Connections can spread out on a large scale within a single building or to other buildings, yet still permit communication with the main controller. If the Ethernet carries too heavy a load and bogs down, the decentralized controllers still function.

Ethernet adds other dimensions to the equation as well. High-powered radio systems can carry communications from the main controller to local devices, as with a municipality's pump station or water works. Implementations of IEEE standard 802.11 permit quick and easy wireless interfacing, and PDAs allow engineers to hook up to systems directly, even in the field. HMI software can be loaded onto the PDA for control-system access. When automating control of a building, for example, a supervisor can check the status of lights on any floor or change temperature and other environmental conditions from these common palm-sized tools.

Ethernet is a hot topic, one addressed in previous Design News articles ("At SPS, a New Round of Ethernet Wars," DN 01.12.04, p. 44, http://rbi.ims.ca/3850-560), but it shouldn't be considered the best or only solution for industrial networking. Doug McEldowney, a manager in the NetLinx division of Rockwell Automation, says that "tweaking" Ethernet communications may not provide the best results.

The architectures of industrial networks such as DeviceNet, Interbus, and Profibus specifically address factory applications and may therefore represent a better choice. McEldowney also suggests that managers' perception of Ethernet as less expensive than other solutions may spring from a misunderstanding of what it takes to make the system work. Ethernet requires active components (switches), and although a switch-based architecture is not difficult to set up, it differs from what most managers encounter when implementing Ethernet in an office environment. Many network architectures designed specifically for the factory floor, by contrast, permit simpler installation with considerable flexibility. Likewise, most Ethernet installations use a tree/star topology for point-to-point communication; although adequate in office applications, this approach may slow operations on the factory floor. The bus topology of the industrial alternatives can be more efficient.

He notes that Ethernet, like the industrial networks, encompasses a variety of protocols. By the same token, he believes that advances in Ethernet will expand its capabilities to accommodate additional applications. For example, the IEEE 1588 precision time synchronization standard will permit Ethernet to synchronize distant nodes precisely. Such synchronization, in turn, will allow applications to be even more widely distributed, yet maintain tight integration across the network.
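The arithmetic at the heart of IEEE 1588 is simple to sketch. A master and a slave exchange four timestamps (t1: master sends a sync message, t2: slave receives it, t3: slave sends a delay request, t4: master receives it), from which the slave computes its clock offset and the path delay. The sample numbers below are invented.

```python
# The basic IEEE 1588 arithmetic: four timestamps from one exchange
# yield the slave's clock offset and the network path delay. The
# timestamps below are invented sample values.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs. the master
    delay = ((t2 - t1) + (t4 - t3)) / 2    # estimated one-way path delay
    return offset, delay

if __name__ == "__main__":
    offset, delay = ptp_offset_and_delay(
        t1=100.000,  # master sends sync
        t2=100.015,  # slave receives it (by the slave's clock)
        t3=100.020,  # slave sends delay request (by the slave's clock)
        t4=100.030,  # master receives it
    )
    print(f"offset: {offset * 1000:.1f} ms, delay: {delay * 1000:.1f} ms")
```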

Web Resources

Check out the links below for more info.

Rockwell Automation:
http://rbi.ims.ca/3850-563
