Process control and optimization (PCO) is the discipline of adjusting a process to maintain or optimize a specified set of parameters without violating process constraints. The PCO market is being driven by rising demand for energy-efficient production processes, safety and security concerns, and the development of IoT systems that can reliably predict process deviations. Fundamentally, there are three areas that can be adjusted to improve performance:
- Equipment optimization: The first step is to verify that the existing equipment is being used to its fullest advantage by examining operating data to identify equipment bottlenecks.
- Operating procedures: Operating procedures may vary widely from person to person or from shift to shift. Automation of the plant can help significantly, but automation is of no help if operators take control and run the plant in manual mode.
- Control optimization: In a typical processing plant, such as a chemical plant or oil refinery, there are hundreds or even thousands of control loops. Each control loop is responsible for controlling one part of the process, such as maintaining a temperature, level, or flow. If a control loop is not properly designed and tuned, the process runs below its optimum: it is more expensive to operate, and equipment wears out prematurely. For each control loop to run optimally, it is important to identify sensor, valve, and tuning problems. It has been well documented that over 35% of control loops typically have problems. The process of continuously monitoring and optimizing the entire plant is sometimes called performance supervision.
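The control loops described above are commonly implemented as PID (proportional-integral-derivative) controllers. The sketch below shows a minimal discrete PID loop driving a toy first-order process (e.g. a heater raising a vessel toward a temperature setpoint); the gains and process constants are illustrative assumptions, not tuned values for any real plant.

```python
# Minimal discrete PID loop against a simulated first-order process.
# All gains (kp, ki, kd) and process constants are illustrative only.

def simulate_pid(setpoint=50.0, kp=2.0, ki=0.5, kd=0.1, dt=0.1, steps=500):
    """Run a PID loop on a toy process and return the history of
    process values (e.g. temperature in degrees C)."""
    pv = 20.0                       # process value, starts at ambient
    integral = 0.0
    prev_error = setpoint - pv      # avoids a derivative kick on step 1
    history = []
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        prev_error = error
        # Toy first-order process: controller output heats the vessel,
        # heat loss pulls it back toward ambient (20 degrees).
        pv += (output - 0.5 * (pv - 20.0)) * dt
        history.append(pv)
    return history

history = simulate_pid()
print(f"final value: {history[-1]:.2f}")  # settles near the 50.0 setpoint
```

A poorly tuned loop (for example, an aggressive `kp` with no damping) would oscillate or overshoot instead of settling, which is exactly the kind of behavior performance supervision is meant to detect.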
The challenge for EWZ was to design processes that could be traced easily and that met market standards for handling exchange processes. Specifically, this meant automating bulk data exchange and the associated exchange processes and implementing them according to the SDAT and MC-CH industry documents, integrating a very heterogeneous IT landscape, and relying on a tried-and-tested standard product rather than in-house development.
For over 30 years Elekta has been responsible for introducing many market-leading, critical technological advances in radiation oncology. Elekta's Linear Accelerators support a range of pre-configured and optional delivery techniques, providing the physician with the flexibility to tailor the treatment to suit the needs of each individual patient.

Remote access was unavailable on complex devices, and Elekta was unable to understand machine performance based on real data, which meant the company could not respond immediately to customer problems. This, in turn, meant that Elekta was unable to detect and solve problems on the spot, restricting the company to a reactive service model. A typical patient treatment schedule could require daily treatment for a period of 4-6 weeks, making machine uptime extremely important. Given the complexity of the device and the amount of data that would need to be collected, it was obvious that an automated solution was required. The infrastructure would need the ability to manage the data volume from the devices and also provide a platform that would scale in both size and complexity.

"When a customer buys a service agreement, what they are buying is system uptime. They are buying the ability to consistently deliver treatment… patient to patient," says Martin Gilday, Vice President of Services at Elekta. "Service is not simply a single engineer with a trunk load of parts, but a full portfolio of technical expertise delivered via multiple channels... it's a big change in the way that you think about service."

Elekta's success has been built upon innovative engineering, which led to a culture of in-house work. As the company matured and grew, it realized that this was not necessarily the most effective way of working. Even so, given the complexity of the product itself, internal engineering expertise would be crucial to solving the company's service challenges.
While the product itself was unique, the methodologies and best practices for accessing and servicing the product were based on industry standards. When customers have service issues that need to be resolved, machine data must be collected and interpreted, and engineers need deep skills and understanding. An integrated solution that brought the data, knowledge, and skills together in a highly connected environment was required. "Customers see connectivity as providing efficiency gains," says Bruce Fullerton, Vice President of Service and Support. "Their key performance indicators are based around how many customers they can treat in a given space of time."
A theme park was always looking for ways to enhance the customer experience, and minimizing wait times at attractions and foodservice outlets was one of the key concerns. Tracking and analyzing visitor behavior could potentially let management address it. The first step would be to have the analytics software look at things like advance ticket sales, weather forecasts, and previous attendance records to help management make staffing decisions several days in advance.

Collecting and analyzing data could also improve day-to-day operations. Equipped with the right kinds of information, the control center could make informed decisions about where to place staffing resources. If the line at a water ride was getting too long, management could add staff and launch more boats. If the line at a foodservice station was getting too long, management could send costumed characters to hand out menus and entertain the guests, thus making the wait less painful. To track where visitors were and what they were doing, park management would need to gather data from devices like surveillance cameras, foodservice cash registers, and ticketing machines. But the plan would also require placing some of these devices in locations where trenching and installing new cable connections would be both disruptive and expensive.
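The control-center responses described above amount to a simple threshold rule: when a measured queue exceeds a limit for its outlet type, dispatch the matching action. The sketch below illustrates that logic; the outlet types, thresholds, and action strings are hypothetical placeholders, not values from the case study.

```python
# Illustrative queue-response rule for a park control center.
# Thresholds (people waiting) and actions are assumed for the sketch.

QUEUE_THRESHOLDS = {"ride": 40, "foodservice": 15}

def staffing_action(outlet_type: str, queue_length: int) -> str:
    """Suggest a response when a queue exceeds its threshold."""
    limit = QUEUE_THRESHOLDS.get(outlet_type)
    if limit is None or queue_length <= limit:
        return "no action"
    if outlet_type == "ride":
        return "add staff and launch more boats"
    # Foodservice: make the wait less painful rather than shorter.
    return "send costumed characters to entertain and hand out menus"

print(staffing_action("ride", 55))         # over the ride threshold
print(staffing_action("foodservice", 10))  # under threshold -> no action
```

In practice the queue lengths would come from the surveillance cameras and point-of-sale devices mentioned above, and the thresholds would be tuned from the park's historical attendance data.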