IFAC blog page


Autonomous road vehicles: a modern challenge for robust control

The idea of autonomous cars has been in the air since as early as the 1920s, but the first prototypes of truly autonomous (albeit limited in performance) road vehicles appeared only in the 1980s. Since then, several companies, e.g., Mercedes, Nissan and Tesla, as well as many universities and research centres all over the world, have pursued the dream of self-driving cars. More recently, a few ad-hoc competitions and the increasing interest of some big tech companies have rapidly accelerated research in this area and helped the development of advanced sensors and algorithms.
As an example, Google maintains (and publishes) monthly reports covering the experimental tests and the most recent advances of its driverless car.

The reasons why such a technology is not yet on the market are many and varied. From a scientific point of view, autonomous road vehicles pose two major challenges:

  • a communication challenge: how to interact with the surrounding environment, taking all safety, technological and legal constraints into account?
  • a vehicle dynamics challenge: the car must be able to follow a prescribed trajectory in any road condition.

On the one hand, the interaction with the environment mainly concerns sensing, self-adaptation to time-varying conditions and information exchange with other vehicles to optimize some utility functions (the so-called “internet of vehicles” – IoV). These issues undoubtedly represent novel problems for the scientific community and have been extensively treated in the past few years. On the other hand, control of the vehicle dynamics may seem a less innovative challenge, since electronic devices like ESP or ABS are already ubiquitous in cars.

Within this framework, robust control, namely the science of designing feedback controllers that explicitly take a measure of the uncertainty into account, has played a central role. However, a deeper look at the problem makes it evident that the main vehicle dynamics issues for autonomous cars are more complex than those of human-driven cars, and the standard approaches may no longer be effective.

Actually, path planning and tracking is a widely studied topic in robotics, aerospace and other mechatronics applications, but it is certainly novel for road vehicles. In existing cars, even the most cutting-edge devices are dedicated to adjusting vehicle speed or acceleration in order to increase safety and performance, whereas the trajectory tracking task is left entirely to the driver (except for a few examples, like automatic parking systems).

Nonetheless, most vehicle dynamics problems arise from the fact that the highly nonlinear road-tire characteristic is unknown and unmeasurable with existing (cost-affordable) sensors. Keeping the driver inside an outer (path tracking) control loop therefore represents a great advantage, in that she/he can manually handle the vehicle in critical conditions (at least to a certain extent) and make the overall system robust to road and tire changes. This is obviously not the case for autonomous vehicles.

Hence, standard robust control for braking, traction or steering dynamics could turn out to be “not robust enough” for path tracking in autonomous vehicles, because one can no longer rely upon the intrinsic level of robustness provided by the driver feedback loop. In city and highway driving, this may not be a problem, because the sideslip angles characterizing the majority of manoeuvres are low and easily controllable [8]. However, in the remaining cases (e.g., during sudden manoeuvres for obstacle avoidance), a good robust controller for path tracking, exploiting the most recent developments in the field, could be decisive in saving human lives in road accidents.
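To make the concern concrete, here is a minimal, purely illustrative sketch (not taken from any cited work): a linear path-tracking loop in which the effectiveness of the steering action is scaled by an uncertain “tire gain” k, e.g. dry asphalt versus snow. All models and numbers are hypothetical, but sweeping k shows how a controller tuned for nominal conditions degrades on a slippery surface, which is exactly the robustness margin the driver normally provides:

```python
# Illustrative sketch only: lateral path tracking with an uncertain tire gain k.
import numpy as np

def track(k, kp=0.5, kpsi=1.5, v=15.0, L=2.7, dt=0.01, T=8.0):
    """Simulate lateral error e and heading error psi for tire gain k."""
    e, psi = 2.0, 0.0                      # start 2 m off the desired path
    errors = []
    for _ in range(int(T / dt)):
        delta = -(kp * e + kpsi * psi)     # simple state-feedback steering law
        delta = np.clip(delta, -0.5, 0.5)  # actuator saturation
        e += v * np.sin(psi) * dt          # lateral error kinematics
        psi += (v / L) * k * delta * dt    # uncertain steering effectiveness
        errors.append(abs(e))
    return max(errors[len(errors) // 2:])  # residual error after the transient

for k in (1.0, 0.6, 0.3):                  # dry asphalt ... snow (made-up values)
    print(f"tire gain {k:.1f}: late-phase |e| <= {track(k):.3f} m")
```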

It can be concluded that a few important questions still await answers from the robust control community, e.g.:

  • “can we provide a sufficient level of robustness with respect to all road and tire conditions, without sacrificing too much performance?”
  • “are we able to replicate the response of an expert driver to a sudden change of external conditions?”
  • “how can we best exploit the information coming from the additional sensors usually not available on-board (e.g., cameras, sonars…)?”

but also many others.

IEEE experts estimate that up to 75% of all vehicles will be autonomous by 2040. This scenario will be accompanied by significant savings in terms of human lives, time and energy. As control scientists and engineers, it really seems we can play a leading role in this important social and economic leap.

Download the article
A PDF document with references can be downloaded here (150Kb).

 

Article provided by
Simone Formentin, PhD, Assistant Professor
IFAC Technical Committee 2.5: Robust Control 

Controlling an autonomous refuse handling system

With the potential to increase both safety and quality in our daily use of and interaction with vehicles, autonomous vehicles are currently a major trend in the automotive industry. The focus up to now has been on autonomous driving of passenger cars, such as platooning and queueing assistance. There have also been initial tests with construction equipment that performs autonomous asphalt spreading and gravel loading. A further step to extend and improve the service we experience today might be to combine vehicles with peripheral support devices, joining autonomous driving with autonomous loading and unloading of goods. In the future, an autonomous electrified distribution truck might, for example, work together with support devices to enable autonomous loading and unloading of goods to and from our doorstep just hours after we order a pick-up or delivery service online.

The Robot based Autonomous Refuse handling (ROAR) project is a first attempt to demonstrate such an autonomous combination. An operator-driven refuse collection truck is equipped with autonomous support devices that fetch, empty, and put back refuse bins in a predefined area.

The physical demonstrator in the ROAR project consists of one truck and four support devices. When the truck has stopped in an area, a camera-equipped quadcopter is launched from the truck roof to search for bins and store their positions in the system. As bin positions become available, an autonomously moving robot is sent out from the truck to fetch the first bin. The system’s path planner calculates the path to the bin as an array of waypoints, based on a pre-existing map of the area. While following the waypoints, the robot is intelligent enough to avoid obstacles that are not on the map. To accomplish this detection, the robot is equipped with a LiDAR and ultrasonic sensors.
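As a rough illustration of the waypoint idea (the project’s actual planning algorithm is not described here), a grid-based planner can be sketched in a few lines; the occupancy map, start and goal below are invented for the example:

```python
# Hypothetical sketch: A* on a 4-connected occupancy grid, returning waypoints.
import heapq

def plan(grid, start, goal):
    """Return a list of waypoints from start to goal (assumed reachable)."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                new_cost = cost[cur] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan heuristic
                    heapq.heappush(frontier, (new_cost + h, nxt))
                    came_from[nxt] = cur
    path, node = [], goal
    while node is not None:            # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

occupancy = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]             # 1 = obstacle on the pre-existing map
print(plan(occupancy, start=(0, 0), goal=(2, 0)))
```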

After reaching the last waypoint, the robot changes from navigation to pick-up mode. By exploiting the LiDAR and a front-facing camera, the exact position and orientation of the bin can be detected. The robot aligns itself so that the bin can be picked up.
After the pick-up, the planner provides the robot with a new path back to the truck. After the last waypoint, the robot aligns with the lift at the rear of the truck. The lift is set at a pre-defined angle, so that the robot can move up to the lift and hook the bin onto it. While the bin is being emptied, the lift system monitors the area around the lift with a camera to ensure that no person is in the way of the lift. If someone is detected, the lift movement is paused until the area is clear.

An emptied bin is picked up by the robot and returned to its initial position, once again based on a path from the planner. When reaching the initial bin position, the bin is put down. The robot can thereafter move to the next bin to be emptied, and the emptying procedure is repeated.

When there are no more bins to empty, the robot moves back to the truck and aligns itself with the lift. Like a bin, the robot is hooked onto the lift, and the overall procedure is completed. The truck can then be started and driven to the next area.
The coordination of the truck and the support devices is based on a discrete event system model. This model abstracts the overall emptying procedure into a finite number of states and transitions. The states capture distinguishable aspects of the system, such as the positions of the devices and the empty/full status of the bins. The transitions model the start and completion of the various operations that the devices can perform. All steps in the above description of the emptying procedure can be modeled by such operations.
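A toy rendering of this idea, much simplified and with invented state names, might look as follows; note how each operation carries a guard saying when it may start, anticipating the coordination view discussed below:

```python
# Hypothetical sketch: operations as guarded transitions over a shared state.
state = {"robot": "at_truck", "path": None, "bin": "full"}

operations = {
    # name: (guard over the state, effect on the state)
    "plan_path": (lambda s: s["path"] is None,
                  lambda s: s.update(path="to_bin")),
    "navigate":  (lambda s: s["path"] == "to_bin" and s["robot"] == "at_truck",
                  lambda s: s.update(robot="at_bin", path=None)),
    "pick_up":   (lambda s: s["robot"] == "at_bin" and s["bin"] == "full",
                  lambda s: s.update(bin="on_robot")),
}

def enabled(s):
    """Names of all operations whose guard holds in state s."""
    return [name for name, (guard, _) in operations.items() if guard(s)]

def fire(name, s):
    """Execute one operation, checking that it is actually enabled."""
    guard, effect = operations[name]
    assert guard(s), f"operation {name} not enabled in state {s}"
    effect(s)

print(enabled(state))      # ['plan_path']
fire("plan_path", state)
fire("navigate", state)
fire("pick_up", state)
print(state)               # robot at bin with the bin on board
```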

The investment in the discrete event model brings a number of attractive properties. During the development phase, the model can be derived using formal methods. Verification as well as synthesis (iterative verification) is then employed to refine an initial model until it satisfies the specifications on the system.

Moreover, the development of the actual execution of an operation can be separated from the coordination of the operation. As an example, consider the operation modeling that the robot navigates along a path. From an execution point of view, the operation must assure that, given a path, the robot eventually ends up at the last waypoint without colliding with any obstacle. From a coordination point of view, the operation must only be enabled when there is a path present in the system and the robot is positioned close to the initial waypoint.

The model contains two types of operations: operations that model the nominal behavior, and operations that model foreseen non-nominal behavior. The recovery operations in the second group can, for example, describe what the system can do when the robot cannot find a bin at the end of a path, or how to re-hook an incorrectly placed bin on the truck lift.

The discrete event model can also be exploited to handle more severe recovery situations after unforeseen errors. As part of the development, the restart states of the system are calculated from the model. Upon recovery, to simplify the resynchronization between the control system and the physical system, the operator sets the active state of the control system to such a restart state and modifies the physical system accordingly. By recovering from a restart state, it is guaranteed that the system can eventually finish an ongoing emptying procedure.

The truck and the support devices are connected using the Robot Operating System (ROS). ROS is an operating-system-like robotics middleware that, among other things, enables message-passing between components defined in the system. Two types of messages are used in the ROAR project. The first type is messages related to the starting and completion of operations. An operation start message is triggered from a user interface and is translated into a method call in the support device executing that operation. Under nominal conditions, this support device will eventually respond with a message saying that the operation has been completed. Both messages update the current active state of the control system.

The second type is messages related to transferring data. Data transfer can occur both internally, within the programs connected to a support device, and externally, between support devices. An example of external data transfer is a path that is created in the path planner and then transferred to the robot.
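A hypothetical minimal rospy node illustrating the two message types might look as follows; the topic names and payloads are invented for the example, and only standard rospy calls are used:

```python
# Illustrative ROS (rospy) sketch; topics and message contents are assumptions.
import rospy
from std_msgs.msg import String

rospy.init_node("coordinator")

# Type 1: operation start/completion messages.
start_pub = rospy.Publisher("operation_start", String, queue_size=10)

def on_completed(msg):
    rospy.loginfo("operation completed: %s", msg.data)
    # ...update the active state of the discrete event model here...

rospy.Subscriber("operation_completed", String, on_completed)

# Type 2: data transfer, e.g. a path sent from the planner to the robot.
path_pub = rospy.Publisher("robot_path", String, queue_size=10)

rospy.sleep(1.0)  # let connections establish
start_pub.publish(String(data="navigate_to_bin"))
path_pub.publish(String(data="[(0,0), (3,2), (5,4)]"))
rospy.spin()
```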

During execution, the discrete event model is hosted on a web server. Interaction with the model is facilitated by the server’s API. Operator interaction is accomplished through a web-based user interface. With a web-based interface, an operator can access the model using any device connected to the system’s network, for example a computer in the truck cabin or a touchpad strapped to the operator’s forearm.

At the other end, ROS is also connected to the API. As pointed out before, this connection enables operations started by the operator through the user interface to be translated into method calls in the appropriate support device. Completion of the operation execution is translated into a POST request to the API. This updates the discrete event model to capture that the operation has been completed.

The physical demonstrator in the ROAR project is limited to a single robot for the bin handling. A next step could be to include more bin-handling robots. For the specific application of refuse handling, more bin-handling robots could enable higher efficiency in the emptying procedure. More robots might also allow the noisy truck to be parked further away from the bins, and thus cause less disturbance where people live. Today this is avoided, because a distant truck would force the operator to walk too far.

From a more general point of view, coordination of multiple autonomous devices is an open research question. The two extremes are that the coordination is either performed from one central unit to which all devices are connected, or that the devices are intelligent enough to solve the coordination among themselves in a distributed manner. The two major coordination challenges are the distribution of tasks between the devices and the distribution of the space in which the devices can operate. The overall goal is to accomplish all tasks in some optimal way while assuring that no devices are physically blocked in the operating area.

The productization of this overall control and coordination between one truck and several autonomous support devices is an interesting challenge. Imagine a future scenario where a haulage contractor company orders a new system. The truck is perhaps ordered from company A, with heavy-duty equipment from company B. The equipment is complemented with support devices from companies C and D. To operate properly, the system should also use services from the cloud, provided by some companies E and F. To further add to the equation, it is likely that operators are also in the loop to cope with unforeseen situations, complex item handling and parts of the decision making.

All in all, this text has only cracked open the door to what will come after the autonomous driving of passenger cars that we see today. There are still many mountains to climb and standards to agree upon before areas other than “just” the driving become automated. The outcome of the ROAR project is thus only a small step on a long journey ahead.

The ROAR project is initiated and led by Volvo Group. Chalmers University of Technology, Mälardalen University, and Pennsylvania State University take part in the project as Preferred Academic Partners to Volvo Group. The intention from Volvo Group is that students, through bachelor and master theses, should perform most of the development.

Article provided by
Patrik Bergagård, PhD, ROAR Project Leader
Martin Fabian, Professor, Automation
IFAC Technical Committee 1.3: Discrete Event and Hybrid Systems

Wireless Automation and Industrial Internet

This contribution deals with current categories in the automation of industrial production and in mobile communication, their contents, and their interrelations and meaning for industrial wireless communication.

The current categories in the area of wireless automation that deserve reflection are “Industrial Internet/Industry 4.0” and “5th Generation Mobile Networks (5G)”.

When a new future-oriented goal is formulated, a category that summarizes this goal and makes it identifiable and addressable for activities is required. Such categories are regularly used in contemporary contexts to suggest that activities, products and services correspond to this future-oriented goal, independently of the extent to which the goal has actually been achieved. For truly future-oriented goals, immediate attainment is hardly realistic: high expectations are raised but cannot be fully met in the short term. Inflationary reference to the category then obscures the aim and results in rejection of the category itself. The duty of the expert committees is to raise awareness of the future-oriented goals, to give orientation, and to evaluate the steps of technical development.

Industry 4.0
The Industry 4.0 project originated as a political initiative of the German Federal Government and is part of the “New High-Tech Strategy – Innovations for Germany”. With this project, industry is to be supported in actively contributing to a change in industrial production. The national initiative does not mean, however, that the goal is only national: the same goals are pursued in international competition under different categories, e.g., Advanced Manufacturing, or as part of the Internet of Things (IoT).

The focus is on a new organization and control of the value chain over the life cycle of products. This cycle is to be oriented towards individual customer wishes. This includes continuous information management from the product idea, through development, production and distribution to the final customer, up to recycling, including the related services. “The basis for this is the availability of all relevant information in real time through the connection of all instances involved in adding value, as well as the possibility to derive the optimal value-adding chain from the data at any time”. Optimal processing of information requires a digital reflection of the value-adding chain that is as accurate as possible, its so-called virtualization.

Undoubtedly, communication plays a central role in the availability of all relevant information in real time. Furthermore, it is indisputable that the mobility of the objects involved in production, as well as the necessary flexibility of production, require wireless communication systems. However, whether the available wireless communication technologies fulfil the requirements of an “Industrial Internet”, and which characteristics should be aspired to, is the subject of current discussion. In Germany, the special working group AK-STD-1941.0.2 “Radio standardization and Industry 4.0” of the German Commission for Electrical, Electronic & Information Technologies of DIN and VDE (DKE) is working on this question. Experts from the fields of mechanical engineering, the electrical industry and the digital economy exchange information with a focus on:

• relevant use cases for wireless communication in industry production,
• reference models for wireless communication,
• activities in standardization and specification and
• research activities.

The goal is to develop contributions for the Standardization Roadmap Industry 4.0. Two essential aspects have become apparent:

1. Even though there are already many useful industrial wireless communication applications, additional, more efficient wireless communication technologies will be necessary for the aspired change.

2. The increasing number of radio connections required for the Industrial Internet demands new concepts and solutions for an application-oriented and efficient usage of the wireless medium.

These aspects address technical as well as political issues. This is highlighted by the recent programme “ICT 2020 – Reliable wireless communication in industry”, funded by the German Federal Ministry of Research. A total of eight research projects address the aspects mentioned above from different perspectives. An accompanying coordination project deals with overarching scientific questions of reliable wireless communication, as well as with the coordination of the work across the projects.

5G
The category “5G” stands for “5th Generation Mobile Networks” or “5th Generation Wireless Systems”. The first international research projects on basic technologies and concepts have been completed. In parallel to a second research initiative, the standardization process is to be started at the end of 2015. The goal for this new generation follows the tradition of telecommunication development: it is about significant improvements of performance parameters, such as:

• 100 times higher data rates than present LTE networks (up to 10,000 Mbit/s),
• about 1,000 times higher capacity,
• worldwide, 100 billion mobile phones can be addressed simultaneously,
• extremely low latency, with a ping below 1 millisecond,
• 1/1000 of the energy consumption per transferred bit,
• 90% lower power consumption per mobile service.

This development alone is not a sufficient reason to pursue 5G in relation to the Industrial Internet, especially since there are reservations in the automation industry, which should not be neglected, against a scientific and technical dependency on mobile network providers. However, it is remarkable that 5G targets numerous application areas, so-called verticals, that go beyond classical telecommunication. One of these application areas is called Massive Machine Type Communications (MMTC) and addresses industrial wireless communication.

But there are still many unanswered questions. For instance, do the development goals of these new technologies consider the requirements of the Industrial Internet? Currently, the telecommunication community specifies the use cases as well as the requirement profiles. Who from the areas of machinery and plant engineering, as well as electrical engineering, accompanies the development in this application area? Will there be a complete integration of the wireless communication technologies into the concepts of the Industrial Internet? This concerns both a consistent communication concept from the sensor up to the command and control level, and the digital representation of the communication for virtual production. After all, communication is not only a means to an end but also an object of an industrial plant: its planning, implementation and operation cannot be carried out independently of the production plant. Who is dedicated to the device and system description? Who is responsible for engineering and for guaranteeing the availability of the communication according to the production target? The application plays a different role than the traditional user of telecommunication, so the question arises of how the fulfilment of the user requirements can be guaranteed. This is the responsibility of the device and system manufacturers as well as of the operators of the communication systems. But the requirements and conditions of the production plant are also important.

These interdependences and interrelations require cross-sectoral standardization. The standard IEC 62657 (IEC 62657-1, Industrial communication networks – Wireless communication networks), which describes the requirements and conditions of industrial automation for wireless communication, may serve as the initial basis. Furthermore, it is important to define the interfaces to radio standardization.

One important question to clarify: Is the client willing to pay for the availability of the information exchange?

Focusing on the contents of the categories Industrial Internet and 5G, it can be concluded that the new mobile communication generation has great potential for the all-embracing provision of production-relevant information, as planned for the change of industrial production. It is, though, of essential importance to overcome the barriers between telecommunication and industrial automation. This concerns industry boundaries as well as the interfaces of the technical implementations and their standardization. First of all, a common language in the literal sense has to be found. Based on this, the concepts that enable the integration of telecommunication into industrial automation have to be matched. Current research projects and new calls for proposals offer opportunities for this. Moreover, communication in expert committees and standardization bodies should be used to make new telecommunication concepts usable for the change of industrial production. The success for the economy depends on its engagement in globally addressing the open questions, as well as on its usage of the political framework conditions that have been created.

Then wireless communication will have the potential to influence automation concepts, and the categories will serve for technical orientation rather than as marketing instruments.

Article provided by
Ulrich Jumar 
Lutz Rauchhaupt , Institute of Automation and Communication e.V. at the Otto-von-Guericke-University Magdeburg, Germany
IFAC Technical Committee 3.3 Telematics: Control via Communication Networks

“Sustainable” control of offshore wind turbines

The motivation for this issue comes from a real need for an open discussion about the challenges of control for very demanding systems, such as wind turbine installations, that require so-called “sustainability” features. Sustainability here means the ability to tolerate possible malfunctions affecting the system and, at the same time, the capability to continue working while maintaining power conversion efficiency. Sustainable control has begun to stimulate research and development in a wide range of industrial communities, particularly for those systems demanding a high degree of reliability and availability. The system should be able to maintain specified operable and committable conditions and, at the same time, avoid expensive maintenance work. For offshore wind farms, a clear conflict exists between ensuring a high degree of availability and reducing costly maintenance.

Renewable energy can be produced from a wide variety of sources including wind, solar, hydro, tidal, geothermal, and biomass. By using renewables in a more efficient way to meet its energy needs, the EU lowers its dependence on imported fossil fuels and makes its energy production more sustainable and effective. The renewable energy industry also drives technological innovation and employment across Europe, as highlighted for the wind power conversion systems.

The 2020 renewable energy targets are set. The EU’s Renewable Energy Directive sets a binding target of 20% of final energy consumption from renewable sources by 2020. To achieve this, EU countries have committed to reaching their own national renewables targets, ranging from 10% in Malta to 49% in Sweden. They are also each required to have at least 10% of their transport fuels come from renewable sources by 2020 [1]. All EU countries have adopted national renewable energy action plans showing what actions they intend to take to meet their renewables targets. These plans include sectorial targets for electricity, heating and cooling, and transport; planned policy measures; the different mix of renewables technologies they expect to employ; and the planned use of cooperation mechanisms.

A new target for 2030 is fixed. Renewables will continue to play a key role in helping the EU meet its energy needs beyond 2020. EU countries have already agreed on a new renewable energy target of at least 27% of final energy consumption in the EU as a whole by 2030. This target is part of the EU’s energy and climate goals.

Support schemes for renewables are available, which drive the technological innovation and employment in this framework. Horizon 2020 is the biggest EU Research and Innovation programme ever with nearly €80 billion of funding available over 7 years (2014 to 2020) – in addition to the private investment that this money will attract. It promises more breakthroughs, discoveries and world-firsts by taking great ideas from the lab to the market. Horizon 2020 is the financial instrument implementing the Innovation Union, a Europe 2020 flagship initiative aimed at securing Europe’s global competitiveness [1].

By coupling research and innovation, Horizon 2020 is helping to achieve this with its emphasis on excellent science, industrial leadership and tackling societal challenges. The goal is to ensure Europe produces world-class science, removes barriers to innovation and makes it easier for the public and private sectors to work together in delivering innovation.

Wind energy is perhaps the most advanced of the ‘new’ renewable energy technologies, but there is still much work to be done. Assessments of the research and technology developments and impacts have been highlighted by recent proposals within the Horizon 2020, with key benefits from both the scientific and industrial points of view.

Wind energy can be considered a fast-developing multidisciplinary field consisting of several branches of engineering sciences. The National Renewable Energy Laboratory estimated a growth rate of the installed wind energy capacity of about 30% from 2001 to 2006, and an even faster rate up to 2014.

Global wind power installations reached 369.6 GW in 2014, with an expected growth to 415.7 GW by the end of 2015. After 2009, more than 50% of new wind power capacity was added outside the original markets of Europe and the U.S., mainly driven by the market growth in China, which now has 101,424 MW of wind power installed. Several other countries have reached quite high levels of stationary wind power production, with rates from 9% to 39% in 2015, as in Denmark, Portugal, Spain, France, Germany, Ireland, and Sweden. Since 2009, 83 countries around the world have been exploiting wind energy on a commercial basis, as wind power is considered a renewable, sustainable and green solution for energy harvesting. Note, however, that even if the U.S. currently obtains less than 2% of its required electrical energy from wind, the most recent National Renewable Energy Laboratory report states that the U.S. will increase this share up to 30% by the year 2030. Note also that, despite the fast growth of the installed wind turbine capacity in recent years, multidisciplinary engineering and science challenges still exist. Moreover, wind turbine installations must guarantee both power capture and economic advantages, which motivates their dramatic growth [1].

Industrial wind turbines have large rotors and flexible load-carrying structures that operate in uncertain, noisy and harsh environments, thus presenting challenging cases for advanced control solutions [2]. Advanced controllers can achieve the required goal of decreasing the wind energy cost by increasing the capture efficiency; at the same time they should reduce the structural loads, thus increasing the lifetimes of the components and turbine structures.

Although wind turbines can be developed in both vertical-axis and horizontal-axis configurations, the industrial and technological interest focusses on horizontal-axis wind turbines, which represent the most common solution in the large-scale installations produced today. Horizontal-axis wind turbines place the rotor atop a tall tower, with the advantage of larger wind speeds than near the ground. Moreover, they can include pitchable blades (i.e. blades that can be oriented with respect to the wind direction) in order to improve the power capture, the structural performance, and the overall system stability. On the other hand, vertical-axis wind turbines are more common for smaller installations. Note that proper wind turbine models are usually oriented towards the design of suitable control strategies, which are more effective for large-rotor wind turbines. Therefore, the most recent research considers wind turbines with capacities of more than 10 MW [3].

Another important issue derives from the increasing complexity of wind turbines, which gives rise to stricter requirements in terms of safety, reliability and availability. In fact, due to the increased system complexity and redundancy, large wind turbines are prone to unexpected malfunctions or alterations of the nominal working conditions. Many of these anomalies, even if not critical, often lead to turbine shutdowns, again for safety reasons. Especially for offshore wind turbines, this may result in substantially reduced availability, because rough weather conditions may prevent the prompt replacement of damaged system parts. The need for reliability and availability that guarantees continuous energy production thus calls for sustainable control solutions [2].

These schemes should be able to keep the turbine in operation in the presence of anomalous situations, perhaps with reduced performance, while the maintenance operations are being managed. Apart from increasing availability and reducing turbine downtimes, sustainable control schemes might also obviate the need for extra hardware redundancy, if virtual sensors can replace redundant hardware sensors. The schemes currently employed in wind turbines typically operate at the level of supervisory control, where commonly used strategies include sensor comparison, model comparison and thresholding tests. These strategies enable safe turbine operation, which involves shutdowns in case of critical situations, but they are not able to actively counteract anomalous working conditions. Therefore, recent research has investigated sustainable control strategies that keep the system behaviour close to the nominal situation in the presence of unpermitted deviations of any characteristic property or system parameter from standard conditions (i.e. a fault). Moreover, these schemes should provide the reconstruction of the equivalent unknown input that represents the effect of a fault, thus achieving the so-called Fault Detection and Diagnosis tasks [3].
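To give a flavour of the model-comparison and thresholding idea (a minimal sketch, not taken from the cited benchmarks), one can compare a measured signal against a nominal model prediction and declare a fault when the residual exceeds a threshold; the mean residual over the alarm window then serves as a crude estimate of the equivalent unknown input. All signals and numbers below are invented:

```python
# Illustrative residual-based fault detection sketch.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)
nominal = 1.5 * np.sin(0.2 * t)             # model-predicted signal (made up)
measured = nominal + 0.05 * rng.standard_normal(t.size)
measured[400:] += 0.6                        # injected sensor bias fault at t = 40 s

residual = measured - nominal                # model comparison
threshold = 5 * 0.05                         # e.g. 5 sigma of the nominal noise
fault = np.abs(residual) > threshold         # thresholding test

print("first fault alarm at t =", t[fault.argmax()], "s")
print("estimated fault size  =", residual[fault].mean())  # equivalent unknown input
```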

The need for advanced control solutions for these safety-critical and very demanding systems also motivates the requirements of reliability, availability and maintainability on top of the required power conversion efficiency. These issues have therefore begun to stimulate research and development in sustainable control (i.e. fault-tolerant control), in particular for wind turbine applications. Particularly for offshore installations, Operation and Maintenance (O&M) services have to be minimised, since they represent one of the main factors in the energy cost. The capital cost, as well as the wind turbine foundation and installation, determines the basic term in the cost of the produced energy, constituting the energy “fixed cost”. O&M represents a “variable cost” that can increase the energy cost by up to about 30%. At the same time, industrial systems have become more complex and expensive, with less tolerance for performance degradation, productivity decrease and safety hazards. This also leads to ever-increasing requirements on the reliability and safety of control systems subject to process abnormalities and component faults [2, 3].

As a result, the Fault Detection and Diagnosis tasks, as well as the achievement of fault-tolerant features for minimising possible performance degradation and avoiding dangerous situations, are extremely important. With the advent of computerised control, communication networks and information techniques, it becomes possible to develop novel real-time monitoring and fault-tolerant design techniques for industrial processes, but this also brings challenges. Several works on wind turbine Fault Detection and Diagnosis have been proposed recently, and the sustainable (fault-tolerant) control problem has been considered with reference to offshore wind turbine benchmarks, which motivated this issue [3, 4].

In this way, by enabling this clean renewable energy source to provide and reliably meet the world’s electricity needs, the tremendous challenge of meeting the world’s future energy requirements can finally be addressed. The wind resource available worldwide is large, and much of the world’s future electrical energy needs can be provided by wind energy alone if the technological barriers are overcome. The application of sustainable control to wind energy systems is still in its infancy, and there are many fundamental and applied issues that can be addressed by the systems and control community to significantly improve the efficiency, operation, and lifetimes of wind turbines.

[1] Global Wind Energy Council. Wind Energy Statistics 2014. Report, 2014.

[2] Blanke, M.; Kinnaert, M.; Lunze, J.; Staroswiecki, M. Diagnosis and Fault–Tolerant Control; Springer–Verlag: Berlin, Germany, 2006.

[3] Simani, S. Overview of Modelling and Advanced Control Strategies for Wind Turbine Systems. Energies, October 2015, 10, 12116–12141. (Special Issue Wind Turbine 2015.)

[4] Odgaard, P.F.; Stoustrup, J. A Benchmark Evaluation of Fault Tolerant Wind Turbine Control Concepts. IEEE Transactions on Control Systems Technology 2015, 23, 1221–1228.

Article provided by
Dr. Silvio Simani 
University of Ferrara
IFAC Technical Committee 6.4 – Safeprocess

A message from the IFAC President: Janan Zaytoon

Dear IFAC Social media followers, Dear Friends and Colleagues,

Let me take this opportunity to wish you all a Healthy and Prosperous New Year 2016.

IFAC launched its social media channels in September 2015 as a platform to leverage brand awareness amongst its internal and external stakeholders. It is great to count you among the IFAC social media followers, and I wish to express my sincere thanks for your continued support and your important contribution to the activities of IFAC.

I would like to express my heartfelt thanks to Jakob Stoustrup for the enormous energy he has put into launching and managing the social media project. We are also thankful to our partner Sven Uhlig, from StudioUHU, for assisting us in the development and implementation of the IFAC social media platform.

We look forward to a successful growth of the IFAC social media platform and hope it will become a useful source of information exchange for the IFAC community. You are more than welcome to provide us with feedback and suggestions on how to improve IFAC activities and services.

With best wishes,

Janan Zaytoon
IFAC President


Is there Pervasive Control out there?

“Control is everywhere”. This sentence has been used to promote the importance of Systems Engineering in higher-level studies, as well as to propel the market diversification of specialized equipment manufacturers. There is no doubt that this is true: control is an essential piece of most aspects of our lives, but the original dream goes beyond.

Visionaries, scientists and writers foresee a world strongly instrumented with sensors, actuators and computing resources, populated with software entities capable of anticipating the actions that lead to a better performance of real-world entities. This is a very complex scenario, typical of Artificial Intelligence research, involving low-level, entity-level, global and ethical requirements and objectives. It also involves a global tradeoff among systems with strongly nonlinear behavior. Realizing this vision constitutes a major challenge.

As in the first large revolution of Systems Engineering, which moved it from the analog to the digital scenario, we now face a new revolution, also related to the coupling of Control and Computing. Some of the computing technologies involved in this qualitative change are:

  • Pervasive communication and computing
  • Cloud computing and services
  • BigData
  • Deep learning
  • Agents and assistants technology

Some years ago, the experience of moving from “Wireless Sensor Networks” to “Wireless Sensor and Actuator Networks” showed us that closing control loops in a complex and distributed computing environment unveils new challenges. While communication delays of sensor information can be measured, and sometimes compensated for in the control software, actuation information traveling through a communication infrastructure is less robust. This communication must be done in the context of a Quality of Service contract. To ensure reliable performance, local actuators (“local” meaning “wired to the analog component”) must have computing capabilities to deal with this “contract”, together with algorithms to manage contract violations and all other exceptional situations.
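A minimal sketch of such contract handling on the actuator side (an assumed design, not a specific system) could look as follows: the networked command is applied only if it arrives within the agreed latency bound, otherwise the actuator degrades gracefully to a safe local value.

```python
# Hypothetical QoS-contract handling at a local actuator.
import time

LATENCY_BOUND = 0.050   # seconds; the agreed Quality-of-Service contract
SAFE_COMMAND = 0.0      # local fail-safe actuation value

def apply_command(cmd_value, sent_timestamp, now=None):
    """Return the value to actuate, enforcing the latency contract."""
    now = time.time() if now is None else now
    age = now - sent_timestamp
    if age <= LATENCY_BOUND:
        return cmd_value     # contract honoured: use the networked command
    return SAFE_COMMAND      # contract violated: degrade gracefully

print(apply_command(0.8, sent_timestamp=time.time()))        # fresh  -> 0.8
print(apply_command(0.8, sent_timestamp=time.time() - 0.2))  # stale  -> 0.0
```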

In a similar way, looking at Pervasive Control at a large scale reveals the importance of design aspects related to reliability and confidence, as well as the need to rethink the role of the human component. As a preliminary step towards the Pervasive Control dream, there must be solid developments in at least the following technologies:

  • Mixed Criticality management and execution platforms
  • Cyber-security
  • Performance related to resource availability; graceful degradation
  • Stability guarantees

The capabilities of computer and control technologies are now stronger than ever before. Nevertheless, it is not as simple as “putting them together”: there is a lot of work to do in the gap, and surely this is a source of opportunities for research and business.

Article provided by
José E. Simo Ten (LinkedIn profile)
Universitat Politècnica de València, Valencia, Spain
IFAC Technical Committee  3.1. Computers for Control

Translate or die!

They are everywhere. Some 100 trillion inhabit the earth, comprising half of the animal mass on it. Have you guessed what I am talking about yet? See the following articles in the New York Times, NY Times Magazine, Scientific American, Nature, Science, or this TED talk to refresh your memory. By now the human microbiome has been associated with almost every disease possible; microbes in the gut have even been associated with brain diseases [1]. The study of these little things is kind of a big deal.

What is a normal human microbiome?
The most important developments in understanding the human microbiome have come via the analysis of large cohorts across body sites (gut, mouth, vagina, skin, etc.) [2] and longitudinal studies where fecal samples have been collected on a daily scale [3,4]. What we know from these studies is that the abundance and kinds of microbes are body-site specific. See the figure just below that illustrates this point [2, Figure 1c].


[Figure: body-site-specific clustering of microbiome samples; adapted from [2], Figure 1c]
In the figure above, 4,788 specimens from 242 adults are projected onto the first two principal coordinates (relative abundance of microbes at the genus level). The different body sites are color coded, and it is clear that the specimens cluster according to body site and not by subject. We have also learned that microbial abundances are fairly stable for each site and for each subject (I will discuss this in more detail shortly). Before getting to the dynamics and estimation part, we need a story so as to understand the translational implications of a better understanding of the human microbiome.

Fecal Microbial Transplantation
This story begins with Jane coming to the hospital because of an infection in her leg. To kill the infection she is given broad-spectrum antibiotics. After a few days the infection is gone, but Jane now has severe diarrhea. The antibiotics have killed some of the healthy bacteria in her gut, and Jane now has an overabundance of Clostridium difficile, i.e. she has Clostridium difficile infection (CDI). Ironically, the most often prescribed treatment for CDI is another antibiotic. This targeted antibiotic always temporarily reduces the abundance of C. difficile, but the CDI recurs. So with no other options Jane asks her brother John for a fecal sample. This fecal sample is prepared and transplanted into Jane (Fecal Microbial Transplantation (FMT)). As if a miracle has occurred, Jane is healthy again. This kind of story is becoming commonplace in hospitals around the country.

What happens to the abundance of the microbes post-FMT is quite amazing. Consider the figure below [5, Figure 1]. It shows several subjects’ stool samples pre-FMT, circled in red, whose trajectories (post-FMT) rapidly converge to the green circle (which also contains the donor sample), overlaid on top of the samples from the 242 healthy adults above. A movie of these trajectories can be downloaded here. While the post-FMT samples do deviate slightly from the donor sample in terms of relative abundance of microbes, the stool samples remain within the range of what is considered a healthy stool sample. Stated simply, what we are observing is the patient’s gut microbiome being reconstituted and remaining in an abundance profile similar to that of the donor. It is quite amazing.

[Figure: pre- and post-FMT stool sample trajectories converging to the donor; adapted from [5], Figure 1]

Is the microbiome stable?
What we just saw above was that the post-FMT stool samples remained similar to the donor after transplantation. A natural question then arises: how stable is the human microbiome? Even biologists recognize that this is an important subject, as evidenced by the figure below, which appeared in a recent review article in Science [6, Figure 1].

[Figure: ecological stability concepts applied to the microbiome; adapted from [6], Figure 1]

While I am delighted to see that the notion of stability has been recognized as an important issue in the human microbiome, I have felt a push back from the microbiome research community to explore what this actually implies. There is also a misunderstanding of what the word stable means. This is simply an ignorance issue, and as control engineers/theorists we should simply educate those in this field. Consider the figure below [7, Figure 3A], which shows 15 days of samples (shown in yellow) taken from the year-long daily gut microbiome study in [3], projected onto the principal coordinates from a previous study [8,9]. Ignore the red, green, and blue dots and focus on the trajectories of the yellow dots, with gray lines following the day-to-day changes in the stool samples. The authors of [7] wanted to highlight the fact that the samples can deviate from steady state in almost all directions, and unfortunately drew the conclusion that this is a visualization of instability in the gut microbiome. The original figure from the study in [7] is shown on the left and my annotated figure is on the right. I would like to point out that the two trajectories return, after deviation, to the “steady region”. This is not instability, but the very definition of stability. One could even argue we are observing asymptotic-like stability, i.e. in the absence of disturbances all trajectories converge to a single fixed point. Of course there will always be disturbances in biological systems. Could this line of reasoning help to explain the success of FMT? I think you can begin to see where those working in the area of dynamics and control might be needed in this emerging field.

[Figure: day-to-day sample trajectories returning to the steady region; original (left) from [7], Figure 3A, with annotations (right)]

How do we model the microbiome?
The most common way that microbes interact is through the consumption of nutrients and the synthesis of products (not necessarily through the direct consumption of each other) [10]. Therefore, a detailed model would contain states for both the abundance of microbes and the abundance of the metabolites they consume and synthesize. At the finest level of modelling, all host and microbe metabolic pathways would need to be mapped. We currently do not possess the technology or sufficiently rich data to perform this rigorously. At this point in our understanding of microbial dynamics it is more common to think of a reduced-order model that only accounts for the abundances of the microbes.

The two most popular (reduced-order) models are Generalized Lotka-Volterra (GLV) dynamics over a network and Bayesian networks. The first is deterministic (and the most common as well), the second probabilistic. I will focus on the first one here, but a similar discussion could follow with a probabilistic mindset as well, just with a lot more capital E’s.

Let \(x_i\) be the abundance of microbe \(i\) for subject \(X\) at a specific location on/in the body. Let’s assume for now we are concerned only with the gut. Then the GLV model for \(n\) microbes interacting in the gut of subject \(X\) is described by the following differential equation

\[\dot{x}_i=r_ix_i+x_i\sum_{j=1}^na_{ij}x_j\] where \(i=1,2,\ldots,n\). Collecting the abundances of the microbes into a column vector \(x=[x_1,\ x_2,\ \ldots ,\ x_n]^T\), the dynamics can be compactly written as \(\dot{x}=\text{diag}(r)x+\text{diag}(x)Ax,\)
where \(r\) is a column vector of the \(r_i\) and \([A]_{ij}:=a_{ij}\). We will refer to \(A\) as the microbial interaction matrix, or network. In this modeling paradigm \(r\) captures the linear growth or death terms and the matrix \(A\) captures the causal interactions amongst species. The entry \(a_{ij}\) is thus determined by thinking of the average effect that species \(j\) has on species \(i\), by determining what species \(j\) generates as products and what both species \(i\) and \(j\) consume as nutrients. For instance, if species \(j\) produces products that species \(i\) consumes as nutrients and they do not compete for any other nutrients, then the entry \(a_{ij}\) would be positive. I mentioned earlier that we do not fully understand the microbial and metabolic interactions well enough to have a global bottom-up model. Do we have sufficient data to learn the interactions in the simplified GLV model? We will discuss this in more detail shortly. Note that in this blog the term “species” is used in the general context of ecology, i.e. a set of organisms adapted to a particular set of resources in the environment, unless we state that we are specifically discussing the taxonomic rank “species”.
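For concreteness, here is a small numerical sketch of the GLV model for \(n=3\) hypothetical species; the values of \(r\) and \(A\) are invented purely for illustration:

```python
# Toy GLV simulation; growth rates and interaction matrix are made up.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 0.8, -0.5])            # linear growth/death rates
A = np.array([[-1.0, -0.2,  0.0],         # [A]_ij: effect of species j on species i
              [-0.3, -1.0,  0.4],
              [ 0.5,  0.0, -1.0]])

def glv(t, x):
    # dx_i/dt = r_i x_i + x_i * sum_j a_ij x_j
    return r * x + x * (A @ x)

sol = solve_ivp(glv, (0.0, 30.0), [0.1, 0.2, 0.1])
print("final abundances:", sol.y[:, -1].round(3))
```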

Let’s now consider the gut of a different individual, subject \(Y\), and assume that the dynamics are as follows

\[\dot y = \text{diag}(\bar r)\, y + \text{diag}(y)\, \bar A\, y.\]
Notice that I have written the dynamics for the two subjects with different variables, \((r,A)\) for subject \(X\) and \((\bar{r},\bar{A})\) for subject \(Y\). Is it possible that for two otherwise healthy individuals with similar diets \(A=\bar{A}\) and \(r=\bar{r}\)? Recent attempts to infer the interaction matrices for two individuals illustrate some shortcomings in the literature, and another opportunity for those working in system identification and machine learning to have an immediate impact in this field.

Consider the networks just below, illustrating a subset of the interaction matrix for two subjects’ gut microbiomes [11, Figure 6]. This study concludes that the networks of causal interactions between microbes are not the same for healthy individuals. There are many issues with this study, however, illustrating the need for those working in the area of system identification to collaborate with those working on the human microbiome. I do not want to disparage the authors of [11]; my only intention here is to point out mistakes in the analysis that a control engineer might have noticed.

[Figure: inferred microbial interaction networks for two subjects; adapted from [11], Figure 6]

The authors correctly recognize that the data was not sufficiently rich (1 year of daily samples with not very much excitation) to accurately capture all species interactions (on the order of 100 at the taxonomic rank of species). Thus, the authors chose to perform system identification using only the 14 most abundant species, and then showed that the two networks are different. One will realize, however, that throwing away states is problematic, and not the appropriate way to overcome a lack of sufficient richness. My own work is in fact pointing to the opposite scenario, that otherwise healthy individuals have the same underlying interaction network, but I will withhold claiming that until I have more proof.

Our help is also needed in helping physicians design their trials so that samples can be obtained with as much information as possible. There is also the technical issue of microbial samples usually being normalized (we only know relative abundances with confidence). System identification in biological networks is often referred to as network reconstruction, and this entire sub-area of biological research is in very serious need of our help, as a scathing comment in Nature Biotechnology points out.
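To see why the GLV model is attractive for reconstruction, note that dividing the dynamics by \(x_i\) gives \(\frac{d}{dt}\log x_i = r_i + \sum_j a_{ij}x_j\), which is linear in the unknown parameters. A bare-bones least-squares sketch (assuming, unrealistically, absolute abundances and dense, nearly noise-free sampling) is:

```python
# Sketch of GLV network reconstruction by linear regression (simplified).
import numpy as np

def reconstruct(X, dt):
    """X: (T, n) array of absolute abundances sampled every dt seconds/days."""
    logdiff = np.diff(np.log(X), axis=0) / dt          # estimates of d/dt log x_i
    Phi = np.hstack([np.ones((X.shape[0] - 1, 1)),     # regressor [1, x_1, ..., x_n]
                     X[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, logdiff, rcond=None)
    r_hat, A_hat = theta[0], theta[1:].T               # unpack the estimates
    return r_hat, A_hat

# r_hat, A_hat = reconstruct(X_samples, dt=1.0)        # e.g. daily samples
```

The caveats flagged above bite immediately: with compositional (normalized) data the regressors are no longer the true abundances, and without sufficient excitation the regression problem is ill-conditioned.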

Lots of open questions

• Are some body sites more stable than others?
• How do we rigorously demonstrate this stability?
• Are the networks of two healthy individuals similar?
• How do different diseases affect that network?
• Why do FMTs work?
• Are there other modelling approaches that can be used to understand microbial dynamics?
• What are the fundamental limitations for network reconstruction when dealing with relative abundances?
• Finally, how do we control the microbiome?

Conclusions
Aircraft control has been one of the cornerstone applications of control for more than 50 years. It is time, however, to find new areas for research. I hope this has inspired you to consider translational areas such as the human microbiome as a possible research domain for the application of everything you have learned in dynamics, control, and system identification.

I would like to acknowledge my collaborators Yang-Yu Liu and Amir Bashan, as well as conversations I have had with Aimee Milliken, Eric Alm, Curtis Huttenhower, and Rob Knight.

Bibliography

  1. Mayer, Emeran A., et al. “Gut microbes and the brain: paradigm shift in neuroscience.” The Journal of Neuroscience 34.46 (2014): 15490–15496.
  2. Human Microbiome Project Consortium. “Structure, function and diversity of the healthy human microbiome.” Nature 486.7402 (2012): 207–214.
  3. Caporaso, J. Gregory, et al. “Moving pictures of the human microbiome.” Genome Biol 12.5 (2011): R50.
  4. David, Lawrence A., et al. “Host lifestyle affects human microbiota on daily timescales.” Genome Biol 15.7 (2014): R89.
  5. Weingarden, Alexa, et al. “Dynamic changes in short- and long-term bacterial composition following fecal microbiota transplantation for recurrent Clostridium difficile infection.” Microbiome 3.1 (2015): 10.
  6. Costello, Elizabeth K., et al. “The application of ecological theory toward an understanding of the human microbiome.” Science 336.6086 (2012): 1255–1262.
  7. Knights, Dan, et al. “Rethinking ‘enterotypes’.” Cell Host & Microbe 16.4 (2014): 433–437.
  8. Arumugam, Manimozhiyan, et al. “Enterotypes of the human gut microbiome.” Nature 473.7346 (2011): 174–180.
  9. Arumugam, Manimozhiyan, et al. Addendum: “Enterotypes of the human gut microbiome.” Nature 506.7489 (2014): 516.
  10. Levy, Roie, and Elhanan Borenstein. “Metabolic modeling of species interaction in the human microbiome elucidates community-level assembly rules.” Proceedings of the National Academy of Sciences 110.31 (2013): 12804–12809.
  11. Fisher, Charles K., and Pankaj Mehta. “Identifying Keystone Species in the Human Gut Microbiome from Metagenomic Timeseries Using Sparse Linear Regression.” PLoS ONE 9.7 (2014).
Article provided by
Travis E. Gibson (@gibsonnews)
Harvard Medical School, USA
IFAC Technical Committee 1.2. Adaptive and Learning Systems

Technology paving the way for the next generation of process control

The journey of modern process control started when the necessity of ensuring process safety and regularity arose in the process industry. The 1950s were when process control started to be acknowledged as yet another inevitable module of chemical and petrochemical plants. In those days PID (proportional-integral-derivative) controllers, perhaps the only control concept, were implemented pneumatically. This did not mean that digital control was not examined. In fact, in the late 1950s the process control industry experimented with digital control, and the conclusion they arrived at was: “it is not worth it!”. While this was a fair conclusion at the time – digital control being expensive, complex, and unable to generate enough economic profit – we all know how this has changed today.

While in the early years control practitioners came to the conclusion that only two layers of control – a lower layer for regulatory control and an upper layer for steady-state optimization – were required, the control systems had to catch up as processes became more complex over the years. By the 1970s, Advanced Process Control (APC) concepts were introduced, aiming at enhancing process efficiency and reducing variability in product quality. Owing to the growing economic competition in those years, it became evident, more than ever, that profitability is directly related to product quality. In addition to the trade-offs between process productivity and product quality, the energy efficiency and environmental footprint of chemical processes also became of significant importance. These considerations translated into much more complex processes, which required increasingly complex control systems.

It was in this era that Model Predictive Control (MPC) was born in the chemical industry, in order to facilitate more effective control of processes; it was later adopted by academia to further develop its theoretical foundations. MPC controllers rely on a mathematical model of the process to predict the process behavior over a future time horizon. This enables optimization of the process behavior in terms of the desired process performance. In simple words, with MPC we are able to predict the future behavior of the process and act according to anticipated deviations from the target control objectives. MPC has brought a substantial improvement to our process control capabilities in meeting conflicting control objectives in the process industry. It is fair to say that the more accurate the process model is, the more effectively MPC works. For this reason, there has also been an increased emphasis on the modeling aspects of chemical processes over the years. Let us not forget that every mathematical model is an abstract and inaccurate representation of reality. While models can be made overly complicated, the computational cost of complex models may make them less suitable for on-line control applications. In addition to plant-model mismatch, chemical processes exhibit considerable uncertainties and disturbances. This has led to the emergence of robust MPC concepts since the late 1980s. The idea is to capture as much of the uncertainties and disturbances as possible in order to robustify the designed control inputs to process perturbations.
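As a toy illustration of the receding-horizon idea (not any particular industrial implementation), consider a linear process model with a quadratic cost and actuator limits; the matrices below are made up, and cvxpy is used for the optimization:

```python
# Illustrative linear MPC sketch; the plant model and weights are invented.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 0.9]])    # assumed process model: x+ = A x + B u
B = np.array([[0.0], [0.1]])
N = 20                                     # prediction horizon

def mpc_step(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 1.0]        # actuator limits
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]                   # apply only the first move

x = np.array([1.0, 0.0])
for t in range(5):
    u0 = mpc_step(x)
    x = A @ x + B @ u0                     # plant update (here: perfect model)
    print(f"t={t}  u={u0[0]:+.3f}  x={x.round(3)}")
```

Only the first move of the optimized sequence is applied before the problem is re-solved at the next sample, which is what gives MPC its feedback character.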

In recent years, advances in novel process designs and manufacturing practices have brought about new challenges and opportunities for the process control community. One such challenge concerns batch processes, which are commonly used for low-volume, high-value-added manufacturing. Traditionally, process control was implemented mainly on continuous-flow processes. Today, however, many important processes run in batch mode; such processes can be found in, e.g., the specialty chemical, pharmaceutical, food, and biotechnology industries. The inherently dynamic nature of batch processes has motivated the process control community to bring in new developments.

Another recent trend is the growing importance of transient process modes. In intensified processes and miniaturized chemical systems, as well as in advanced manufacturing systems, it is more important than ever to be able to control processes during their transient modes, in light of economic considerations. Looking into the future, the major domains for process control practitioners and academics appear to be big data in the process industries, advanced energy systems, and health-care applications. It is now up to the process control community to embark on new endeavors, so as to best play its role in addressing the societal needs of the current era.
 

Article provided by
Ali Mesbah
University of California, Berkeley, USA
IFAC Technical Committee 6.1. Chemical Process Control

Can automation and automatic control be a game?

Video games are fascinating and probably inescapable, attracting kids to consoles like bees to honey. In one way or another, video games have shaped many kids’ lives over the last decades, and playing computer games is currently a favorite leisure activity for most young (and not so young) people. While such extremely immersive and addictive recreational activities are a natural and understandable worry for parents and educators, computer games can also teach young children to read and count, and help middle-school students learn about science and technology.

Moreover, computer games enable the creation of “synthetic” or “simulated” environments in which scientific research, education and training, career development and life-long learning are possible and effective. Computer game technology is thus becoming increasingly important in the development of valuable professional tools for scientists, engineers and educators [1].

This is exactly what the CReSTIC lab of the University of Reims Champagne-Ardenne and Real Games, a Portuguese company, achieved in a three-year R&D project (2011-2014), partially funded by the French Ministry of Education, by bringing a complete “virtual” house, called HOME I/O, into the classroom [2, 3]. HOME I/O is a real-time simulation of a smart house and its surrounding environment, designed to cover a wide range of curriculum targets within Science, Technology, Engineering and Math, from secondary schools to universities.

Home automation is a growing trend, becoming more popular and affordable every day. It improves our safety, security, convenience and comfort, and it allows us to use energy in a more responsible and efficient way. With HOME I/O, kids, students and learners can raise their awareness of energy efficiency, change their behaviour and learn about new technologies. HOME I/O lets users control 174 interactive devices (lights, switches, heaters, shutters, gates, garage doors, alarms, etc.) typically found in a real house, through a built-in Home Automation Console. HOME I/O can also be connected to and controlled by external technologies, both hardware (PLC, microcontroller, etc.) and software (soft PLC, MATLAB, LabVIEW, etc.).
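
As a flavour of what such an external controller might look like, here is a minimal on/off (hysteresis) thermostat loop in Python. The SimulatedRoom class is a toy stand-in for the simulated house, not the HOME I/O API: in practice, reading the sensor and driving the heater would go through whatever external binding (SDK, soft PLC, MATLAB, etc.) is actually used:

    # Minimal on/off thermostat loop against a toy room model (illustrative
    # parameters; the real HOME I/O binding would replace SimulatedRoom).
    SETPOINT = 21.0    # desired room temperature (deg C)
    BAND = 0.5         # dead band to avoid rapid on/off switching

    class SimulatedRoom:
        """Toy stand-in for the simulated house (not the HOME I/O API)."""
        def __init__(self):
            self.temp, self.heater_on = 17.0, False
        def step(self, dt):
            # Simple thermal model: heater warms, ambient losses cool.
            gain = 0.8 if self.heater_on else 0.0
            self.temp += dt * (gain - 0.1 * (self.temp - 15.0))

    room = SimulatedRoom()
    for _ in range(100):               # supervisory loop, 1 s sampling
        if room.temp < SETPOINT - BAND:
            room.heater_on = True      # too cold: switch the heater on
        elif room.temp > SETPOINT + BAND:
            room.heater_on = False     # warm enough: switch it off
        room.step(1.0)
    print(f"temperature after 100 s: {room.temp:.1f} deg C")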

With state-of-the-art 3D graphics and a first-person-shooter (FPS) style of interaction, HOME I/O offers a unique learning experience, with immersive and motivating hands-on activities for several educational and vocational areas. HOME I/O is a great tool for teachers to explore and to develop engaging activities in a project-based learning methodology such as STEM, where students analyse situations, search for answers and provide solutions.

Yes, learning automation and automatic control can be a game.

[1] A. Magalhaes, B. Riera and B. Vigario, “When Control Education Is the Name of the Game”, in Computer Games as Educational and Management Tools: Uses and Approaches, IGI Global, pp. 185-205, 2012. doi: 10.4018/978

[2] www.realgames.pt

[3] www.teachathomeio.com (French)

 

Article provided by
Bernard Riera
Université de Reims Champagne-Ardenne, France
IFAC Technical Committee 4.5. Human Machine Systems 

Water, a global and human issue for tomorrow: automation is necessary and vital

Water is essential to our planet, useful to and used by all living species, freshwater above all. Freshwater, however, is a scarce resource, and its control is therefore an essential and necessary issue.

Freshwater management can be done in different ways and at different scales:

  • Management of waterways for shipping
  • Management of irrigation canals for agriculture
  • Management and optimization of water for industry
  • Treatment of wastewater
  • Management of flows upstream (catchment)
  • Management of river systems
  • Simulation of hydrological changes
  • Raising people’s awareness of the problem and of the scarcity of this resource
  • Inequalities in access to safe drinking water between regions and countries

While most of these points have been studied for decades, the fact remains that water losses are greater than 50%, due to ageing or non-existent facilities, but mostly due to the fact that this is a very complex problem whose ins and outs are difficult to control.

As proof, the calculation of maritime flows for weather prediction is still an open issue, since handling the full model is difficult both theoretically and numerically. Many simplifying assumptions are made, and they sometimes render the results obsolete, as we can see every day.

However, improvements are made every day, and the predictions keep getting better.

As recently as 50 years ago, many water networks were managed manually, which caused huge losses as well as a great deal of damage.

For over 10 years, new techniques have been created and applied to address the above problems more accurately and to reduce water losses and waste:

  • Design and automation of water systems: automation of free-surface and/or pressurized water lines (autonomous and automatic management of locks on waterways, of valves on rivers to prevent flooding, improved traffic in sewers …); a minimal sketch of such a level-control loop is given after this list
  • Improved water treatment, with increased yields and new processes for the sedimentation and degradation of waste
  • Prediction and simulation of the weather
  • Prediction and simulation of watersheds and deltas, in the first case to set up hard structures that channel and direct the flow, in the second to avoid floods and any phenomena degrading the environment (flora and fauna) or equipment
  • Improved robustness of the controllers
  • Detection and diagnosis of faults
  • Concatenation of the different processes along the water cycle
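
As an illustration of the first item, here is a minimal sketch, in Python, of a PI level-control loop for a single canal pool, modelled as a simple integrator. All parameters are illustrative assumptions; a real installation would rely on calibrated hydraulic models and certified control hardware:

    # Single canal pool approximated as an integrator: A*dh/dt = q_in - q_out.
    A, Ts = 500.0, 60.0      # surface area (m^2), sampling period (s)
    h_ref = 2.0              # target water level (m)
    Kp, Ki = 0.8, 0.005      # PI gains (hand-tuned on this toy model)

    h, integral = 1.5, 0.0   # initial level (m) and integrator state
    q_out = 0.3              # unmeasured offtake flow (m^3/s), a disturbance

    for k in range(120):
        error = h_ref - h
        integral += error * Ts                       # integral action removes the steady offset
        q_in = max(0.0, Kp * error + Ki * integral)  # commanded gate inflow, non-negative
        h += Ts / A * (q_in - q_out)                 # level dynamics (explicit Euler)
        if k % 20 == 0:
            print(f"t={k*Ts/60:5.1f} min  level={h:5.3f} m  inflow={q_in:5.3f} m3/s")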

All of this is a necessary effort, and hydraulic research is a major asset. Reducing losses in closed (pressurized) water networks is a particularly significant example, but it cannot be applied to all cases.

Control of wastewater decontamination processes is also a promising route, one whose full potential has not yet been exploited. It requires a better understanding of the phenomena involved (through simulation) and of the control of the reactions. None of this, however, can be decoupled from a strong awareness, in each of us, of how wasteful we are and of how inequitable access to this resource remains, both among human beings and across the different species.

Article provided by
Valérie Dos Santos Martins
Laboratoire d’Automatique et de Génie des Procédés, Université Claude Bernard Lyon 1, France
IFAC Technical Committee 2.6. Distributed Parameter Systems