IFAC blog page

Category: Automated vehicles

Autonomous road vehicles: a modern challenge for robust control

The idea of autonomous cars has been in the air since as early as the 1920s, but the first prototypes of truly autonomous (albeit limited in performance) road vehicles appeared only in the 1980s. Since then, several companies, e.g., Mercedes, Nissan and Tesla, as well as many universities and research centres all over the world, have pursued the dream of self-driving cars. More recently, a few ad-hoc competitions and the increasing interest of some big tech companies have rapidly accelerated the research in this area and helped the development of advanced sensors and algorithms.
As an example, consider that Google maintains (and publishes) monthly reports on the experimental tests and the most recent advances of its driverless car.

The reasons why such a technology is not yet on the market are many and varied. From a scientific point of view, autonomous road vehicles pose two major challenges:

  • a communication challenge: how should the vehicle interact with the surrounding environment, taking all safety, technological and legal constraints into account?
  • a vehicle dynamics challenge: the car must be able to follow a prescribed trajectory in any road condition.

On the one hand, the interaction with the environment mainly concerns sensing, self-adaptation to time-varying conditions and information exchange with other vehicles to optimize some utility functions (the so-called “internet of vehicles” – IoV). These issues undoubtedly represent novel problems for the scientific community and have been extensively treated in the past few years. On the other hand, control of vehicle dynamics may seem a less innovative challenge, since electronic devices like ESP or ABS are already ubiquitous in cars.

Within this framework, robust control, namely the science of designing feedback controllers that also take a measure of the uncertainty into account, has played a central role. However, taking a deeper look at the problem, it becomes evident that the main vehicle dynamics issues for autonomous cars are more complex than those concerning human-driven cars, and the standard approaches may no longer be effective.

Actually, path planning and tracking is a widely studied topic in robotics, aerospace and other mechatronics applications, but it is certainly novel for road vehicles. In fact, in existing cars, even the most cutting-edge devices are dedicated to adjusting vehicle speed or acceleration in order to increase safety and performance, whereas the trajectory tracking task is always fully left to the driver (except for a few examples, like automatic parking systems).

Nonetheless, most vehicle dynamics problems arise from the fact that the highly nonlinear road-tire characteristic is unknown and cannot be measured with the existing (cost-affordable) sensors. Therefore, keeping the driver inside an outer (path tracking) control loop represents a great advantage, in that she/he can manually handle the vehicle in critical conditions (at least to a certain extent) and make the overall system robust to road and tire changes. This is obviously not the case for autonomous vehicles.

Hence, it seems that standard robust control for braking, traction or steering dynamics could turn out to be “not robust enough” for path tracking in autonomous vehicles, because one can no longer rely upon the intrinsic level of robustness provided by the driver feedback loop. In city and highway driving, this may not represent a problem, because the sideslip angles characterizing the majority of manoeuvres are low and easily controllable [8]. However, in the remaining cases (e.g., during sudden manoeuvres for obstacle avoidance), a good robust controller for path tracking, exploiting the most recent developments in the field, could really be decisive in saving human lives in road accidents.
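
To make the path-tracking task concrete, the minimal Python sketch below steers a kinematic bicycle model along a reference path with a classic pure-pursuit law. All names and parameters (wheelbase, lookahead distance, speed) are illustrative assumptions, not the controllers discussed here, and the kinematic model deliberately ignores the very road-tire nonlinearities that a robust path-tracking controller would have to cope with.

    import math

    # Minimal path-tracking sketch: a kinematic bicycle model driven by a
    # pure-pursuit steering law. All parameters are illustrative assumptions.
    WHEELBASE = 2.7   # [m], assumed
    LOOKAHEAD = 5.0   # [m], assumed pure-pursuit lookahead distance

    def pure_pursuit_steer(x, y, yaw, path):
        """Steering angle that points the vehicle at a lookahead point on the path."""
        target = next((p for p in path
                       if math.hypot(p[0] - x, p[1] - y) >= LOOKAHEAD), path[-1])
        alpha = math.atan2(target[1] - y, target[0] - x) - yaw   # target bearing in body frame
        return math.atan2(2.0 * WHEELBASE * math.sin(alpha), LOOKAHEAD)

    def step(x, y, yaw, v, delta, dt=0.05):
        """Advance the kinematic bicycle model by one time step."""
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += v / WHEELBASE * math.tan(delta) * dt
        return x, y, yaw

    # Follow a straight reference path at constant speed, starting 3 m off the path.
    path = [(float(i), 0.0) for i in range(100)]
    x, y, yaw, v = 0.0, 3.0, 0.0, 10.0
    for _ in range(200):
        x, y, yaw = step(x, y, yaw, v, pure_pursuit_steer(x, y, yaw, path))
    print(f"lateral error after 10 s: {y:.2f} m")

A real controller would replace the kinematic model with a dynamic one including tire slip, and it is precisely the uncertainty in that slip behaviour that the robust design has to address.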

It can be concluded that still a few important questions need an answer by robust control people, e.g.:

  • “can we provide a sufficient level of robustness with respect to all roads and tire conditions, without decreasing too much the performance?”
  • “are we able to replicate the response of an expert driver to a sudden change of external conditions?”
  • “how can we exploit at best the information coming from the additional sensors usually not available on-board (e.g., cameras, sonars…)?”

but also many others.

IEEE experts estimate that up to 75% of all vehicles will be autonomous by 2040. This scenario will be accompanied by significant cost savings associated with human lives, time and energy. As control scientists and engineers, it really seems we can play a leading role towards this important social and economic leap.

Download the article
The PDF document with references can be downloaded here (150 kB)

 

Article provided by
Simone Formentin, PhD, Assistant Professor
IFAC Technical Committee 2.5: Robust Control 

Controlling an autonomous refuse handling system

With the potential to increase both the safety and the quality of our daily use of and interaction with vehicles, autonomous vehicles are currently a major trend in the automotive industry. The focus up to now has mainly been on autonomous driving of passenger cars, such as platooning and queueing assistance. There have also been initial tests with construction equipment that performs autonomous asphalt spreading and gravel loading. A further step to extend and improve the service we experience today might be to combine vehicles with peripheral support devices, joining autonomous driving with autonomous loading and unloading of goods. In the future, an autonomous electrified distribution truck might, for example, work together with support devices to enable autonomous loading and unloading of goods to and from our doorstep just hours after we order a pick-up or delivery service online.

The Robot based Autonomous Refuse handling (ROAR) project is a first attempt to demonstrate such an autonomous combination. An operator-driven refuse collection truck is equipped with autonomous support devices that fetch, empty, and put back refuse bins in a predefined area.

The physical demonstrator in the ROAR project consists of one truck and four support devices. When the truck has stopped in an area, a camera-equipped quadcopter is launched from the truck roof to search for bins and store their positions in the system. As bin positions become available in the system, an autonomously moving robot is sent out from the truck to fetch the first bin. The system’s path planner calculates the path to the bin as an array of waypoints, based on a pre-existing map of the area. While following the waypoints, the robot is intelligent enough to avoid obstacles that are not on the map. To accomplish this detection, the robot is equipped with a LiDAR and ultrasonic sensors.
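
As a rough illustration of this navigation step, the Python sketch below drives a robot through an array of waypoints and pauses whenever the range sensors report an unmapped obstacle closer than a safety threshold. The robot interface (robot.x, robot.range_readings(), robot.drive_towards(), robot.stop()) and the thresholds are hypothetical placeholders, not the actual ROAR implementation.

    import math

    WAYPOINT_RADIUS = 0.3   # [m] distance at which a waypoint counts as reached (assumed)
    SAFE_DISTANCE = 1.0     # [m] pause if an unmapped obstacle is closer than this (assumed)

    def follow_path(robot, waypoints):
        """Drive the robot through the planner's waypoints, pausing for obstacles."""
        for wx, wy in waypoints:
            while math.hypot(wx - robot.x, wy - robot.y) > WAYPOINT_RADIUS:
                # LiDAR/ultrasonic readings reveal obstacles that are not on the map.
                if min(robot.range_readings(), default=float("inf")) < SAFE_DISTANCE:
                    robot.stop()        # wait until the path is clear (or trigger a replan)
                    continue
                robot.drive_towards(wx, wy)
        robot.stop()                    # last waypoint reached: hand over to pick-up mode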

After reaching the last waypoint, the robot changes from navigation to pick-up mode. By exploiting the LiDAR and a front-facing camera, the exact position and orientation of the bin can be detected. The robot aligns itself so that the bin can be picked up.

After the pick-up, the planner provides the robot with a new path back to the truck. After the last waypoint, the robot aligns with the lift at the rear of the truck. The lift is set at a pre-defined angle, so that the robot can move up to the lift and hook the bin onto it. While the bin is being emptied, the lift system monitors the area around the lift with a camera to ensure that no person is in the way of the lift. If someone is, the lift movement is paused until the area is clear.

An emptied bin is picked up by the robot and returned to its initial position, once again based on a path from the planner. When reaching the initial bin position, the bin is put down. The robot can thereafter move to the next bin to be emptied, and the emptying procedure is repeated.

When there are no more bins to empty, the robot moves back to the truck and aligns itself with the lift. Just like a bin, the robot is hooked onto the lift, and the overall procedure is completed. The truck can then be started and driven to the next area.

The coordination of the truck and the support devices is based on a discrete event system model. This model abstracts the overall emptying procedure into a finite number of states and transitions. The states capture distinguishable aspects of the system, such as the positions of the devices and whether each bin is empty or full. The transitions model the start and completion of the various operations that the devices can perform. All steps in the above description of the emptying procedure can be modeled by such operations.
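
A minimal Python sketch of such a model is given below: the state is a handful of variables, and each operation has a guard on the current state plus an effect applied when it completes. The state variables, operation names and guards are invented for illustration and only cover the fetch-and-empty part of the procedure; they do not reproduce the actual ROAR model.

    # Discrete event model sketch: state variables, and operations with
    # (guard, completion effect) pairs. Names are illustrative assumptions.
    state = {"robot_at": "truck", "has_bin": False, "bin_emptied": False}

    operations = {
        "go_to_bin":       (lambda s: s["robot_at"] == "truck" and not s["has_bin"],
                            lambda s: s.update(robot_at="bin")),
        "pick_up_bin":     (lambda s: s["robot_at"] == "bin" and not s["has_bin"],
                            lambda s: s.update(has_bin=True)),
        "return_to_truck": (lambda s: s["robot_at"] == "bin" and s["has_bin"],
                            lambda s: s.update(robot_at="truck")),
        "empty_bin":       (lambda s: s["robot_at"] == "truck" and s["has_bin"]
                                      and not s["bin_emptied"],
                            lambda s: s.update(bin_emptied=True)),
    }

    def enabled(ops, s):
        """Operations whose guard holds in the current state."""
        return [name for name, (guard, _) in ops.items() if guard(s)]

    # Crude execution: fire enabled operations until nothing more can happen.
    while enabled(operations, state):
        op = enabled(operations, state)[0]
        operations[op][1](state)        # the completion transition updates the state
        print(f"completed {op}: {state}")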

The investment in the discrete event model brings a number of attractive properties. During the development phase, the model can be derived using formal methods. Verification as well as synthesis (iterative verification) is then employed to refine an initial model until it satisfies the specifications on the system.

Moreover, the development of the actual execution of an operation can be separated from the coordination of the operation. As an example, consider the operation modeling that the robot navigates along a path. From an execution point of view, the operation must assure that, given a path, the robot eventually ends up at the last waypoint without colliding with any obstacle. From a coordination point of view, the operation must only be enabled when there is a path present in the system and the robot is positioned close to the initial waypoint.
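
A small Python sketch of the coordination side of this example is shown below: a guard function that decides whether the "navigate path" operation may start. The state layout and the 0.5 m tolerance are assumptions for illustration; the execution side (actually driving along the waypoints and avoiding obstacles) would live in separate code, as in the navigation sketch earlier.

    import math

    START_TOLERANCE = 0.5   # [m] how close to the first waypoint the robot must be (assumed)

    def navigate_path_enabled(system_state):
        """Coordination guard: a path exists and the robot stands near its first waypoint."""
        path = system_state.get("path")          # list of (x, y) waypoints, or None
        if not path:
            return False                         # no path present in the system
        robot_x, robot_y = system_state["robot_pose"]
        first_x, first_y = path[0]
        return math.hypot(first_x - robot_x, first_y - robot_y) <= START_TOLERANCE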

The model contains two types of operations: operations that model the nominal behavior, and operations that model foreseen non-nominal behavior. The recovery operations in the second group can, for example, describe what the system can do when the robot cannot find a bin at the end of a path, or how to re-hook an incorrectly placed bin on the truck lift.

The discrete event model can also be exploited to handle more severe recovery situations, after unforeseen errors. As part of the development, the restart states in the system are calculated from the model. Upon recovery, to simplify the resynchronization between the control system and the physical system, the operator sets the active state of the control system to such a restart state and modifies the physical system accordingly. By recovering from a restart state, it is guaranteed that the system can eventually finish an ongoing emptying procedure.

The truck and the support devices are connected using the Robot Operating System (ROS). ROS is an operating-system-like robotics middleware that, among other things, enables message passing between the components defined in the system. Two types of messages are used in the ROAR project. The first type is messages related to the start and completion of operations. An operation start message is triggered from a user interface and is translated into a method call in the support device executing that operation. Under nominal conditions, this support device will eventually respond with a message saying that the operation has been completed. Both messages update the current active state of the control system.
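
A hedged Python (rospy) sketch of this first message type is shown below: a support device subscribes to an operation-start topic, runs the corresponding local method, and publishes a completion message. The topic names, the plain-string payload, and the execute_operation() helper are assumptions for illustration, not the actual ROAR interfaces.

    import rospy
    from std_msgs.msg import String

    def execute_operation(name):
        rospy.sleep(1.0)                         # stand-in for the device's real work

    def on_operation_start(msg):
        # Translate the start message into a method call on this support device.
        rospy.loginfo("starting operation: %s", msg.data)
        execute_operation(msg.data)
        # Under nominal conditions, report back that the operation has completed.
        done_pub.publish(String(data=msg.data))

    rospy.init_node("support_device")
    done_pub = rospy.Publisher("/roar/operation_completed", String, queue_size=10)
    rospy.Subscriber("/roar/operation_start", String, on_operation_start)
    rospy.spin()                                 # process messages until shutdown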

The second type is messages related to transferring data. Data transfer can be both internal, within the programs connected to a support device, and external, between support devices. An example of external data transfer is a path that is created in the path planner and then transferred to the robot.
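
For the second message type, a standard ROS message such as nav_msgs/Path is a natural fit. The Python sketch below wraps a list of (x, y) waypoints into a Path and publishes it; the topic name and the waypoints are illustrative assumptions, and whether ROAR uses this particular message type is not stated in the article.

    import rospy
    from nav_msgs.msg import Path
    from geometry_msgs.msg import PoseStamped

    def publish_path(pub, waypoints, frame_id="map"):
        """Wrap (x, y) waypoints in a nav_msgs/Path and publish it."""
        path = Path()
        path.header.stamp = rospy.Time.now()
        path.header.frame_id = frame_id
        for x, y in waypoints:
            pose = PoseStamped()
            pose.header.frame_id = frame_id
            pose.pose.position.x = x
            pose.pose.position.y = y
            pose.pose.orientation.w = 1.0        # no heading information in this sketch
            path.poses.append(pose)
        pub.publish(path)

    rospy.init_node("path_planner")
    path_pub = rospy.Publisher("/roar/planned_path", Path, queue_size=1)
    rospy.sleep(0.5)                             # give subscribers time to connect
    publish_path(path_pub, [(0.0, 0.0), (2.0, 1.0), (4.0, 1.5)])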

During execution, the discrete event model is hosted on a web server. Interaction with the model is facilitated by the server’s API. Operator interaction is accomplished through a web-based user interface. With a web-based interface, an operator can access the model using any device connected to the system’s network. This can, for example, be a computer in the truck cabin or a touchpad strapped to the operator’s forearm.

At the other end, ROS is also connected to the API. As pointed out before, this connection ensures that operations started by the operator through the user interface are translated into method calls in the appropriate support device. Completion of the operation execution is translated into a POST request to the API. This updates the discrete event model to capture that the operation has been completed.
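
As an illustration of that last step, the Python sketch below reports a completed operation to the model server with a plain HTTP POST. The server address, endpoint layout and payload are hypothetical; the article only states that completion is translated into a POST request against the server's API.

    import requests

    API_URL = "http://localhost:8080/api/operations"   # assumed address and endpoint layout

    def report_completed(operation_name):
        """Tell the server hosting the discrete event model that an operation has finished."""
        response = requests.post(
            f"{API_URL}/{operation_name}/completed",
            json={"status": "completed"},
            timeout=5.0,
        )
        response.raise_for_status()   # the model's active state is then updated server-side

    report_completed("empty_bin")
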
The physical demonstrator in the ROAR project is limited to a single robot for the bin handling. A next step could be to include more bin handling robots. For the specific field of application with refuse handling, more bin handling robots could enable higher efficiency in the emptying procedure. Several robots might also allow the noisy truck to be parked further away from the bins, and thus cause less disturbance where people live. Today this is avoided because a distant truck would force the operator to walk too far.

From a more general point of view, coordination of multiple autonomous devices is an open research question. The two extremes are that the coordination is either performed from one central unit to which all devices are connected, or that the devices are intelligent enough to solve the coordination among themselves in a distributed manner. The two major coordination challenges to handle are the distribution of tasks between the devices and the distribution of the space where the devices can operate. The overall goal is thus to accomplish all tasks in some optimal way while assuring that no device is physically blocked in the operating area.

Turning this overall control and coordination of one truck and several autonomous support devices into a product is an interesting challenge. Imagine a future scenario where a haulage contractor company orders a new system. The truck is perhaps ordered from company A, with heavy-duty equipment from company B. The equipment is complemented with support devices from company C and company D. To operate properly, the system should also use services from the cloud, provided by some companies E and F. To further add to the equation, it is likely that operators are also in the loop to cope with unforeseen situations, complex item handling and parts of the decision making.

All in all, this text has only cracked open the door to what will come after the autonomous driving of passenger cars that we see today. There are still many mountains to climb and standards to agree upon before areas other than “just” the driving become automated. The outcome of the ROAR project is thus only a small step on a long journey ahead.

The ROAR project was initiated and is led by Volvo Group. Chalmers University of Technology, Mälardalen University and Pennsylvania State University take part in the project as Preferred Academic Partners to Volvo Group. The intention from Volvo Group is that students, through bachelor and master theses, should perform most of the development.

Article provided by
Patrik Bergagård, PhD, ROAR Project Leader
Martin Fabian, Professor, Automation
IFAC Technical Committee 1.3: Discrete Event and Hybrid Systems

