What did we learn from the DARPA Robotics Challenge?

by Chris Atkeson, CMU, Pittsburgh, USA

This talk surveys what we learned from the DARPA Robotics Challenge.
We will discuss in more detail how to make robots safe (lightweight and soft) and how to make them more aware (whole-body vision). 

Building Lifelike Humanoid (and Non-Humanoid) Characters

by Katsu Yamane, Disney Research, USA

Entertainment is one of many applications of humanoid robotics, but it is unique in the sense that the "task" here is to make people believe that they are not watching a robot, but rather a living character with personality and emotion. Pursuing speed, power, accuracy, or even efficiency often does not make much sense in such applications. It therefore requires a completely different paradigm for hardware and software development from traditional robotics. In this talk, I will discuss three elements that I believe are important for entertainment robots: motion style, interaction, and hardware design. The first part of the talk introduces various human-to-robot motion retargeting techniques for creating stylistic and expressive motions of both humanoid and non-humanoid characters. In the second part, I will introduce hardware prototypes of soft robots developed with the goal of realizing safe direct physical interactions, including hand-shaking and hugging. Finally, I will discuss the unique design constraints that entertainment robots face and present a few solutions to these problems.

Embodiment and Integrated Task and Motion Planning for Human-Centered Robots

by Luis Sentis, University of Texas at Austin

As new embedded systems and machine design methods are devised, the number of actuators and sensors on robots steadily increases. This new generation of robots is required to blend in with humans, achieve tasks quickly, and guarantee safety. One question that arises is what computational control and planning schemes are needed for such high-dimensional systems. In this talk, I will first discuss motion planning methods to achieve agile, robust and versatile locomotion capabilities. In particular, I will discuss methods based on robustly tracking a set of non-periodic keyframe states. By formulating the invariance properties and distance metrics of phase-space planners, I will describe how to achieve robust dynamic locomotion under external disturbances. I will then discuss the implementation of these approaches on highly dynamic biped robots and describe experimentation procedures to walk without supports. I will proceed to describe challenges in the design of high-performance series elastic actuators. This type of actuator is ideal for human-robot interaction due to its passive compliant properties. I will present the design of the UT-SEA and how my laboratory leveraged it to design the NASA Valkyrie humanoid robot. Finally, I will discuss our activities on commercializing high-performance actuators and humanoid robots for the aerospace and service sectors under the umbrella of Apptronik Systems Inc.

Towards dynamic multi-contact locomotion by combining robust balancing and walking

by Christian Ott, DLR, Germany

In this talk I will present our progress on achieving multi-contact locomotion with the humanoid robot TORO. In recent years we have developed a passivity-based framework for compliant multi-contact balancing. The required balancing forces are distributed to the available contact points and then realized by an underlying joint torque controller. Recently, we extended this framework to handle task hierarchies and trajectory tracking tasks, which are required for achieving more dynamic motions. So far these developments have considered a fixed set of contact points, and transitions between different contact configurations were executed in a quasi-static way. Handling contact transitions in a dynamic way, on the other hand, is regularly performed in walking control. This motivates combining the passivity-based balancing approach with algorithms for gait generation and stabilization based on the concept of the divergent component of motion. In this talk I will present some initial experimental results in this direction. In the final part of the talk, I will also highlight some new developments towards highly dynamic and physically robust humanoids based on elastic actuation.

Multi-contact humanoid technology for aircraft manufacturing

by Abderrahmane Kheddar, JRL (CNRS-AIST), LIRMM (CNRS), France

This talk sums up the stakes and current achievements of the COMANOID project in using multi-contact technology and humanoid robotics in Airbus Group aircraft manufacturing. COMANOID investigates the deployment of robotic solutions in well-identified Airbus airliner assembly operations that are laborious or tedious for human workers and for which access is impossible for wheeled or rail-ported robotic platforms. As a solution to these constraints, a humanoid robot embedded with multi-contact motion technology is proposed to achieve the described tasks in real use cases provided by Airbus Group. We will present the project and some of its goals, and discuss difficulties inherent to innovation, some of the problems that need to be solved, and the gaps that remain to be overcome in achieving this challenging goal.


Making Hard Robots Actively Soft

by Gordon Cheng, TUM, Germany

Conventional robots are hard - and not soft - which makes them unsafe for real physical interactions. Efforts with joint force/torque control have introduced a way to interact with robots, yielding a level of compliance that makes them a bit softer. In our latest work, we offer yet another level of compliance, providing softness at the surface level - making robots actively soft. In this talk, I will present our work through all the stages of creating this new level of surface softness control.

Lexicographic Model Predictive Control of balance and walking

by Pierre-Brice Wieber, INRIA, France

Model Predictive Control is currently the most widely used approach to balance and walking control of biped robots, while Lexicographic Programming has been proposed as an efficient approach to controlling robots with multiple, possibly conflicting objectives. Combining these two approaches makes it possible to generate new, complex behaviours, e.g., for walking safely in dynamic human crowds, or for deciding to use an additional contact to preserve balance. This combination, however, faces the typical problem of mathematical programming and logic reasoning: the robot ends up doing exactly what you asked, only for you to realise this was not exactly what you wanted. Proper modelling of control objectives therefore appears first and foremost.
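As a toy illustration of the lexicographic idea (not the formulation used in the talk), a hierarchy of least-squares objectives can be solved by letting each priority level optimize only within the nullspace of the levels above it:

```python
import numpy as np

def lexicographic_lsq(tasks):
    """Solve a hierarchy of least-squares tasks (A_i x ~ b_i) so that
    lower-priority tasks are optimized only in the nullspace of
    higher-priority ones."""
    n = tasks[0][0].shape[1]
    x = np.zeros(n)
    N = np.eye(n)  # nullspace projector of all tasks solved so far
    for A, b in tasks:
        AN = A @ N
        # optimal correction within the remaining nullspace
        x = x + N @ np.linalg.pinv(AN) @ (b - A @ x)
        # shrink the nullspace by the directions used at this level
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x

# priority 1: x0 + x1 = 1 ; priority 2: x0 = 5 (resolved without
# disturbing priority 1)
x = lexicographic_lsq([(np.array([[1.0, 1.0]]), np.array([1.0])),
                       (np.array([[1.0, 0.0]]), np.array([5.0]))])
```

The nullspace projection guarantees that the second objective can never degrade the first, which is the defining property of a lexicographic ordering.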

Engineering humanoids that grasp, learn from human and experience, and perceive time

by Tamim Asfour, KIT, Germany

Engineering complete humanoid robots which are able to predict the consequences of actions and exploit the interaction with the world to extend their cognitive horizon remains a research grand challenge. The talk will first discuss our progress towards the engineering of humanoid robots that are able to perform complex grasping and manipulation tasks in kitchens, warehouses and disaster environments. To this end, we present a complete architecture for humanoid robots, which integrates sensorimotor control, a memory system and symbolic planning to execute tasks in the real world by linking existing robot experience to new observations, supplementing the robot's knowledge with missing information about object-, action- and planning-relevant entities. The second part of the talk will discuss the parallelism between grasping and whole-body balancing. Inspired by the idea that a stable whole-body configuration of a humanoid robot can be seen as a stable grasp on an object, we present a taxonomy of whole-body poses, i.e. whole-body grasps, and its data-driven validation based on human motion capture data. Based on this taxonomy, we show how whole-body multi-contact motions for a humanoid robot can be generated using natural language models learned from human motion data, where body support poses represent the words and motions the sentences. We promote time perception as a fundamental capacity of autonomous systems that plays a key role in the development of intelligence, because time is important for encoding, recalling and exploiting experiences (knowing), for making plans to accomplish timely goals at certain moments (doing), and for maintaining the identity of self over time despite changing contexts (being).
Finally, we present early results on time processing mechanisms in robotics and show how semantic representation and hierarchical segmentation of human demonstrations based on spatio-temporal object interactions can be combined to facilitate the perception of action durations.
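The support-pose "language model" idea can be illustrated with a minimal sketch (hypothetical pose names and plain bigram counts, not the models learned in the actual work): poses are treated as words, and new motions are sampled as sentences from observed transitions:

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(sequences):
    """Count transitions between consecutive support poses (the 'words')."""
    model = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            model[a][b] += 1
    return model

def sample_motion(model, start, length, rng):
    """Sample a pose sequence (a 'sentence') by following transition counts."""
    seq = [start]
    for _ in range(length - 1):
        nxt = model[seq[-1]]
        if not nxt:
            break
        poses, counts = zip(*nxt.items())
        seq.append(rng.choices(poses, weights=counts)[0])
    return seq

# hypothetical demonstrations of whole-body support-pose sequences
demos = [["stand", "hand-on-rail", "step", "stand"],
         ["stand", "step", "hand-on-rail", "step", "stand"]]
model = train_bigram_model(demos)
motion = sample_motion(model, "stand", 5, random.Random(0))
```

Every sampled transition is, by construction, one that was observed in the demonstrations, which is the appeal of the language-model analogy.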

Anthropomorphic Robots for Disaster Response

by Sven Behnke, University of Bonn, Autonomous Intelligent Systems

Events like the Fukushima earthquake and tsunami demonstrate that today's disaster-response robots lack the mobility and manipulation capabilities needed for effective intervention, e.g. in a damaged nuclear power plant. To advance the state of the art in disaster-response robotics, DARPA held its Robotics Challenge in 2015, where robots had to solve a variety of tasks relevant for affected man-made environments. For this challenge, our team developed the mobile manipulation robot Momaro. Using a hybrid wheeled-legged base, it is capable of omnidirectional driving, can adapt to the terrain, and overcomes obstacles by making steps. The robot is equipped with an anthropomorphic upper body for human-like manipulation. Using an immersive teleoperation interface, Momaro demonstrated several disaster-response tasks at the DARPA Robotics Challenge, where our team NimbRo finished as the best European team. We also developed autonomous navigation and manipulation behaviors for Momaro, which were demonstrated at the DLR SpaceBot Camp 2015. These methods are being further advanced in the ongoing H2020 project CENTAURO.

HYDROïD: New Humanoid platform with Integrated Hydraulic Actuation technology

by Samer Alfayad, LISV, UVSQ, Velizy

HYDROïD (HYDraulic andROïD) is a full-size humanoid robot under development that aims to contribute to improving our understanding of the phenomena of human locomotion and manipulation. Humanoids with hydraulic actuation are able to achieve hard and useful tasks and to replace humans in disaster environments.

In hydraulic actuation, pistons are used to produce motion. At least one hydraulic piston is implemented for each degree of freedom (DoF). The main difficulty is how to bring hydraulic energy to each piston. Hydraulic pipes are usually employed to drive hydraulic power from the control unit to the pistons. Each piston needs two pipes, one to drive fluid "in" and the other to drive it "out". In robotic applications, flexible hydraulic pipes are used to drive the oil in parallel to the joints. This solution suffers from three main disadvantages: I) the hydraulic pipes connected in parallel to a joint introduce a spring effect, which has to be considered when developing the control law; II) the more pipes there are, the higher the probability of leakage; III) external pipes dramatically increase the robot's size and decrease its anthropomorphic aspect.

To answer all these questions, a new "integrated hydraulic actuation" method was proposed and implemented on HYDROïD. The goal is to eliminate all external pipes and replace them with integrated hydraulic passages. Fluid paths are integrated internally through the mechanical structure rather than externally through pipes. In other words, "arteries" and "veins" were built inside the HYDROïD body to drive hydraulic fluid, like blood in the human body.

This presentation will focus on two research areas. First, we will focus on two innovative hybrid mechanisms, each consisting of a rotating actuator carrying a parallel structure with two active DoFs. The first type is dedicated to the hip, shoulder and torso modules. The second type, a cable-actuated parallel structure, was chosen for the ankle, wrist and neck modules.

The second part of this presentation will be dedicated to the actuation of the HYDROïD robot, for which a new, highly integrated actuator has been proposed. The actuation principle will be detailed and the benefits of the proposed solution will be shown, together with the very promising performance of the realized prototype.

Robustness to Joint-Torque Tracking Errors in Task-Space Inverse Dynamics

by Andrea Del Prete, LAAS, CNRS, Toulouse

Task-Space Inverse Dynamics (TSID) is a well-known optimization-based technique for the control of highly-redundant mechanical systems, such as humanoid robots. One of its main flaws is that it does not take into account any of the uncertainties affecting these systems: poor torque tracking, sensor noise, delays and model uncertainties. As a consequence, the resulting control and state trajectories may be feasible for the ideal system, but not for the real one. We propose to improve the robustness of TSID by modeling uncertainties in the joint torques, either as Gaussian random variables or as bounded deterministic variables. We then try to immunize the constraints of the system to any—or at least most—of the realizations of these uncertainties. When the resulting optimization problem is too computationally expensive for online control, we propose ways to approximate it that lead to resolution times below 1 ms. Extensive simulations in a realistic environment show that the proposed robust controllers greatly outperform the classic one, even when other unmodeled uncertainties affect the system (e.g. errors in the inertial parameters, delays in the velocity estimates).
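The bounded-deterministic case can be illustrated with a minimal sketch (illustrative numbers, not the formulation of the talk): for linear constraints A·τ ≤ b and a per-joint torque error bound |δᵢ| ≤ εᵢ, the worst-case realization of A·(τ + δ) exceeds A·τ by at most |A|·ε, so the bounds can be tightened accordingly:

```python
import numpy as np

def tighten_bounds(A, b, eps):
    """Robustify the linear constraints A @ tau <= b against a bounded
    joint-torque tracking error |delta_i| <= eps_i: the worst case of
    A @ (tau + delta) is A @ tau + |A| @ eps, so b shrinks by |A| @ eps."""
    return b - np.abs(A) @ eps

# illustrative constraint matrix, bounds, and per-joint error bounds
A = np.array([[1.0, -2.0],
              [0.0,  1.0]])
b = np.array([10.0, 3.0])
eps = np.array([0.5, 0.5])

b_robust = tighten_bounds(A, b, eps)
# any tau satisfying A @ tau <= b_robust also satisfies the original
# constraints for every admissible error delta
```

This conservative tightening keeps the problem a standard QP, which is one reason the bounded model is attractive for online control.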


Efficient Humanoid Navigation through Cluttered 3D Environments

by Maren Bennewitz, University of Bonn, Germany

A variety of approaches exist that tackle the problem of humanoid locomotion. Simple dynamic walking controllers do not guarantee collision-free steps, whereas most existing footstep planners are not capable of providing results in real time. Thus, these methods cannot be used to react to sudden changes in the environment. In this talk, we will present a real-time capable A* footstep planner that uses an adaptive 3D action set. When combined with our fast planar region segmentation framework, it allows us to find valid 3D footstep plans in complex scenes while reducing the risk of leading the robot into local minima. In addition, we will discuss a possible extension of the footstep planner by an object density prediction approach in order to avoid highly cluttered regions. In this way, we can further minimize the risk of leading the robot into local minima or situations that require high movement costs.
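As a rough sketch of footstep search (a toy 2D stand-in with hypothetical step displacements, not the adaptive 3D action set of the actual planner), A* over a discrete set of step actions looks like:

```python
import heapq

def astar_footsteps(start, goal, obstacles, steps):
    """Minimal A* footstep search over discrete step displacements."""
    def h(p):  # admissible heuristic: Chebyshev distance to the goal
        return max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in obstacles or nxt in seen:
                continue
            heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no valid footstep plan

# hypothetical action set and cluttered cells
steps = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1)]
path = astar_footsteps((0, 0), (3, 0), {(1, 0), (1, 1)}, steps)
```

The real planner replaces the grid with 3D foot poses validated against segmented planar regions, but the search skeleton is the same.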

Using the momentum equations to plan and control multi-contact behaviors

by Ludovic Righetti, MPI, Germany

The past few years have seen a growing interest in the planning and control of multi-contact legged locomotion, especially using receding horizon optimal control. While progress is impressive, there remain important theoretical, computational and practical limitations to the achievement of such complex tasks. In this presentation, I will present some of our research results towards multi-contact legged locomotion. An important thread of our research concerns the use of the momentum equations of a legged robot to reason simultaneously about motion and contact forces. In particular, I will present two recent algorithms to compute optimal contact forces and momentum. In the first one (Herzog et al., IROS 2016), a natural decomposition of the dynamics is proposed which leads to faster and better structured whole-body optimization algorithms. In the second one (Ponton et al., Humanoids 2016), a convex relaxation of the momentum dynamics brings computation times into the real-time realm. I will use these two examples to argue that the structure of the optimal control problems related to multi-contact legged locomotion can and should be exploited to create more efficient solvers, allowing receding horizon optimal control with manageable computational complexity.
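The role of the momentum equations can be illustrated with a minimal static sketch (illustrative numbers and a plain least-squares solve, not the optimization algorithms cited above): to keep the centroidal momentum constant, the contact forces must cancel gravity and produce zero net moment about the center of mass:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def static_contact_forces(contacts, com, mass, g=9.81):
    """Distribute contact forces so that sum f_i balances gravity and
    sum (p_i - c) x f_i = 0 (least-squares sketch of the momentum equations)."""
    n = len(contacts)
    A = np.zeros((6, 3 * n))
    for i, p in enumerate(contacts):
        A[:3, 3*i:3*i+3] = np.eye(3)          # linear momentum rows
        A[3:, 3*i:3*i+3] = skew(p - com)      # angular momentum rows
    b = np.concatenate([[0.0, 0.0, mass * g], np.zeros(3)])
    f = np.linalg.lstsq(A, b, rcond=None)[0]
    return f.reshape(n, 3)

# two hypothetical flat-ground contacts below a 50 kg robot
contacts = [np.array([0.1, 0.1, 0.0]), np.array([-0.1, -0.1, 0.0])]
forces = static_contact_forces(contacts, com=np.array([0.0, 0.0, 0.8]), mass=50.0)
```

The full problem adds friction cones and momentum rates over a horizon, which is where the decomposition and convex relaxation discussed in the talk come in.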

Multi-stage multi-contact motion generation

by Nicolas Mansard, LAAS, CNRS, France

Optimal control provides an appealing theoretical framework to design, generate and control complex movements on any robotic system. However, the automatic generation of a trajectory from an optimal-control problem is often a costly mathematical process. As roboticists, we then rely on two approximations to speed up the resolution. First, we accept sub-optimality; second, we design dedicated heuristics: simplified models (e.g. the cart-table model in locomotion), warm-start assumptions (e.g. motion graphs in computer animation), or problem reformulations based on non-trivial properties (e.g. flatness in quadcopters). In this presentation, we will discuss the trade-offs between these choices, and how we might go beyond them. The presentation will be set in the context of multi-contact locomotion.

Dynamics for interaction, learning for damage recovery

by Serena Ivaldi and Jean-Baptiste Mouret, INRIA, France

This talk will describe some of the recent results obtained at Inria Nancy - Grand Est (Larsen team). It will be divided into two parts: physical interaction with iCub, and learning for damage recovery in legged robots. In the first part, we will explain how we exploit multimodal sensory information (force/torque sensors, artificial skin, inertial sensors) to compute the external contact forces that occur on the whole body of the robot (including when there are multiple contacts), and to estimate the joint torques. We will then describe the results of human-humanoid physical collaboration experiments on assembly with naive participants, where we study the combination of physical and social signals. In the second part, we will describe our ongoing efforts to design data-efficient trial-and-error algorithms that make it possible for a robot (in particular, a 6-legged robot) to recover from unforeseen damage conditions in a few minutes (a dozen trials).

Advanced Behavior-based Control of Bipedal Locomotion

by Karsten Berns, University of Kaiserslautern, Germany

Due to the disadvantages of conventional bipedal control approaches, many researchers have studied the biological and biomechanical aspects of human locomotion, from which valuable findings can be transferred to the design of control approaches for bipedal walking robots. Among them, a Bio-Inspired Behavior-Based Bipedal Locomotion Control (B4LC) was developed by the Robotics Research Lab (RRLab), based on concepts found in human motion control, with the hypothesis that a control system similar to the human one will enable a robot to perform locomotion in a likewise fashion. Building on the B4LC approach, certain walking skills, e.g. walking initiation, cyclic walking, walking termination, curve walking and upslope walking, are designed based on biological and biomechanical studies. To enhance the robustness of bipedal walking in more challenging environments, a learning scheme is further implemented in B4LC using Particle Swarm Optimization and reinforcement learning. With the proposed learning scheme, the locomotion skills are extended to advanced ones such as walking at different velocities, rejecting external pushes, and stepping over large obstacles.


Visuo-haptic control for physical human-robot interaction - Recent works at the LIRMM IDH group

by Andrea Cherubini, University of Montpellier LIRMM-IDH

A fundamental requirement for the success of human-robot copresence is the capability to handle physical contact between the two agents. In this sense, haptic feedback becomes mandatory to achieve safe and dependable interaction. Concurrently, the robot must be capable of recognizing the intentions of the human, to properly adapt its behaviour. A rich non-verbal communication channel for transferring this information is vision. Thus, combining vision with haptics (force/touch) becomes crucial. In this talk, I will present the recent works of the LIRMM IDH group that exploit multimodal perception to guarantee safe physical interaction between robots and humans.

The Yoyo-Man

by Jean-Paul Laumond, LAAS, CNRS, Toulouse

The talk reports on two results from a multidisciplinary research action aiming to explore the motor synergies of anthropomorphic walking. By combining biomechanics, neurophysiology and robotics perspectives, it is intended to better understand human locomotion, with the ambition to better design bipedal robot architectures. The motivation for the research starts from the simple observation that humans may stumble when following a simple reflex-based locomotion scheme on uneven terrain. The rationale combines two well-established results in robotics and neuroscience respectively:

• Passive robot walkers, which are very efficient in terms of energy consumption, can be modelled by a simple rotating rimless wheel;

• Humans and animals stabilize their head when moving.

The seminal hypothesis is then to consider a wheel equipped with a top-down control as a plausible model of bipedal walking. The two results presented in the paper support this hypothesis:

• From a motion capture database of twelve human walkers, we first identify the center of mass (CoM) as a geometric center from which the motions of the feet are organized.

• After introducing a ground texture model that allows quantifying the stability performance of walker control schemes, we show that compass-like passive walkers are better controlled when equipped with a stabilized 2-degree-of-freedom moving mass on top of them. CoM and head then play complementary roles that define what we call the Yoyo-Man (joint work with J. Carpentier and M. Benallegue).
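The rimless-wheel model mentioned above admits a simple step-to-step energy-balance sketch (a common textbook simplification with illustrative parameters, not the analysis of the talk): each spoke impact scales the angular speed by cos(2α), while the descent along the slope restores kinetic energy:

```python
import math

def rimless_wheel_speed(alpha, gamma, L=1.0, g=9.81, steps=200, w0=0.5):
    """Iterate the step-to-step energy-balance map of a rimless wheel
    (half inter-spoke angle alpha, slope gamma, leg length L): each impact
    scales the pre-impact speed by cos(2*alpha), the slope restores energy."""
    c = math.cos(2 * alpha) ** 2
    gain = 4 * g / L * math.sin(alpha) * math.sin(gamma)
    v = w0 ** 2                     # squared angular speed before impact
    for _ in range(steps):
        v = c * v + gain            # energy lost at impact, regained downhill
    return math.sqrt(v)

# converges to the analytic fixed point sqrt(gain / (1 - c))
w_star = rimless_wheel_speed(alpha=math.radians(15), gamma=math.radians(4))
```

The existence of this stable fixed point is what makes the rimless wheel such an efficient model of passive walking; the Yoyo-Man adds the stabilized mass on top.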

Loco-manipulation Learning Control for Humanoid Robots

by Dongheui Lee, TUM, Germany

In this talk I will introduce our recent progress on humanoid robot loco-manipulation learning control at the Dynamic Human Robot Interaction Lab at TUM. Imitation learning is known to be an efficient way to learn new skills through human guidance, which can reduce the time and cost of programming the robot. Different approaches for a human to teach a humanoid robot manipulation tasks have been proposed. Incremental learning with heterogeneous teaching modalities (e.g. visual and haptic coaching) can ease the teaching procedure for a complex humanoid robot. Finally, I would like to discuss how to improve the stability of bipedal walking by iterative learning control. Empirical evaluations on several humanoid robotic systems (including NAO, Justin, TORO, and the IRT robot) illustrate the effectiveness and applicability of learning control for high-dimensional anthropomorphic robots.


Humanoid Robots in real environments

by Francesco Ferro, PAL Robotics, Spain

One of the biggest challenges engineers face when designing and programming humanoid robots is preparing them for non-controlled environments. At PAL Robotics we believe that biped humanoid platforms are the best robotic machines to serve us in both domestic and industrial environments in the future, in order to increase people's quality of life. To achieve this goal, the team has put a huge engineering effort into perfecting the robots' stabilization and control. After 13 years in operation, PAL Robotics has gathered experience in testing humanoid platforms in real environments, which has helped us improve our robotic systems. Our most recent development, TALOS, combines some of the most advanced components in robotics with the knowledge PAL Robotics has accumulated, taking a step forward in robust humanoid robotics for real-world applications.

Next generation of high quality and robust humanoid robot

by Olivier Stasse, LAAS, CNRS, Toulouse

For almost fifteen years now, the humanoid HRP-2 has been a successful platform to develop and test new algorithms for motion generation and high-level behaviors. An important ingredient of this success is its high reliability and the integration of mechatronic constraints that simplify control. In the past years, several high-performance robots have been presented, such as Atlas, Schaft, Walk-Man and DRC-HUBO. This new generation of robots delivers more power by using state-of-the-art practices for technologies such as hydraulic and electric actuators. In this presentation, the pros and cons of this new generation of robots will be discussed. An emphasis will be put on the key technological ingredients necessary to pursue the development of next-generation controllers on a platform as robust and reliable as HRP-2 while delivering enough power for real applications. The new robot PYRENE, from the PAL Robotics TALOS series, will be presented.
