
Search and rescue with autonomous flying robots through behavior-based cooperative intelligence


Abstract

A swarm of autonomous flying robots is implemented in simulation to cooperatively gather situational awareness data during the first few hours after a major natural disaster. In computer simulations, the swarm is successful in locating over 90% of survivors in less than an hour. The swarm is controlled by new sets of reactive behaviors which are presented and evaluated. The reactive behaviors integrate collision avoidance, battery recharge, formation control, altitude maintenance, and a variety of search methods to optimize the coverage area of camera and heart-beat locator sensors mounted on the robots. The behaviors are implemented in simulation on swarms of sizes from 1 to 20 robots. The simulation uses actual location data, including post-disaster satellite imagery, real locations of damaged and inundated buildings, and realistic victim locations based on personal interviews and accounts. The results demonstrate the value of using behavior-based swarming algorithms to control autonomous unmanned aerial vehicles for post-disaster search and assessment. Three examples of algorithms that have been effective in simulation are presented.

Introduction

With little warning, a powerful earthquake shatters the quiet calm of a coastal city, followed shortly by the periodic waves of a brutal tsunami strike. Within minutes, local rescue workers rush to disaster sites, where they are greeted with a morass of broken buildings, piled cars, and splintered debris. Where once streets and fields stretched peacefully, now sit water-inundated lagoons filled with hazardous material. Mobility is extremely limited. Conditions are harsh; it is cold, night is soon to fall, and it is starting to snow. There is debris everywhere; it is hard even to walk.

The workers pull their truck up to a roadblock of overturned cars. Only a half dozen workers have made it to the site so far. But people are in the water, trapped in cars, trapped in buildings, and there is no time to wait. The rescue workers pull small, cheap quadcopter unmanned aerial vehicles (UAVs) out from the back of their truck. The workers are already cold and wet, thinking about finding casualties and preparing equipment. They just want to know where to find people, but how can they find anyone in this devastation?

Fumbling in their fireman’s gloves, the workers manage to start flipping the UAVs on. They pause for a moment, trying to remember how to make the things work. But they do not have to remember. As soon as they are turned on, the UAVs immediately launch and begin their search automatically. Remembering the apps on their mobile phones, the workers open up their “UAV Search” applications. Immediately, an overhead picture of the scenario appears on a map on their phones – it’s the camera feed from the first UAV.

While two of the workers are looking at their phones, a third and fourth are flipping on more UAVs. Three of the UAVs do not even turn on; they must have been damaged somehow. But it does not matter: seven were able to launch. One by one the UAVs fly up into the sky, flock together, and begin a systematic, targeted search of the inundated regions. At first, workers can only see the camera feeds from each of the UAVs. Able to see several feeds on their screen at once, the workers start looking for people. Motion catches their eyes - there, on the top of the parking garage – a group of 12, waving their hands. The workers radio in for a helicopter, targeting the garage.

Then, among the swarm of cheaper UAVs, a better-equipped one is launched. Then another. Suddenly, on the screen, red dots appear. From the “UAV Search” app, a list of locations appears on the left side, organized from highest to lowest probability of a find, by number of people. As the UAVs continue their search, more and more locations are added. The UAVs move in and out of formation as they locate survivors. One worker clicks on the top find. A snapshot of the camera feed at the time of the find is displayed, along with an arrow pointing from the launch site to the location, and a distance measurement. Immediately, the workers know which direction to go, how far to go, and what the site looks like from the air. Seeing that the location is a fallen building with no visible signs of a survivor, two rescue workers immediately set out in that direction, knowing the survivor is likely buried in the rubble.

The vignette above is a fictional “what-if” scenario based on real accounts of the 2011 Great Eastern Japan Earthquake and Tsunami (Editorial Office of the Ishinomaki Kahoku 2014). The purpose of the vignette is to share a vision of what could be a significant improvement to post-disaster search and rescue efforts by leveraging teams of autonomous flying robots.

Many sources indicate that the first 72 h of a rescue operation are the most critical (Erdelj et al. 2017) (Tait Communications 2012), though some studies reduce this window to 48 or even 24 h (Bartels et al. n.d.). According to analyses of the 2011 Tōhoku tsunami in Japan, the first 24 h were the most critical (Editorial Office of the Ishinomaki Kahoku 2014). Studies across more than 1000 search and rescue (SAR) missions show survival rates dropping exponentially during the first 18 h after the onset of SAR efforts and leveling off near 0% after 20 h (Adams et al. 2007).

Despite data showing that a concentrated effort to rescue trapped persons during the first few hours after a disaster would likely yield greater effect than any effort made later (Alley 1992) (Macintyre et al. 2006), these efforts are significantly hampered by lack of situational awareness (Editorial Office of the Ishinomaki Kahoku 2014) (Ochoa and Santos 2015) (Shimanski 2005). Indeed, the lack of situational awareness within this critical time frame is one of the most significant problems immediately following a natural disaster (Ochoa and Santos 2015) (Shimanski 2005) (Riley and Endsley 2004). Aid workers cannot rescue survivors if they do not know where survivors are.

Situational awareness, in this context, is the degree to which aid workers are aware of the state of the disaster environment. This state may include locations of survivors, wreckage, roads, weather, water and other hazards, or any other environmental factor that might affect the rescue effort. Situational awareness has been studied and applied in many different military, civil, commercial, and aerospace applications over the past several decades. Emergency services focus on situational awareness as a key factor in reducing risk and increasing safety, especially in disaster search and rescue situations (Shimanski 2005).

Rescue efforts are further hindered by lack of a trained, standing force of aid workers capable of handling the often-huge workload after a major disaster (Alley 1992). This is a challenging problem, as the logistical difficulties inherent to maintaining a highly trained standing workforce capable of handling mass-casualty natural disasters are numerous. The approach described in this article directly addresses these issues and, in particular, the situational awareness problem within the critical 20–24-h time frame using an automated, technical solution.

This article presents an approach to disaster search and rescue, data acquisition, and other types of post-disaster assessment using one or multiple heterogeneous autonomous UAVs. The robots work cooperatively as a swarm while controlled by behavior-based artificial intelligence (also called reactive AI). This research combines behavior-based artificial intelligence, swarm intelligence, pattern search theory, and existing disaster data into a theory of improved search and rescue through the use of autonomous flying robots, also called drones, Unmanned Aerial Vehicles (UAV), or Unmanned Aerial Systems (UAS).

Simulation results generated during the research show the approach described in this article to be both effective and time-efficient. The data show that a swarm of just five UAVs with standard parameters, equipped with the software and algorithms developed in this research, can consistently achieve a 90% standard sensor coverage rate over a 2 km² area in under 90 min, reaching nearly 99% coverage in under 2 h when operating in environments modeled after real tsunami disaster locations. The research shows that it is possible to search a wide area in a short time using a swarm of low-cost UAVs. The area can be searched continuously even if one or multiple UAVs in the swarm fail or crash. The swarm requires minimal operator input, freeing up rescue workers for other tasks. Performance using this method, measured as sensor coverage at a certain range over time, is improved compared to existing methods. Ultimately, this approach allows more data to be acquired faster, with less effort, than existing methods.

Actual data regarding the time it takes rescue workers to thoroughly search an area of 2 km² after a disaster without the use of UAVs varies greatly and is difficult to quantify. Moreover, it is impossible to say how many non-surviving victims may have survived, had they been found sooner. However, interviews suggest it can take days to search the most significantly affected areas (Editorial Office of the Ishinomaki Kahoku 2014). Although the use of individual, separately controlled UAVs is certainly an improvement over no use of UAVs, separately controlled UAVs require constant operator involvement and can still take many hours to achieve a high level of sensor coverage. Therefore, although a direct quantitative comparison to existing methods is difficult to make, qualitative assessment supports the conclusion that the approach described in this article is likely to improve access to post-disaster assessment data by a significant margin over existing methods. Whether existing methods take 6 h, 12 h, or 3 days to cover 90% of the disaster area, the 1.5-h benchmark achieved by the five-UAV swarm in our simulation is significantly faster than any of these measures.

Background

The emergence of complex traits and behaviors from interconnected sets of individual parts is a well-researched and documented phenomenon (Arnold and Wade 2015) (Koffka 1922) (Wiener 1948). The use of this phenomenon to create decentralized artificial intelligence (AI) in the control of robots was thoroughly described by Brooks (Brooks 1999). Brooks approaches artificial intelligence from the “bottom-up” by investigating the emergent intelligent patterns of robots equipped with individual, simple behaviors. These robots do not possess centralized control; rather, they react to stimuli (in the form of sensor input) in a variety of relatively simple ways. From these simple interactions, intelligent behavior emerges. This approach is known as behavior-based artificial intelligence. In behavior-based AI, a robot’s intelligence is based on a set of relatively simple, independent behaviors, rather than on a centralized control unit.

Brooks implements behavior-based artificial intelligence theory using an architecture he calls the “subsumption architecture.” In his work, robots’ behaviors “subsume” each other depending on the results of a variety of inputs, such as sonar and pressure sensor data. Only one behavior will be active at any given time. The active behavior varies based on sensor data. Brooks successfully implemented this architecture on a variety of applications requiring artificial intelligence, such as navigation and motor control (Brooks 1999). The subsumption architecture can be considered one implementation of behavior-based artificial intelligence, which is itself a broader concept.

The behavior-based approach was applied to research on swarm intelligence by Kennedy and Eberhart (Kennedy et al. 2001). Swarm intelligence is the resultant intelligent behavior of groups of independent heterogeneous entities behaving as a single system, such as a flock of birds, swarm of ants, or a hive of bees. Individually, the entities in the swarm may not have an understanding of the workings of the system as a whole. There may not be a single focal point of control over the swarm. However, in some way, the swarm still manages to work together as a single system to accomplish a goal. An ant swarm finds food sources, gathers food, and even builds complex structures at times. A flock of birds avoids predators and successfully migrates. Bees gather nectar for the hive over a wide range of conditions and environments. Theories of behavior-based, or reactive, intelligence apply to these swarms of entities. Swarms often function in an intelligent manner through the reactive behaviors implemented by their entities. Through the reactive behaviors of many individual entities, intelligence emerges (Kennedy et al. 2001).

Behavior-based formation control was applied to groups of robots by Balch and Arkin (Balch and Arkin 1998). They successfully integrated formation behaviors with navigation and hazard avoidance both in simulation and on a set of land-based ground vehicles. The robots’ speeds and turn directions were influenced through a system of votes based on sensory inputs and communication between robots in the group. Several other related papers on formation control for groups of robots were published around the same time frame (Balch and Arkin 1998).

Virágh and Vásárhelyi applied principles of flocking behavior to UAVs (Virágh et al. 2014) (Vásárhelyi et al. 2014). Virágh applied agent-based models to the control of flocks of UAVs, incorporating principles of time delay in communication as well as inaccuracy of onboard sensors. Two decentralized algorithms are proposed in their research: one based on the collective motion of a flock, the other based on collective target tracking. A principle of their research is to use a realistic simulation framework to study the group behavior of autonomous robots.

Swarm algorithms for controlling groups of UAVs are also under exploration for defense systems by the US Department of Defense (Frelinger et al. 1998). Their purposes range from combat search and rescue to ballistic missile defeat; many of the fundamental techniques used for targeting in defense systems are similar in principle to disaster search and rescue. In both scenarios, swarms of UAVs build upon cooperative behavior-based intelligence to efficiently locate one or multiple targets.

A team from the Naval Postgraduate School designed a swarm control framework called the Service Academy Swarm Challenge (SASC) architecture. The SASC architecture is used to control swarms of heterogeneous robots using the C++ and Python programming languages. SASC has undergone successful field tests deployed on swarms of fixed-wing and quadrotor UAVs.

Additionally, a programming language called Buzz has been specifically designed to facilitate heterogeneous swarm robotics (Pinciroli and Beltrame 2016). Buzz allows behaviors to be defined from the perspective of a single robot or from the overall swarm. This programming language is capable of running on top of other frameworks and can be extended to add new types of robots.

For the purpose of disaster search and rescue, behavior-based control of land-based robots was implemented in the HELIOS system (Guarnieri et al. 2009). The HELIOS system consists of five land-based, tracked robots used for urban search and rescue. Two of the robots are equipped with manipulators to perform physical tasks, and the other three are equipped with cameras and laser range finders and are utilized to create virtual maps of the environment. The robots can be used separately or as a team for more complex missions. The three robots equipped with laser range finders can move autonomously in unknown environments using a collaborative positioning system. The system as a whole requires control by a human operator.

The use of unmanned aerial systems in search and rescue is an area of high interest (Erdelj et al. 2017) (Molina et al. 2012) under consideration by a number of high profile organizations, including the American Red Cross, NASA, and the Japanese Ministry of Defense (American Red Cross 2015). Many efforts in this area have included the use of individually piloted UAVs, rather than autonomous swarms of robots (Erdelj et al. 2017). For example, the European CLOSE-SEARCH project includes the deployment of a single UAV with a ground-based control station to locate someone lost outdoors (Molina et al. 2012). The value of UAVs for information-gathering and situational awareness acquisition has been expressed by a number of sources (Erdelj et al. 2017) (Molina et al. 2012) (American Red Cross 2015). Researchers at Carnegie Mellon are investigating the use of swarms of tiny UAVs to map the interiors of buildings after disasters (Williams 2015). However, research into the use of swarms of autonomous UAVs to aid in locating survivors during exterior search and rescue appears to be minimal.

Although UAVs and Unmanned Ground Vehicles (UGVs) are already in use for disaster search and rescue (Erdelj et al. 2017) (Molina et al. 2012) (American Red Cross 2015), the use of swarms of UAVs optimized to autonomously cover a disaster area, streaming useful data to operators and each other while avoiding collisions, weaving over and around obstacles, and returning to charge batteries, has been largely absent. This absence seems to be due to a combination of air traffic regulations, laws restricting the use of UAVs, and technical limitations which, until recently, have been difficult to overcome.

Due to these challenges, the control of autonomous swarms of UAVs is a relatively new phenomenon. The Naval Postgraduate School in Monterey, California, flew a swarm of 50 UAVs controlled by a single operator in 2015 as part of their Zephyr system. At the time, this event was believed to have set the world record for the most UAVs under single-operator control (Hambling 2015). The use of swarms of UAVs to aid in post-disaster assessment was imagined in 2016, in a report describing a human-machine interface to control the UAV swarm.

The Orchid disaster response system under development by the UK appears to be the closest to the approach described in this article (Ramchurn et al. 2016). It uses decentralized control of a swarm of UAVs to enhance disaster rescue efforts. The Orchid system is designed to interpret crowd-sourced data, building a picture of a situation and providing recommendations for resource allocation. In contrast, this article describes behavior sets and algorithms used to control UAVs to maximize sensor coverage over areas of land and water. This article also presents the results of simulated time trials using swarms of UAVs. The UAVs are controlled by three different behavior sets to search a realistically designed post-disaster location. Data of this particular nature does not appear to be present in the literature.

Distributed coordination is key to enhancing the scope and level of detail of post-disaster assessment. By distributing the workload among many units, the amount of work and the time it takes to do the work is significantly reduced. This also allows scaling the system to larger or smaller areas by simply adding or subtracting units from the swarm. Controlling these individual units through behavior-based artificial intelligence allows them to react successfully to a variety of challenging, changing situations with minimal or no operator input. The behavior-based method of robot control has been a staple of robotics for the last several decades and has a proven track record of success.

Recent technological developments have made modern UAVs more capable and cost-effective, enabling the use of coordinated swarms at reasonable cost. UAVs can be equipped with built-in hover and maneuver capabilities as well as high definition (HD) and/or infrared (IR) cameras, wireless capabilities to stream live data, and the ability to carry small payloads or additional sensors. This combination of traits has now enabled the practical use of swarms of small, cost-effective UAVs for post-disaster assessment. In order to propel these efforts forward, it is important to demonstrate the significant time-saving effects that the use of such swarms can produce in post-disaster situations. Furthermore, developing and assessing different algorithms to control the swarm as a single, distributed system while also maintaining the individual capability of each separate unit in the swarm is key to the success of this type of system on the whole.

The research described in this article applies the concepts of behavior-based and swarm-based intelligence to control groups of UAVs to locate survivors in disaster search and rescue scenarios. By using data gathered from town records, in-person interviews, survey data, and site visits, several scenarios were built out that depict the post-tsunami environment in 2011 Sendai City, Japan, with a large degree of accuracy. The heights and placement of structures are accurate, and the locations and behaviors of survivors within the scenario are based on real accounts (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (Sato 2015) (The Center for Remembering 3.11 2015) (Tohoku Regional Development Association n.d.).

The algorithms used in this research allow the UAVs to dynamically respond to changes in the environment, as well as unknown scenarios and unforeseen circumstances. For example, sensors can malfunction and the UAVs will still retain some measure of utility. A building can be “dropped” in front of a UAV in the simulation, and the UAV will successfully navigate around or over the building, then continue its task.

A dynamically changing environment is a key part of a disaster scenario. Unless injured or safe, survivors do not often stay still. People move to higher floors in buildings. They move towards lights, sounds, higher ground, helicopters, and safety (Editorial Office of the Ishinomaki Kahoku 2014). The weather gets cold, it may start to snow or rain, and the sun may go down (Editorial Office of the Ishinomaki Kahoku 2014). Night falls, day breaks, visibility changes. Any rescue approach needs to have the flexibility to accommodate these dynamic changes and respond to unknown environments. Our approach demonstrates this flexibility.

Hypothesis

A swarm of standard, commercially available autonomous UAVs controlled by behavior-based, cooperative artificial intelligence software may significantly improve the data set of known victim locations during disaster search and rescue efforts, with minimal operator input required. For the purposes of this research, several requirements are imposed on the algorithm sets used to test this hypothesis. The intent of these requirements is to provide a practical, flexible system:

  • Performance—Gather more data faster

      • Achieve a simulated standard sensor coverage (30 m range) of 90% across 2 km² within 24 h.

      • Achieve a simulated precise sensor coverage (15 m range) of 90% across 2 km² within 24 h using a simulated, miniaturized FINDER sensor.

  • Scalability—Support any number of robots

      • Support an arbitrary number of UAVs in the swarm. Due to computational limits during simulation executions, a maximum of 20 UAVs was used in this research.

  • Heterogeneity—Support mixed groups of robots and sensor configurations

      • Different capabilities and sensor configurations supported within the same swarm.

      • Different UAV types and models supported within the same swarm.

Approach

Behavior-based artificial intelligence

Behavior-based artificial intelligence is the concept that intelligence can emerge through the interactions of simple, individual behaviors lacking centralized control. Combining several well-defined but separate behaviors can result in the emergence of intelligent systemic behavior. When used in software and robotics, this approach can provide a high level of robustness, as failed behaviors can be ignored while default behaviors are activated (Brooks 1999). The division of logic between behavior modules can allow the system to scale to a high level of complexity without imposing an unmanageable cognitive load on software developers.

Although there are many ways to design robust systems, systems designed with a behavior-based approach to AI are well-suited to reacting to environments dynamically based on sensor inputs without prior knowledge (Brooks 1999). These properties are highly desirable in a post-disaster assessment system operating in a volatile environment where the failure of individual parts of a system may be common due to hazardous external factors.

Proposed technique

To enhance post-disaster assessment, search and rescue, and information gathering, we propose a technique that combines behavior-based artificial intelligence with cooperative swarm behavior. Individual units of a swarm equipped with behavior-based AI are well-suited to perform cooperative tasks (Kennedy et al. 2001): their simple behaviors combine to produce individual unit behavior, and these unit behaviors in turn combine to produce collective swarm intelligence (systemic behavior).

We implement behavior-based AI and cooperative behavior in a simulated swarm of UAVs to search for disaster survivors in a post-disaster environment. We measure the effectiveness of the approach by recording the detection rates over time of the survivors by the swarm. Our goal is to reach a 90% detection rate in under 24 h in simulation.

This approach can be applied to any sort of information gathering and is not limited to just search and rescue. However, using search and rescue gives a direct, tangible way to understand the benefits and effectiveness of the approach.

Proposed algorithms and control methods

To enhance survivor detection through the use of UAV swarms, several control methods are considered (Fig. 1). These methods are all implementations of behavior-based AI. Each control method, also referred to as a method or an algorithm, is simply a set of ordered behaviors conceived of and developed during the research. The order of the behaviors within each method is critical as it determines the priority level at which they are executed. As behaviors can be grouped and ordered in many different ways, it is important to figure out which set of behaviors, and in which order, is most effective. The three sets of behaviors (methods) were selected based on the anticipated effectiveness of each set of behaviors as determined by the researchers.

  • Standard method—UAVs all follow the same pattern.

  • Spiral method—Upon locating a “critical mass” concentration of survivors, a single UAV moves outward in a spiral pattern, then returns to previous search method.

  • Scatter method—Each UAV simultaneously moves to a different location in the search pattern.

Fig. 1 UAV search methods. Actual patterns are more complex; the patterns depicted here are simplified for clarity. Blue dots are UAVs, gray areas are destination targets, and the red triangle is a concentration of survivors. From left to right: standard, spiral, and scatter

The behaviors in the behavior-based software architecture used in this research are all original and were conceived of and created by the researchers. They are implemented as separate, named, plug-and-play software modules. Each of the three control methods consists of some subset of the following 12 behavior modules. These modules are described in detail in the “Method implementation” section and briefly here:

  1. Launch—Take off from a stationary position

  2. Avoid—Avoid collisions with buildings and obstacles

  3. Climb—Climb over obstacles

  4. Recharge—Recharge batteries

  5. Height—Maintain a certain height above the ground, buildings, or large objects

  6. Spiral—Move out in an expanding spiral

  7. Form—Maintain distance between other UAVs

  8. Repel—Move away from other UAVs when too close

  9. Seek—Move directly to a specified GPS location

  10. Waypoint—Move towards a preset pattern of waypoints

  11. Scatter—Move individually towards an unallocated waypoint among a set

  12. Wander—Choose a random location and move towards it

These behaviors were conceived based on deductive reasoning, literature search (Brooks 1999) (Kennedy et al. 2001), and extensive trial and error in simulation. Each behavior is assigned a priority. The UAV control software assigns priorities according to the order in which the behavior modules are loaded into the software. Earlier behaviors, when triggered, prevent later behaviors from occurring at the same time. That is to say, if the avoid behavior is active at a given time, no behaviors at a lower priority than avoid in the list will be activated (such as height or recharge). A given time here refers to a given tick in the software, approximately 15–16 ms, consistent with the tick duration used in personal computers running Microsoft Windows, Apple macOS, or Linux, and in mobile operating systems used on UAVs such as Google Android and iOS.

The UAV re-checks its sensor input at a rate of roughly 60 Hz (60 frames per second), or every 16 ms; thus, reactions that result in the activation of different behaviors occur quickly and often blend together, appearing integrated to the observer. Perhaps this type of behavior is even at the core of evolved intelligence (Brooks 1999) (Kennedy et al. 2001).

The order of behaviors is critically important to the overall operation of the system. For example, if the height behavior were prioritized over the recharge behavior, the robot would never be able to charge its batteries. Every time it tried to land at the battery charging station, the height module would make it climb again! If the avoid behavior were ordered below seek, the robot would run into obstacles and likely crash while moving to its destination. Thus, the emergent intelligence of these robots is a product of the careful, simultaneous consideration of both wholes and parts (Arnold and Wade 2015). The desired result emerges from determining what each behavior should do in the context of the others and how the behaviors should be prioritized as a whole system.

A major advantage of this approach is software flexibility: in the system designed for this research, behavior modules can be coded and inserted by outside parties. A simple configuration file determines their load order (priority), and they can be added to the system by simply placing the compiled behavior module in the Behaviors folder on the host computer’s hard drive. In this way, the simulation system is extremely flexible, allowing all sorts of behaviors and orderings to be tested without any changes to the base system.
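To make this concrete, the listing below sketches one way such a priority-arbitrated, plug-and-play behavior system could be structured in Java (the language DroneLab behavior modules are written in). This is a minimal illustration under assumed interfaces; the Robot, Sonar, Behavior, and BehaviorArbiter names are our own and do not represent the actual DroneLab API.

```java
import java.util.List;

/** Illustrative subset of a UAV's sensor/actuator API (assumed). */
interface Robot {
    Sonar sonar();
    double[] position();                      // {x, y} in meters
    double heading();                         // degrees
    double speed();
    double accelerationRate();
    long clockMs();
    double distanceTo(double[] point);
    void decelerate();
    void accelerateFullSpeedOnHeading(double headingDeg);
    // ...further actuators (ascend, descend, seek, broadcast, etc.)
}

/** Fused picture of the five sonar sensors (assumed). */
interface Sonar {
    boolean collisionImminent();
}

/** One plug-and-play behavior module. */
interface Behavior {
    boolean isActivated(Robot robot);         // activation condition met?
    void execute(Robot robot);                // one tick's worth of actions
}

/** Runs the highest-priority activated behavior once per ~16 ms tick. */
final class BehaviorArbiter {
    // Priority = load order from the configuration file;
    // the behavior at index 0 subsumes everything after it.
    private final List<Behavior> behaviors;

    BehaviorArbiter(List<Behavior> behaviorsInLoadOrder) {
        this.behaviors = behaviorsInLoadOrder;
    }

    void tick(Robot robot) {
        for (Behavior b : behaviors) {
            if (b.isActivated(robot)) {
                b.execute(robot);
                return;                       // subsume lower priorities
            }
        }
    }
}
```

Under this structure, reordering the configuration file reorders the list handed to the arbiter, which is all that is required to change a control method.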

Method implementation

The details of each of the behaviors and control methods are explained in this section. It is important to note that UAVs are continuously broadcasting their own locations over a wireless network and receiving and processing the locations of other UAVs.

Launch—Take off from a stationary position

Activation: Robot is not flying, is within 10 m of deployment location, and has at least 99% battery life.

Actions:

  • Begin ascending. Note that nothing more is needed; once the robot is flying, the height module will take over and bring it to the correct altitude.

Results: Robot will ascend from a previously landed position.

Avoid—Avoid collisions with buildings and obstacles

Activation: Potential collision detected based on speed, angle of movement, acceleration, and location of nearby objects as reported by sonar sensor.

Actions:

  • If moving faster than acceleration rate, decelerate.

  • If moving slower than acceleration rate, accelerate at full speed on a heading 200° from the current heading. This essentially turns the robot in the opposite direction of the imminent collision, offset by a slight 20° difference. The 20° offset prevents the robot from moving straight backwards and then forwards again into the same situation once the previously executed behavior takes over.

  • If, after 12 s, the robot is still within 2 m of the original location, change the deflection from 200° to 160° (a 20° angle on the other side of the opposite direction).

Results: Robots will “bounce around” objects in their way.
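As an illustration, the deflection logic above might be written as follows, reusing the assumed Behavior and Robot interfaces from the earlier sketch; all accessor names are hypothetical.

```java
/** Sketch of the avoid behavior's deflection logic (illustrative only). */
final class AvoidBehavior implements Behavior {
    private double[] origin;   // position when this avoidance episode began
    private long startMs;      // time when this avoidance episode began

    @Override
    public boolean isActivated(Robot r) {
        boolean imminent = r.sonar().collisionImminent();
        if (!imminent) origin = null;          // reset between episodes
        return imminent;
    }

    @Override
    public void execute(Robot r) {
        if (origin == null) {
            origin = r.position();
            startMs = r.clockMs();
        }
        // Moving faster than the acceleration rate: bleed off speed first.
        if (r.speed() > r.accelerationRate()) {
            r.decelerate();
            return;
        }
        // Default deflection is 200° (opposite direction plus 20°). If the
        // robot is still within 2 m of where the episode began after 12 s,
        // mirror the deflection to 160° (20° on the other side).
        boolean stuck = r.clockMs() - startMs > 12_000
                && r.distanceTo(origin) < 2.0;
        double deflection = stuck ? 160.0 : 200.0;
        r.accelerateFullSpeedOnHeading((r.heading() + deflection) % 360.0);
    }
}
```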

Climb—Climb over obstacles

Activation: An obstacle is closer than 5 m as detected by sonar sensor.

Actions:

  • Accelerate upwards at maximum acceleration until the obstacle is no longer detected horizontally from the robot.

  • Stabilize horizontal movement during upwards acceleration.

Results: As a robot nears an obstacle, it will ascend up over the obstacle, where the height module then takes over and brings the robot to the appropriate height above the obstacle.

Recharge—Recharge batteries

Activation: Less than 5 min of battery life left.

Actions:

  • Move directly to deployment location at 75% of maximum speed.

  • If within 3 m of deployment location, reduce speed until stabilized, then land.

Results: When a robot’s battery becomes low, it flies directly back to the deployment location and lands.

Height—Maintain a certain height above the ground or large objects

Activation: Closest object below the robot is six or more meters away or four or fewer meters away.

Actions:

  • If closest object is six or more meters away, descend at maximum acceleration.

  • If closest object is four or fewer meters away, ascend at maximum acceleration.

Results: Robots tend to maintain the desired height above objects below them.
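The height rule is a simple hysteresis band. A minimal sketch of the decision, in the same illustrative style as the earlier listings:

```java
/** Height behavior decision (sketch). */
final class HeightRule {
    /** +1 = ascend, -1 = descend, 0 = inactive. */
    static int heightCommand(double metersToClosestObjectBelow) {
        if (metersToClosestObjectBelow >= 6.0) return -1;  // too high: descend
        if (metersToClosestObjectBelow <= 4.0) return +1;  // too low: ascend
        return 0;  // within the 4-6 m dead band: behavior stays dormant
    }
}
```

Because the behavior activates only outside the 4–6 m band, it stays dormant at acceptable heights and does not starve the lower-priority behaviors beneath it in the list.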

Spiral—Move outwards in an expanding spiral

Activation: Four or more survivors detected within a 10-m radius of each other.

Actions:

  • Move in an expanding spiral from the center point of the located survivors until reaching a 100-m radius.

Results: This behavior can be equated to the “expanding square” visual search pattern (Washington State Department of Transportation 1997) but is implemented as an expanding circle instead of a square. When the UAV detects a concentrated group of survivors, it begins to spiral outwards from the center location of the survivors. As survivors often congregate in larger groups and move towards groups, it is theorized that this behavior will lead to the discovery of additional survivors that may not have been able to reach the detected group.
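One simple way to realize this expanding circle is to generate seek targets along an Archimedean spiral, r = spacing × θ / 2π. The sketch below is illustrative; the 10 m ring spacing and 15° angular step are hypothetical values chosen so that adjacent rings overlap a roughly 15 m sensor radius.

```java
import java.util.ArrayList;
import java.util.List;

/** Generates waypoints along an outward spiral around (cx, cy). */
final class SpiralPath {
    static List<double[]> waypoints(double cx, double cy,
                                    double ringSpacingM, double maxRadiusM) {
        List<double[]> pts = new ArrayList<>();
        double step = Math.toRadians(15);       // angular step between points
        for (double theta = step; ; theta += step) {
            double r = ringSpacingM * theta / (2 * Math.PI);
            if (r > maxRadiusM) break;          // stop at the outer radius
            pts.add(new double[] { cx + r * Math.cos(theta),
                                   cy + r * Math.sin(theta) });
        }
        return pts;
    }
}
// Example: SpiralPath.waypoints(x, y, 10.0, 100.0) traces ten rings, 10 m apart.
```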

Form—Maintain 50 m ± 5 m distance between other robots

Activation: Closest robot is either within 45 m or more than 55 m away.

Actions:

  • If within 45 m, accelerate in opposite direction of closest robot at maximum acceleration rate.

  • If more than 55 m away, accelerate towards closest robot at maximum acceleration rate.

Results: This is a type of flocking behavior (Kennedy et al. 2001). Robots tend to group up and stay together in large flocks. Small groups can split off, but as they move near each other, they tend to rejoin the larger group.
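A sketch of the spacing rule, with positions taken from the broadcast locations mentioned earlier (illustrative only):

```java
/** Form behavior spacing rule (sketch). */
final class FormRule {
    /** Unit acceleration direction relative to the closest robot,
     *  or null when spacing is inside the 50 m ± 5 m band (inactive). */
    static double[] formAcceleration(double[] self, double[] closest) {
        double dx = closest[0] - self[0], dy = closest[1] - self[1];
        double d = Math.hypot(dx, dy);
        if (d >= 45.0 && d <= 55.0) return null;     // in band: hold formation
        if (d == 0.0) return new double[] { 1, 0 };  // coincident: pick any direction
        double sign = (d < 45.0) ? -1.0 : 1.0;       // too close: away; too far: toward
        return new double[] { sign * dx / d, sign * dy / d };
    }
}
```

As with height, the 45–55 m dead band keeps the behavior inactive when spacing is acceptable, letting lower-priority behaviors such as seek or waypoint drive the UAV.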

Repel—Stay at least 10 m away from other robots

Activation: Closest robot is within 10 m.

Actions:

  • Accelerate in opposite direction of closest robot at maximum acceleration rate.

Results: This behavior prevents robots from moving too close to each other in the absence of a flocking behavior such as form.

Seek—Move directly to specified GPS location

Activation: Seek location specified, and robot is more than 10 m away.

Actions:

  • Accelerate towards specified location at maximum acceleration rate.

Results: Robots can be ordered to move directly to specific locations.

Waypoint—Move towards a preset pattern of waypoints

Activation: Set of search waypoints exists.

Actions:

  • Accelerate at maximum rate towards current waypoint.

  • Once waypoint is within camera detection range, broadcast completion of waypoint over wireless network and set next waypoint as current waypoint.

Results: As the UAVs act as a single entity, they “compete” to reach the next waypoint. No single UAV is in charge, and there is no “leader” UAV. Any UAV that reaches the next waypoint will send a message to all other UAVs declaring that the waypoint has been reached. Upon receipt of this message, the UAVs begin to move to the next waypoint. Thus, as a single system, the UAVs can be assigned one set of waypoints, and they will effectively explore every waypoint as a swarm. In essence, a waypoint tells the swarm: ensure that some part of the swarm, any part, covers this waypoint. In the simulations used, UAVs communicated their waypoint information via Wi-Fi; consequently, delays or long distances in the Wi-Fi network could affect the swarm’s behavior as a whole.

The waypoint search used in this research resulted in a version of a search called “parallel track” or “parallel sweep” (Washington State Department of Transportation 1997) performed as a swarm. Also, when this behavior combines with avoid, the UAVs perform a variation of the “contour search” (Washington State Department of Transportation 1997) because they automatically avoid collisions. These are some of the interesting emergent properties of the interactions between simple behaviors.
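A sketch of this shared-progress protocol for a single UAV follows; the WaypointDone message, Network interface, and method names are assumptions for illustration, not the simulator's actual messaging API.

```java
import java.util.List;

/** Broadcast message: "waypoint `index` has been covered" (assumed type). */
record WaypointDone(int index) {}

interface Network { void broadcast(WaypointDone msg); }

/** Shared-progress waypoint logic for one UAV in the swarm. */
final class WaypointTracker {
    private final List<double[]> pattern;  // preset search waypoints
    private int current = 0;               // swarm-wide index, synced by broadcasts

    WaypointTracker(List<double[]> pattern) { this.pattern = pattern; }

    boolean active() { return current < pattern.size(); }

    /** Returns the point to accelerate toward this tick, broadcasting
     *  completion and advancing when the waypoint enters camera range. */
    double[] target(double[] myPosition, double cameraRangeM, Network net) {
        double[] wp = pattern.get(current);
        double d = Math.hypot(wp[0] - myPosition[0], wp[1] - myPosition[1]);
        if (d <= cameraRangeM) {
            net.broadcast(new WaypointDone(current));  // tell the swarm
            current++;                                 // advance locally
        }
        return active() ? pattern.get(current) : null;
    }

    /** Invoked when another UAV reports a waypoint complete. */
    void onWaypointDone(WaypointDone msg) {
        current = Math.max(current, msg.index() + 1);
    }
}
```

Because any UAV's broadcast advances every UAV's index, no leader is needed, and a lost or delayed Wi-Fi message degrades the swarm's progress gracefully rather than halting it.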

Scatter—Move towards a pre-defined search pattern waypoint which is not already allocated to another UAV

Activation: Set of search waypoints exists.

Actions:

  • Accelerate at maximum rate towards current waypoint.

  • Once waypoint is within camera detection range, broadcast completion of waypoint over wireless network and set next waypoint as current waypoint. Next waypoint must not be the current waypoint of any other UAV in the system.

Results: The swarm of UAVs scatters across the disaster area, searching multiple different locations simultaneously.

Wander—Choose a random location and move towards it

Activation: Always. Note that this behavior is rarely activated in a fully functioning system because it is almost always subsumed by some other behavior.

Actions:

  • If location sensor exists and is functioning, choose a random wander location 100 m away and accelerate towards it at half speed.

  • If within 10 m of current wander location, choose new location.

  • If a location sensor does not exist or is malfunctioning, set a random target heading and proceed at half speed.

  • After traveling for 1 min at current heading, change to a different heading.

Results: This behavior is included for robustness. Wander is a default behavior in case other behaviors crash or fail to execute for any reason. If all else fails, a UAV will try to wander to a new location which may have different sensory inputs and/or different terrain, facilitating a better result.

Table 1 shows the behaviors used by each control method. Although these methods may appear similar in that they use many of the same behaviors, most of those behaviors are a necessary foundation for the successful function of any higher-order robot behavior. A living being must eat, drink, and breathe before she can do more complex tasks. In the same way, our UAVs must launch, avoid obstacles, and maintain height before they search for disaster survivors. The essential, method-defining behaviors are the ones included in, or left out of, each method.

Table 1 Behaviors in each control method

Standard method

A swarm of UAVs operating the standard method behavior set (Fig. 2) will launch and then proceed to the first waypoint in its search pattern (Fig. 3). Along the way, the UAVs maintain appropriate distances between each other by continuously broadcasting their locations over a wireless network, avoid collisions with obstacles by maneuvering around or climbing over them, and maintain proper height. When the first UAV in the swarm reaches the current waypoint location, it broadcasts this data to the rest of the swarm. As the UAVs receive this data, they begin moving towards the next waypoint in the search pattern. In some cases, UAVs on the far side of the swarm may already be close to the new waypoint. The result is that a large swarm of UAVs may “zig-zag” between locations in a way that can be efficient, whereas a smaller swarm of just one, two, or three UAVs may actually fly back and forth between the waypoints. Both flight patterns maximize coverage area and arise from the same behavior software, though an observer will notice significant differences in the actual flight paths of the UAVs and may conclude (incorrectly) that the swarms are using different artificial intelligence software.

Fig. 2 Standard method behavior set

Fig. 3 Standard method showing the paths of three UAVs launched from the blue rectangle on the center left. Red, yellow, and green dots are survivors in different states of discovery. In this scenario, UAVs moved in a search pattern across the area starting in the northwest and ending in the southeast. Photograph by the Geospatial Information Authority (GSI) of Japan (Geospatial Information Authority of Japan 2011)

Upon a low battery indication, a UAV will break from formation and return to its deployment location, land, and recharge its batteries. When the recharge is complete, the launch behavior will detect a full battery and automatically activate. The robot will then launch and proceed to the next waypoint, likely meeting up with the rest of the swarm along the way.

While following this method, it is possible and likely that robots will break into smaller groups as they recharge their batteries and return to the field. The design and architecture do not prevent or discourage this, and it is an emergent result of the complex interactions of simple behaviors.

Spiral method

The spiral method uses the standard method but implements an additional behavior: spiral, which is inserted after height and before form in the behavior priority list.

The spiral method behavior set (Fig. 4) operates similarly to the standard method, but differs in one significant way. While engaging in the standard method search, when a UAV’s spiral behavior is activated through detection of a concentration of survivors, the UAV “breaks away” from the group and performs a spiral maneuver out to a 100-m radius (Fig. 5). After completing this maneuver, the robot returns to its regular formation within the group. Within the software architecture, the only requirement to implement this method is the insertion of the spiral behavior module in the correct place in the behavior list. No other changes need to be made. That such a change can be made so simply is one of the advantages of the behavior-based artificial intelligence paradigm.

Fig. 4 Spiral method behavior set

Fig. 5 Spiral method showing the paths of three UAVs. As with the standard method, UAVs launched from the blue rectangle. Note the circular pattern in the northeast corner as a UAV located the group of survivors (green dots) on top of the elevated building and performed the spiral behavior while the others continued the search. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011)

The spiral method accounts for evidence gathered during disaster search and rescue (Editorial Office of the Ishinomaki Kahoku 2014) (A. E. S. M. Staff Member 2017) showing that survivors are likely to group together following a disaster. If a few people are found together, it is likely that more are present as well. Spiraling outwards from the locations of the first few people found is likely to result in the discovery of new survivors.

The distressed person density information could be used by rescue workers in many ways, such as determining where and when to send rescue vehicles such as helicopters or boats. Also, the spiral method may result in the discovery of distressed persons attempting to unite with the group, and coming close, but failing to cover the last bit of distance due to insurmountable obstacles, as happened during the 2011 tsunami (Editorial Office of the Ishinomaki Kahoku 2014).

Scatter method

The scatter method differs from standard and spiral methods significantly in that it sends each UAV to a different point in the search pattern. The waypoint behavior module is removed completely and replaced with a scatter module. Also, the form module is replaced with the repel module.

The scatter method (Fig. 6) represents a significant departure from both the standard and spiral methods. The method is still cooperative, but rather than operating as a single flock in which all robots seek the same point and switch to the next when any one UAV reaches it, each UAV under the scatter method has its own destination point, different from all the others (Fig. 7). Theoretically, this allows the swarm to spread over a larger area in a shorter time.

Fig. 6 Scatter method behavior set

Fig. 7 Scatter method showing the paths of three UAVs. As with the standard method, robots launched from the blue rectangle. However, each UAV proceeded to a different location in the search pattern, scattering the swarm across the area. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011)

Destinations are selected based on a staleness factor: points that have not yet been reached by the swarm as a whole are highest priority, points that were visited far in the past are slightly lower, and points that have been visited recently are lowest in priority. If one UAV is already seeking a point, a different point is chosen. If all points are already chosen, the UAV chooses an optimal point based on the staleness factor. Using this method, the swarm of UAVs will effectively scatter across the disaster area, searching multiple different points simultaneously. A minimal sketch of this selection rule follows.
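The sketch below is illustrative; the bookkeeping (visit timestamps and claims learned from other UAVs' broadcasts) is an assumption about how the staleness factor could be stored, not the simulator's actual data structures.

```java
import java.util.Map;
import java.util.Set;

/** Staleness-based destination choice for the scatter method (sketch).
 *  Never-visited waypoints score as stalest; otherwise older visits win.
 *  Waypoints claimed by other UAVs are skipped unless all are claimed. */
final class ScatterSelector {
    static int choose(int waypointCount,
                      Map<Integer, Long> lastVisitMs,   // absent = never visited
                      Set<Integer> claimedByOthers) {
        int best = stalest(waypointCount, lastVisitMs, claimedByOthers);
        if (best >= 0) return best;
        // Every waypoint is already claimed: fall back to the stalest overall.
        return stalest(waypointCount, lastVisitMs, Set.of());
    }

    private static int stalest(int n, Map<Integer, Long> lastVisitMs,
                               Set<Integer> skip) {
        int best = -1;
        long bestTime = Long.MAX_VALUE;
        for (int id = 0; id < n; id++) {
            if (skip.contains(id)) continue;
            long t = lastVisitMs.getOrDefault(id, Long.MIN_VALUE);
            if (t < bestTime) { bestTime = t; best = id; }
        }
        return best;
    }
}
```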

Although in theory the scatter method might appear to be a better option than standard or spiral methods given that different UAVs are able to explore different locations in parallel, in practice, a swarm of UAVs flocking together significantly increases the probability of survivor detection. Sensor range is limited, and a group of UAVs flocked together maintaining a certain distance from each other effectively forms a large, single system with a combined, redundant sensor range. Without flocking, a single UAV’s sensor range is limited; therefore, as locations are explored separately, the search pattern must necessarily be quite complex or contain a large number of waypoints to approach the same level of effectiveness as the other methods. In this case, a hybrid method between scatter and spiral could be more effective.

Performance analysis

Assumptions

While developing the simulation software used in this research, several assumptions were made about the UAVs:

  • Programmable—The UAVs are programmable in that they are controlled by modifiable software and can receive commands to change speed and direction.

  • Quadcopter—UAVs are standard multirotor helicopters lifted and propelled by four rotors.

  • Stability control—UAVs have built-in stability control that allows them to hover stably in one location or can be easily equipped with equivalent Commercial Off the Shelf (COTS) software to provide this effect.

  • Network unavailable—Due to loss of infrastructure and other inherently challenging circumstances during most disaster search and rescue situations, it is assumed that a commercial Internet network may not be available. The UAVs will set up their own ad hoc network to communicate with each other. This network is not dependent on existing network infrastructure.

Parameters

The simulation software allows the selection of different commonly available off-the-shelf UAVs. It also allows UAV parameters to be customized. For the scenarios used in this research, Table 2 shows the parameters that were used in the simulation based on current commercially available data.

Table 2 UAV parameters used in simulation

Sensors and equipment

In addition to the software behavior modules, UAVs are provided with simulated sensors and equipment whose values can be customized (Table 3). Collision avoidance depends on sonar sensors. One sonar sensor is mounted down-facing, while the others face outward from the left, right, forward, and rear sides of the UAV. The sonar data is fused together to form a single sonar sensor picture. Formation and flocking behavior depends on both the sonar sensors and the GPS. Communication between UAVs, and therefore cooperative swarm behavior, depends on the Wi-Fi HD communicator. UAVs determine their own locations, and, by extension, which direction to travel to reach a waypoint, using the GPS sensor. The behavior modules are highly dependent on the input from these sensors.

Table 3 Sensors and equipment used in simulation

These sensors can be turned off or on, or “broken,” in the simulation to explore how a UAV will behave in different practical situations. The range and effectiveness of the sensors can also be adjusted. This allows a robust system to be designed prior to actual deployment and hardware testing.

The UAV’s camera is mounted in a down-facing position on the bottom of the chassis. Although a camera radius of just 15 m may seem small, the intent of this range is to account for difficult environmental conditions such as fog, snow, rain, and debris, which may interfere with a camera’s range of vision. A 15-m radius provides a conservative estimate that likely falls within the effective parameters of a wide range of commercially available cameras and sensors.

Simulation scenario

The environment simulated for this research was the town of Arahama, in Wakabayashi Ward, Sendai City, Miyagi Prefecture, Japan, one day after the 2011 Great Eastern Japan Earthquake and Tsunami. This location was chosen because it was one of the hardest hit by the tsunami, and a great deal of data were available on the town, including satellite imagery, population, physical layout, the timetable of the tsunami, search and rescue data, personal interviews, and locations of survivors. Within this environment, three different patterns were considered when setting the locations of distressed persons within the simulation (Fig. 8):

  • Random—Distressed persons were scattered at random across the search area.

  • Congregated—Distressed persons were concentrated at likely rescue locations according to data from a variety of sources. For example, schools, parking decks, and other tall buildings contained more survivors while low areas contained few, if any (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017).

  • Mixed—Half of the distressed persons were congregated and the other half random.

Fig. 8 Survivor distribution patterns. Gray boxes are buildings, red dots are survivors. From left to right: random, congregated, mixed

The mixed pattern was selected and used for our research. Although the congregation pattern is based on real data acquired at Arahama (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (The Center for Remembering 3.11 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017), randomly scattered survivors should not be discounted, as it may be that such survivors were simply not found during the actual rescue efforts. Therefore, the mixed pattern is the best fit for this research. Practical algorithms should show greater effectiveness on congregation-heavy patterns than on random patterns.

For the purpose of the simulation, satellite imagery of the actual location was acquired (Figs. 9 and 10). Ideally, a photo taken immediately after the tsunami strike would have been used; unfortunately, such imagery was not available. The image used was taken on March 12, 2011, the day after the tsunami strike. Building locations were placed according to the imagery and checked against height data as well as cross-referenced against actual photos and on-site interviews with local residents.

Fig. 9 Satellite photo of the town of Arahama taken on March 12, 2011 (Geospatial Information Authority of Japan 2011). Left is the full photo; right is a sample 300 m² sub-section built in the DroneLab environment builder, showing red buildings and red-dot survivors. The large buildings in the upper left corner of the right photo are the ruins of the Sendai Arahama Elementary School, a primary evacuation site during the tsunami. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011)

Fig. 10 Left is the northwest corner of the satellite photo in Fig. 9; right is the same area after build-out using the DroneLab environment builder, with buildings shown as red rectangles and survivors as red dots. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011)

Validation of the disaster area model

The model of the disaster area was built by overlaying structural data on the satellite photos shown above, resulting in a high level of face validity. The heights of the buildings were determined by on-site survey and measurement. As time did not allow for all buildings to be measured and some have in fact been demolished since 2011, buildings that could not be directly measured were assigned height data based on their types, locations, and designs. For example, in a row of similar houses, the height of a single house may have been measured and then used for all similar houses.

To accurately represent survivor distribution in the simulation model, data from a variety of sources were used. These data can be collated to show a pattern in which groups of certain numbers of survivors gathered at certain places within the town (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (The Center for Remembering 3.11 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017).

As no data are available on the locations of victims lost to the tsunami in Arahama, a random distribution pattern was chosen to represent the remainder of the town’s population. Combining the real congregation data with a random distribution for the remaining survivors, scaled to the total population of the town, is considered a reasonable way to represent survivor locations in the simulation model given the available data.

The model was validated by comparing the locations and heights of buildings, numbers of survivors, and congregated groups of survivors to satellite photos, aerial photos, and records obtained in Arahama detailing the events during and immediately following the tsunami. The resultant simulation model was used as a base for the simulations performed during this research.

Results in simulation

Results were generated using the DroneLab Unmanned Aerial System (UAS) simulation software sponsored by the Japan Acquisition, Technology, and Logistics Agency’s (ATLA) Air Systems Research Center (ASRC).

DroneLab runs on multiple platforms, including macOS, Unix-like operating systems, and Microsoft Windows, using the Java environment. The simulation environment is user-definable, displaying either an image as a background or a blank field of 2000 × 2000 m. A background image is typically a satellite photo of arbitrary size. The environment is three-dimensional, displaying both a two-dimensional top-down view and a three-dimensional view. Various sizes, heights, and dimensions of square, circular, and rectangular objects can be placed on the field both before and during a simulation. Survivors can also be placed on the field at specific locations and/or distributed randomly. Deployment locations for rescue workers can be placed as rectangular areas on the field. The aerial robots are displayed as circles with spinning bars in their centers, whereas the obstacles are red objects in the two-dimensional view and yellow objects in the three-dimensional view. Survivors are shown as red dots on the field, turning yellow and finally green based on their states of discovery. Sensor range displays can be toggled on and off from the simulator’s user interface.

DroneLab allows the inclusion of one or many robots equipped with simulated sensors and equipment and supports the addition of pluggable behavior modules written in the Java programming language. It includes a physics engine that allows specification of speeds, acceleration rates, and various other physical properties, and provides collision checking and gravity. DroneLab allows the acceleration of time and the addition of obstacles “on-the-fly” to create a dynamic virtual environment.
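DroneLab's plugin interface is not published here, so the following is only a sketch of what a pluggable behavior module and a simple blending controller might look like. All names (Vec2, UavState, Behavior, blend) and the weighted-sum blending scheme are assumptions chosen to illustrate behavior-based control, not the actual API.

    import java.util.List;

    /** Illustrative behavior-module shape; all names and the blending scheme are assumptions. */
    public class BehaviorSketch {

        record Vec2(double x, double y) {}
        record UavState(Vec2 position, double batteryFraction) {}

        /** One reactive behavior: each tick, propose a velocity for this UAV. */
        interface Behavior {
            Vec2 propose(UavState self, double dtSeconds);
            double weight(); // relative influence when blended with other behaviors
        }

        /**
         * Blends all active behaviors (collision avoidance, flocking, search,
         * recharge, ...) into a single velocity command by weighted average.
         */
        static Vec2 blend(List<Behavior> behaviors, UavState self, double dt) {
            double vx = 0, vy = 0, total = 0;
            for (Behavior b : behaviors) {
                Vec2 v = b.propose(self, dt);
                vx += v.x() * b.weight();
                vy += v.y() * b.weight();
                total += b.weight();
            }
            return total > 0 ? new Vec2(vx / total, vy / total) : new Vec2(0, 0);
        }
    }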

Figures 11 and 12 show the results of the simulation when applied to swarms of UAVs using the parameters presented in Table 2. The percentage of survivors seen over time by the IR camera, referred to as camera coverage, was chosen as the displayed measure because its range generalizes to many other sensors. The detection radius of each UAV's simulated camera was limited to 15 m to account for environmental conditions such as darkness, fog, rain, snow, and debris. The camera coverage percentage shown on the vertical axis of the figures is the number of survivors detected by the camera of any UAV divided by the total number of survivors in the simulation, that is, the fraction of all survivors detected by the swarm as a whole. Survivor distributions use the mixed method described previously in Fig. 8: 350 survivors were congregated on and around likely evacuation sites (Editorial Office of the Ishinomaki Kahoku 2014) and 300 were scattered randomly across the disaster area, for a total of 650 survivors. According to sources from Arahama (Editorial Office of the Ishinomaki Kahoku 2014; Sato 2015; A. E. S. M. Staff Member 2017), 650 is roughly the population of the local area at the time of the tsunami. The time axis shows the hours, minutes, and seconds since the UAV swarm was deployed.
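For clarity, the coverage metric can be expressed directly in code. The sketch below assumes the paper's 15-m detection radius; the class and method names are illustrative.

    /** Illustrative computation of the camera coverage metric plotted in Figs. 11 and 12. */
    public final class Coverage {
        static final double CAMERA_RADIUS_M = 15.0; // reduced range per the text

        /** True if a survivor at (sx, sy) falls inside a UAV's camera footprint. */
        static boolean detects(double uavX, double uavY, double sx, double sy) {
            double dx = uavX - sx, dy = uavY - sy;
            return dx * dx + dy * dy <= CAMERA_RADIUS_M * CAMERA_RADIUS_M;
        }

        /** Coverage percentage: survivors detected so far over all survivors. */
        static double coveragePercent(int detected, int total) {
            return total == 0 ? 0.0 : 100.0 * detected / total;
        }
    }

In the 650-survivor scenario above, coveragePercent(585, 650) returns exactly 90.0, the goal rate.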

Fig. 11

Average percentage of survivors found over time, referred to as camera coverage, by a swarm of five UAVs across six simulation runs. Scatter method was the slowest and spiral method the fastest to reach the goal of 90%. The jump in coverage just after 43 min occurs when the swarm encounters an evacuation center such as a school in which many survivors are co-located

Fig. 12

Average camera coverage rates of swarms of five, 10, and 20 UAVs for all three methods shown in parallel. ST is standard, SP is spiral, and SC is scatter. Scatter method with five UAVs was the slowest and spiral method with 20 UAVs the fastest to reach the 90% coverage goal

Figures 11 and 12 show that every configuration achieved 90% or more camera coverage in under 2 h. Swarms of 10 or 20 UAVs using the standard or spiral method consistently discovered 90% or more of simulated survivors in less than an hour. Left running for 4 h, swarms of 10 or more UAVs consistently achieved a 98% or 99% location rate as they re-ran their routes in flocking formation. Similar differences among the algorithms were observed when the UAVs were equipped with a simulated 5-m-range FINDER sensor instead of the camera, though discovery times generally increased by 30–50% with the shorter-ranged sensor.

These results are significant because they show the potential to spot 90% of visible survivors of a disaster, even in hazardous, non-drivable, or inundated areas, in under an hour with little operator intervention using the proposed technique. This is well within the 24-h window suggested as optimal for disaster response (Editorial Office of the Ishinomaki Kahoku 2014; Bartels et al. 2010), even when the potential multi-hour mobilization times for manned rescue teams are factored in. These results likely represent a significant improvement over existing methods. Data on the time it takes rescue workers to thoroughly search an equivalent area using existing methods without UAVs vary by situation and are difficult to quantify; however, available evidence suggests that it can take days to search the most significantly affected areas (Editorial Office of the Ishinomaki Kahoku 2014).

Additionally, there are many situations in which long-term search and rescue efforts are necessary and difficult to sustain by manned personnel (American Red Cross 2015). At times, survivors are discovered days or even weeks after the initial disaster strike. In these situations, swarms of UAVs may continue operating and searching with little human interaction needed to achieve a high degree of sensor coverage over a short period of time. A swarm of 10 UAVs using the spiral method on average was able to achieve 98.9% camera coverage in under 90 min. This rate slowly grows over time due to the unpredictable nature of the swarm patterns. Each time the pattern is re-flown, the positions of each UAV differ due to responsive flocking behavior. This element of randomness improved long-term search results and could be leveraged to a higher degree in a non-simulated system.

Despite these results, it is important to acknowledge that in order for this data to be useful at present, a human rescue worker would necessarily view and process the data so that survivors could actually be rescued. The swarms of UAVs simulated in this research are not intended to perform actual rescues, although such efforts are possible (Erdelj et al. 2017) (American Red Cross 2015). Therefore, this research acts as an initial step to demonstrate what lies within the realm of the possible using a behavior-based UAV swarming approach to disaster search and rescue. The research also provides suggestions for initial algorithms and search methods that have proven effective in simulation.

Areas of improvement

Despite the positive outcomes evident in the simulation results, one persistent cause of delay across all scenarios was the battery recharge behavior. The flattening of discovery rates within each simulation run often occurs when UAVs run low on battery around the 20–25-min mark and return to the deployment location to recharge. This happens repeatedly as batteries discharge, sometimes requiring a UAV to fly across the entire area to return to the charging station. More intelligent recharging could further improve the algorithms' results. For example, future iterations of the search algorithms could integrate battery recharge into their designs, triggering an auto-charge whenever a UAV flies within a certain distance of its recharge station while its battery life is below a certain threshold, as sketched below.
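A minimal sketch of such an opportunistic recharge trigger follows. The distance and battery thresholds are illustrative assumptions, not tuned values from the study.

    /** Sketch of an opportunistic recharge trigger; thresholds are illustrative. */
    public final class OpportunisticRecharge {
        static final double NEAR_STATION_M = 100.0; // "certain distance" from the station
        static final double LOW_BATTERY    = 0.40;  // "certain threshold" of battery life
        static final double CRITICAL       = 0.15;  // hard cutoff: always return home

        static boolean shouldRecharge(double distToStationM, double batteryFraction) {
            if (batteryFraction <= CRITICAL) {
                return true; // must return regardless of position
            }
            // Opportunistic top-up: recharging now is cheap because the UAV is
            // already nearby, avoiding a long cross-field flight later.
            return batteryFraction <= LOW_BATTERY && distToStationM <= NEAR_STATION_M;
        }
    }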

Applications

This section provides a sample of practical applications for which this research can be leveraged, as well as brief guidance on how to apply the research to actual situations. Although some aspects of this research are experimental, such as the simulated miniaturized FINDER sensor, other aspects, such as the use of a UAV swarm equipped with Wi-Fi and IR cameras, are readily usable today.

Deployment

At present, the recommended deployment configuration is 5 or 10 commercially available UAVs with parameters as good as or better than those specified in Table 2. Each UAV should be equipped with an infrared (IR) camera and loaded with the software used in this research. Additional work would be needed to pull data from real, rather than simulated, IR cameras and other sensors.

To accept the data feeds from the UAV swarm, a ground station and/or mobile application could be developed. This application could be designed in many different ways, but the basics could include a top-down graphical map, photo, or blank image of the search environment with a built-in customizable distance scale such as the one used in the DroneLab simulation software. As survivors are located, the operator or operators could tap the screen to indicate their locations. At present, the recognition of humans from camera feeds is a challenging research problem in and of itself. Thus, the rescue personnel could view the data feeds of the various UAVs and mark locations on a shared map. The combination of cooperative UAV swarm, mobile application, and input from rescue personnel would form a viable mode of operation using technology available today.
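As a rough illustration of the shared-map idea, the sketch below stores operator-tapped survivor markers in a thread-safe list so that several rescue workers can contribute concurrently. The types and fields are assumptions; a real application would add networking and map rendering.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    /** Sketch of a shared survivor map fed by operator taps; types are assumptions. */
    public final class SharedSurvivorMap {
        public record Marker(double x, double y, String reportedBy, long timestampMs) {}

        // Thread-safe list so several operators can add markers concurrently.
        private final List<Marker> markers = new CopyOnWriteArrayList<>();

        /** Called when an operator taps a survivor's location on a UAV video feed. */
        public void markSurvivor(double x, double y, String operatorId) {
            markers.add(new Marker(x, y, operatorId, System.currentTimeMillis()));
        }

        /** Snapshot of all reported locations for display on every operator's map. */
        public List<Marker> allMarkers() {
            return List.copyOf(markers);
        }
    }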

Types of disasters

The approach described in this research is well suited to earthquake and tsunami disasters, as well as any destructive natural or man-made disasters in which environmental or political conditions make the deployment of rescue vehicles or personnel difficult. These situations include the presence of significant or hazardous debris, inundated terrain, and/or dangerous or hostile conditions. Due to limits in UAV communication range and battery life, the cooperative behavior is optimized over a 2-km² area. Thus, the approach is particularly well suited to environments in which the presence of undiscovered survivors within a particular area is strongly suspected, for example, within cities, towns, villages, or other populated areas.

Civil/defense applications

In addition to civilian search and rescue, this research has a number of applications in both the civil and defense sectors. With much detail omitted, the following is a list of potential applications in which swarms of autonomous cooperating UAVs such as those simulated in this research could be highly valuable:

  • Intelligence gathering

  • Combat search and rescue

  • Smart object location acquisition

  • Incoming threat detection

  • Site assessment and map-building

  • Counter-UAS and counter-swarming

Ethical considerations

A number of ethical considerations surround the use of humanitarian robotics. One is that swarming algorithms, and autonomous robotic systems in general, are inherently dual-use: they can often be applied to either civilian or military purposes. Although this research focuses on UAV swarms for humanitarian disaster relief, and the defense applications outlined above do not specifically recommend weaponization of this research or technology, such an outcome is possible.

The use of fully autonomous systems in weapons opens the potential for a new type of risk. When implemented on weapons platforms, autonomous systems can select and attack targets in ways that are faster than, and different from, those performed by humans. Due to the potential for unintended collateral damage caused by these systems, the United States Department of Defense does not permit lethal fully autonomous weapons systems at this time (Human Rights Watch 2013). All weapons that include artificial intelligence must also include a human supervisor, or "human-in-the-loop," for decision-making (US Department of Defense 2012).

In addition to its dual-use nature, other ethical concerns are inherent to humanitarian robotics research. Any time a machine is empowered with the ability to make or influence decisions that affect people's lives, ethics becomes an important factor in system development and deployment (Sandvik et al. 2014). When designing a system based on this research, these factors should be among those considered as part of a comprehensive systemic ethics policy. Relevant considerations include the following:

  • If an autonomous robot swarm is used to detect and report the locations of survivors, what issues might cause bias in reporting? Computer algorithms are developed by humans and cannot be said to be entirely free of bias and politics (Sandvik et al. 2014). Different algorithms, test cases, or detection equipment could create bias in the detection and reporting process.

  • Certain people or types of people may be reported over others. For example, automated face recognition techniques tend to be more effective on certain ethnic groups (Sharkey 2018). If such techniques are used by the swarm system to detect survivors, there is likely to be detection bias.

  • As a behavior-based approach creating emergent intelligence, how might ethics be examined differently in the case of this research than it would be in a centrally controlled system?

  • Do behavior-based artificial intelligence systems fall under the same sets of considerations as centrally controlled systems?

Another valid ethical concern in humanitarian robotics is neutrality. Neutrality can be compromised if UAVs are perceived, even incorrectly, to be linked to a military or political power that has a stake in a humanitarian crisis (Emery 2016). Engagement with the local community is one way to address this concern. The UAVs considered in this research, moreover, are commercial or custom quadcopters commonly used by drone hobbyists; such models are less likely to cause tension or misperception than repurposed military UAVs might.

Perceptions of the ethical issues surrounding UAVs also differ in different parts of the world. For example, in Europe and North America, concerns about the use of UAVs tend to include invasion of privacy, misuse by government or law enforcement, and fears of an aviation disaster. However, concerns in the Tana Delta of Kenya, where humanitarian drones were field tested, revolved around practical concerns such as the strength of the UAV’s camera, how far the system could operate, how quickly the drones could be deployed in an emergency, and who would be in physical possession of the system (Boyd 2014). Given this knowledge, it is important to consider the concerns of the local communities with regard to humanitarian drones, rather than to superimpose the concerns of aid-providing nations in the mistaken assumption that the concerns are identical.

Conclusion

The results of the study appear to greatly improve the availability of situational awareness data in the first few hours after a major natural disaster, widely considered one of the most critical SAR areas in need of improvement (Editorial Office of the Ishinomaki Kahoku 2014; Erdelj et al. 2017; Tait Communications 2012; Bartels et al. 2010; Ochoa and Santos 2015; Shimanski 2005; Adams et al. 2007; Riley and Endsley 2004). Simulation data generated during the study show that a swarm of just five standard UAVs (see Note 6) executing the spiral method of cooperative, behavior-based search and rescue developed in this research can consistently achieve 98.8% 15-m-radius sensor coverage after 4 h, reaching the goal coverage rate of 90% in 90 min. The same swarm of five UAVs consistently achieves 92.5% 5-m-radius sensor coverage in 4 h, reaching the 90% goal in 3 h. As more robots are added, the numbers improve further. A 10-UAV swarm averages 98.9% standard sensor coverage after 4 h and reaches 90% coverage in only 53 min. Equipped with the more precise 5-m-radius sensor, 10 UAVs reach 96.9% coverage after 4 h, reaching the 90% goal in 108 min.

In many simulations, a swarm of 20 UAVs using the spiral method reached the 90% goal in less than 34 min: slightly over half an hour to discover 90% of all visible survivors within a 2-km² area littered with waterlogged fields, damaged structures, fallen trees, and piles of overturned cars.

The spiral method is likely the quickest because it reacts more effectively to groups of survivors. It discovers clusters of survivors more quickly than the other methods through its spiral behavior module, which spirals outward from any area in which more than a certain number of survivors are detected, as sketched below. If different types of data were sought, a different set of behaviors might prove more effective.
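The following sketch illustrates the core of this idea: a trigger condition and outward Archimedean spiral waypoints centered on a detected cluster. The trigger count and ring spacing are illustrative assumptions rather than the study's actual parameters; the 25-m spacing is chosen so successive rings overlap under a 15-m camera radius.

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch of the spiral idea: trigger on a cluster, then sweep outward. */
    public final class SpiralSearch {
        static final int    TRIGGER_COUNT  = 3;    // assumed cluster size to trigger
        static final double RING_SPACING_M = 25.0; // < 2 x 15-m camera radius, so rings overlap

        record Point(double x, double y) {}

        static boolean shouldSpiral(int survivorsDetectedNearby) {
            return survivorsDetectedNearby > TRIGGER_COUNT;
        }

        /** Archimedean spiral waypoints r = b * theta centered on the cluster. */
        static List<Point> spiralWaypoints(Point center, int turns, int pointsPerTurn) {
            List<Point> waypoints = new ArrayList<>();
            double b = RING_SPACING_M / (2 * Math.PI); // radius gained per radian
            for (int i = 1; i <= turns * pointsPerTurn; i++) {
                double theta = 2 * Math.PI * i / pointsPerTurn;
                waypoints.add(new Point(center.x() + b * theta * Math.cos(theta),
                                        center.y() + b * theta * Math.sin(theta)));
            }
            return waypoints;
        }
    }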

Given the strong results of the simulations performed as part of this research, this approach to post-disaster assessment appears promising. Of course, in a real-world situation, the usage and availability of the data discovered by the UAVs is key. Also, although these simulations were designed to model a real environment with some degree of accuracy, the performance will certainly differ in an actual situation. However, this research does show that the use of swarms of UAVs with these algorithms has the potential to make a large amount of critical data available for consumption by rescue workers or other systems of interest. This research demonstrates the potential for high value in the area of disaster data acquisition using swarms of autonomous UAVs.

Change history

  • 01 June 2019

    Following publication of the original article [1], the authors reported errors.

Notes

  1. Standard parameters such as those of the commercially available DJI Phantom 4 quadcopter or similar model.

  2. Standard sensor coverage for this research is considered to be a 15-m radius detection range.

  3. The UK’s ORCHID Project seeks to create a disaster response system using a swarm of UAVs at a cost of around $2000 each (Ramchurn et al. 2016).

  4. A Finding Individuals for Disaster and Emergency Response (FINDER) sensor is a sensor developed by the US National Aeronautics and Space Administration (NASA) to aid in disaster search and rescue. A FINDER sensor uses low-power microwaves to detect the heartbeats of buried disaster survivors up to 9 m into a mound of rubble. It has been used to successfully locate survivors in Nepal. A FINDER sensor is currently the size of a carry-on bag and is thus not appropriate for carry by a standard quadcopter. However, simulating how a future miniaturized version of this sensor, or others like it, might perform alongside a standard visual or infrared camera provides an interesting comparison for the purposes of this research.

  5. The DroneLab simulation software, as well as the UAV controlling software, may be available upon request to the (Institution omitted for blind paper submission) or through request to the paper’s author. At the time of this writing, the software is not public domain.

  6. Standard parameters such as those of the commercially available DJI Phantom 4 quadcopter or similar model.

References

  • A. E. S. M. Staff Member (2017) Description of events at Arahama during the Tohoku tsunami [Interview]

  • Adams AL, Schmidt TA, Newgard CD, Federiuk CS, Christie M, Scorvo S, DeFreest M (2007) Search is a time-critical event: when search and rescue missions may become futile. Wilderness and Environmental Medicine 18(2):95–101

  • Alley RE (1992) Problems of search and rescue in disasters. In: The management of mass burn casualties and fire disasters. Springer Netherlands, Dordrecht, pp 175–176. https://doi.org/10.1007/978-0-585-33973-3_2

  • American Red Cross (2015) Drones for disaster response and relief operations

  • Arnold RD, Wade JP (2015) A definition of systems thinking: a systems approach. Procedia Computer Science 44:669–678

  • Balch T, Arkin RC (1998) Behavior-based formation control for multi-robot teams. IEEE Transactions on Robotics and Automation 14(6):926–939. https://doi.org/10.1109/70.736776

  • Bartels R, Herskovic V, Monares A, Ochoa SF, Pino JA, Roges MR (2010) A simple and portable command post to coordinate search and rescue activities in disaster relief efforts. In: 16th international conference on collaboration and technology. CRIWG, Maastricht

  • Boyd D (2014) Humanitarian drones: perceptions vs. reality in Kenya’s Tana Delta. The Sentinel Project

  • Brooks RA (1999) Cambrian intelligence: the early history of the new AI. The MIT Press, Cambridge, MA

  • Editorial Office of the Ishinomaki Kahoku (2014) Surviving the 2011 tsunami: 100 testimonies of Ishinomaki area survivors of the great East Japan earthquake. Junposha Co., Ltd., Bunkyo-ku, Tokyo

  • Emery JR (2016) The possibilities and pitfalls of humanitarian drones. Ethics and International Affairs 30(2):153–165

  • Erdelj M, Natalizio E, Chowdhury KR, Akyildiz IF (2017) Help from the sky: leveraging UAVs for disaster management. IEEE Pervasive Computing 16(1):24–32

  • Frelinger D, Kvitky J, Stanley W (1998) Proliferated autonomous weapons: an example of cooperative behavior. RAND Corporation

  • Geospatial Information Authority of Japan (2011) Arahama, Sendai on March 12, 2011

  • Guarnieri M, Kurazume R, Masuda H, Inoh T, Takita K, Debenest P, Hodoshima R, Fukushima E, Hirose S (2009) HELIOS system: a team of tracked robots for special urban search and rescue operations. In: IEEE/RSJ international conference on intelligent robots and systems. IEEE, St. Louis

  • Hambling D (2015) Watch 50 drones controlled at once in record-breaking swarm. New Scientist

  • Human Rights Watch (2013) Review of the 2012 US policy on autonomy in weapons systems

  • Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence. Morgan Kaufmann Publishers, San Francisco, CA

  • Koffka K (1922) Perception: an introduction to the Gestalt-Theorie. Psychological Bulletin 19:531–585

  • Kube CR, Zhang H (1992) Collective robotic intelligence. In: Second international conference on simulation of adaptive behavior, Honolulu. MIT Press, Cambridge, MA

  • Macintyre AG, Barbera JA, Smith ER (2006) Surviving collapsed structure entrapment after earthquakes: a time-to-rescue analysis. Prehospital and Disaster Medicine 21(1):4–19

  • Molina P, Pares ME, Colomina I, Vitoria T, Silva PF, Skaloud J, Kornus W, Prades R, Aguilera C (2012) Drones to the rescue! Unmanned aerial search missions based on thermal imaging and reliable navigation. InsideGNSS, July–August, pp 38–47

  • Municipal Development Policy Bureau (2017) Ruins of the great East Japan earthquake: Sendai Arahama elementary school. United Nations Office for Disaster Risk Reduction, Sendai City

  • Ochoa SF, Santos R (2015) Human-centric wireless sensor networks to improve information availability during urban search and rescue activities. Information Fusion, pp 71–84

  • Pinciroli C, Beltrame G (2016) Buzz: an extensible programming language for heterogeneous swarm robotics. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 3794–3800. https://doi.org/10.1109/IROS.2016.7759558

  • Post-Disaster Reconstruction Bureau (2015) Reconstruction of Sendai. In: Third UN conference on disaster risk reduction. United Nations Office for Disaster Risk Reduction, Sendai City

  • Ramchurn SD, Wu F, Fischer JE, Reece S, Jiang W, Roberts SJ, Rodden T, Jennings NR (2016) Human-agent collaboration for disaster response. Journal of Autonomous Agents and Multi-Agent Systems 30(1):82–111. https://doi.org/10.1007/s10458-015-9286-4

  • Riley JM, Endsley MR (2004) The hunt for situational awareness: human-robot interaction in search and rescue. In: Proceedings of the Human Factors and Ergonomics Society annual meeting

  • Sandvik KB et al (2014) Humanitarian technology: a critical research agenda. International Review of the Red Cross 96(893):219–242

  • Sato Y (2015) Museums and the great East Japan earthquake. Sendai Miyagi Museum Alliance, Sendai City

  • Sharkey N (2018) The impact of gender and race bias in AI. Humanitarian Law and Policy

  • Shimanski C (2005) Situational awareness in search and rescue operations. In: International Technical Rescue Symposium

  • Tait Communications (2012) Race against time: emergency response - preventing escalating chaos in a disaster. Tait Limited

  • The Center for Remembering 3.11 (2015) Activity report of the Center for Remembering 3.11. In: Third UN world conference on disaster risk reduction, Sendai City

  • Tohoku Regional Development Association (2015) Tohoku Regional Development Association earthquake disaster response: March 11th, 2011 the great East Japan earthquake. In: Third UN world conference on disaster risk reduction, Sendai City

  • US Department of Defense (2012) Autonomy in weapons systems. Directive Number 3000.09

  • Vásárhelyi G, Virágh C, Somorjai G, Tarcai N, Szörényi T, Nepusz T, Vicsek T (2014) Outdoor flocking and formation flight with autonomous aerial robots. In: IEEE/RSJ international conference on intelligent robots and systems. IEEE, Chicago

  • Virágh C, Vásárhelyi G, Tarcai N, Szörényi T, Somorjai G, Nepusz T, Vicsek T (2014) Flocking algorithm for autonomous flying robots. Bioinspiration & Biomimetics 9(2):025012

  • Washington State Department of Transportation (1997) Visual search patterns, pp 177–191

  • Wiener N (1948) Cybernetics: or control and communication in the animal and the machine. The MIT Press, Cambridge, MA

  • Williams M (2015) Researchers envisage swarms of tiny drones for dangerous rescue missions. PCWorld


Acknowledgements

Not applicable.

Funding

Funding was provided by the Japan Acquisition, Technology, and Logistics Agency and the United States Department of Defense. The authors are employees of these agencies.

Availability of data and materials

Please contact the author for data requests.

Author information


Contributions

RA carried out the research, wrote the research software, and wrote the manuscript. HY conceived of the study and participated in the research reviews and coordination. TT advised the research, participated in the design of the study, provided the background materials for the research, and participated in the research reviews and coordination. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ross D. Arnold.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Arnold, R.D., Yamaguchi, H. & Tanaka, T. Search and rescue with autonomous flying robots through behavior-based cooperative intelligence. Int J Humanitarian Action 3, 18 (2018). https://doi.org/10.1186/s41018-018-0045-4


Keywords