5
Building the Future
Through its engagement in a range of unmanned aircraft research programs, the Federal Aviation Administration (FAA) can help move unmanned aircraft system (UAS) technology forward in keeping with the Department of Transportation mission to ensure “a fast, safe, efficient, accessible and convenient transportation system that meets our vital national interests.” One of the most pressing topics for the FAA’s consideration is the future role of increasingly autonomous systems, which have the potential to dramatically improve both the efficiency and the safety of the U.S. air transportation system. Realizing these benefits, though, will require risk assessment methods that keep pace with advances in automation. By becoming engaged in research on autonomous systems and on complementary risk assessment methods, the FAA can help ensure that potential benefits of autonomy in aviation are realized without costly and unnecessary delay.
A major motive for increasing automation is to reduce the possibility for human errors that result in accidents. But while human factors play an important role in the majority of commercial aviation accidents (Shappell et al., 2007), human operators can also help prevent accidents by reacting appropriately to unexpected or unprecedented situations.
Automated systems, which respond in a deterministic way to predetermined conditions, can help to address previously identified hazards and their causes. Considerable advances are being made in developing perception and reasoning capabilities to address unusual yet previously identified situations, such as loss of communication with a ground control station or airspeed sensor corruption. However, automated systems may respond inappropriately in complex situations that were not anticipated in their design, such as situations where multiple hazards arise simultaneously in unanticipated combinations. Such situations can result in an accident when a human operator is suddenly required to take control (Strauch, 2017). Thus, while increased automation promises measurable benefits for system safety, it requires parallel advancements in effective human interfaces.
Autonomous systems that are adaptive and nondeterministic1 will have the ability to learn and continually
___________________
1 “Adaptive systems have the ability to modify their behavior in response to their external environment. For aircraft systems, this could include commands from the pilot and inputs from aircraft systems, including sensors that report conditions outside the aircraft. Some of these inputs, such as airspeed, will be stochastic because of sensor noise as well as the complex relationship between atmospheric conditions and sensor readings that are not fully captured in calibration equations. Adaptive systems learn from their experience, either operational or simulated, so that the response of the system to a given set of inputs varies over time. Systems that are nondeterministic may or may not be adaptive. They may be subject to the stochastic influences imposed by their complex internal operational architectures or their external environment, meaning that they will not always respond in precisely the same way even when presented with identical inputs or stimuli. Many advanced AI systems are expected to be adaptive and/or nondeterministic.” National Research Council, 2014, Autonomy Research for Civil Aviation: Toward a New Era of Flight, The National Academies Press, Washington, D.C.
improve their performance as humans do. This may help to maintain safety in complex environments by combining the computer’s advantage of processing speed and capacity with the ability to respond to novel situations. As with automated systems, though, autonomous systems may not respond appropriately to situations that lie outside their design parameters and experience base, and it may be difficult for a human operator to step in and take appropriate action if the autonomous system fails to respond appropriately. And because human operators may rely even more on autonomous systems than on automated ones, operators’ ability to intervene suddenly may be further compromised. Conversely, a human operator may not trust an autonomous system enough to allow the system to maintain control if the operator perceives, rightly or wrongly, that the system’s actions are unsafe. Operators of self-driving cars, for example, have posted videos of themselves sleeping or sitting in the back seats of their vehicles (Davies, 2016). The ability of an autonomous system to respond more like a human comes with an additional downside: one cannot predict with certainty what a learning system will do in a given situation. In particular, it is impossible to guarantee the system will always respond in a safe manner.
Nevertheless, systems with higher levels of autonomy are proliferating in a range of industries. Potential benefits include significant gains in productivity and quality (e.g., in robotic manufacturing), as well as in safety (e.g., an autonomous vehicle does not become fatigued). Additional research is needed, however, to develop risk assessment and risk management methods that will enable verification, validation, and certification of adaptive/nondeterministic systems for safety-critical applications such as aviation.
Finding: Systems with high levels of autonomy have the potential to improve the operational safety of UAS. However, existing verification, validation, and certification processes cannot ensure that highly autonomous systems that are adaptive or nondeterministic can satisfy safety standards for commercial aircraft. For this reason, highly autonomous systems are not currently allowed for commercial UAS flying within the National Airspace System. As a result, opportunities to increase the safety of UAS operations, and of aviation in general, through increased autonomy are being missed due to a lack of accepted risk assessment methods.
Beyond autonomy, several key topics require continuing research to address the risks posed by UAS integration. These research topics can be identified in the context of the FAA’s own describe, identify, analyze, assess, and treat (DIAAT) process for safety risk management (SRM), as follows:
- Describe the system. Medium-to-large UAS will share runway space and all classes of airspace with manned aircraft; small commercial UAS will continue to proliferate, serving a variety of users and, in some cases, sharing airspace with manned aircraft in Class E and G airspace; and hobbyist UAS will continue operating in Class G airspace with minimal oversight. As autonomy begins to pervade other technology domains, the desire of some stakeholders to incorporate fleet-wide, continual learning into UAS will strengthen.
- Identify the hazards. UAS are subject to many of the same hazards as manned aircraft in terms of the vehicle, the airmen, the operation, and the environment, but there are also hazards unique to UAS, including “lost link” or failure to “see and avoid” conflicting traffic (Luxhøj and Öztekin, 2009).
- Analyze the risk. The threat UAS pose to human life is primarily to people on the ground or aboard manned aircraft. Many of the specific risks that could lead to injuries or fatalities are unknown and are poorly approximated using existing methods, requiring new investigative tools; see Campolettano et al. (2017), for example.
- Assess the risk. The use of network data transfer to support UAS operations presents opportunities for real-time risk assessment using modern tools for data analytics. In parallel, the development of improved simulation capabilities supports quantitative risk assessment (Kochenderfer et al., 2008) for known risks (e.g., aircraft collision) and even the identification of rare hazards. Further, because of their unique risk profile (e.g., the absence of passengers and crew), UAS are often promoted as risk-reduction mechanisms (e.g., cellular data tower inspection). It is thus appropriate to consider comparative risk in an overall assessment.
- Treat the risk. Mitigations such as flight termination systems, “geofences,” and minimum-risk path planning methods can enhance safety, allowing operational limitations to be relaxed. Improved human interfaces that
ensure an appropriate level of cognitive engagement by an operator or supervisor can more fully exploit the complementary strengths of humans and automation.2
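As a hedged illustration of one mitigation listed above, a geofence ultimately reduces to a position check against a keep-out volume. The sketch below tests a UAS position against a circular keep-out zone using the standard haversine great-circle distance; the coordinates, radius, and function names are illustrative assumptions, not any vendor’s implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two latitude/longitude points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(uas_lat, uas_lon, zone_lat, zone_lon, radius_km):
    """True if the UAS position falls within a circular keep-out zone."""
    return haversine_km(uas_lat, uas_lon, zone_lat, zone_lon) <= radius_km

# Hypothetical 8 km keep-out zone around an airport reference point.
print(inside_geofence(38.85, -77.05, 38.8512, -77.0402, 8.0))
```

In practice a geofence would also bound altitude and use polygonal zones from an airspace database, but the enforcement logic (deny activation or command a return when the predicate is true) follows the same pattern.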
To summarize, a range of topics that are the focus of ongoing research can inform the FAA’s evolving strategy for risk assessment in the era of UAS integration. A short (and clearly incomplete) list includes the following:
- Testing and evaluation, verification and validation methods for autonomous learning systems;
- Human interaction with automated systems (e.g., effective human-machine interfaces for automated multi-UAS networks, and optimal cognitive loading in varying conditions); and
- Real-time data analytics (e.g., probabilistic inference using dynamic Bayesian belief networks, and classification using convolutional neural networks).
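To make the last topic concrete, probabilistic inference of the kind used in dynamic Bayesian belief networks can be sketched as a two-state filter that tracks belief in a “lost link” hazard from telemetry observations. This is a minimal illustration only: the states, transition probabilities, and observation likelihoods below are assumed values, not measured FAA or industry data.

```python
# Minimal two-state dynamic Bayesian filter for a "lost link" hazard.
# All probabilities are illustrative assumptions, not measured values.

P_TRANS = {  # P(next state | current state)
    "healthy":  {"healthy": 0.99, "degraded": 0.01},
    "degraded": {"healthy": 0.10, "degraded": 0.90},
}
P_OBS = {  # P(telemetry packet received | state)
    "healthy": 0.98,
    "degraded": 0.30,
}

def update(belief, packet_received):
    """One predict-then-correct step; belief maps state -> probability."""
    # Predict: propagate the belief through the transition model.
    predicted = {
        s2: sum(belief[s1] * P_TRANS[s1][s2] for s1 in belief)
        for s2 in ("healthy", "degraded")
    }
    # Correct: weight each state by the observation likelihood, then normalize.
    likelihood = {s: P_OBS[s] if packet_received else 1.0 - P_OBS[s]
                  for s in predicted}
    unnorm = {s: predicted[s] * likelihood[s] for s in predicted}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {"healthy": 0.95, "degraded": 0.05}
for packet in (True, False, False, False):  # a run of dropped packets
    belief = update(belief, packet)
print(round(belief["degraded"], 3))
```

After a run of dropped packets the posterior probability of the degraded state dominates, which is the kind of real-time evidence a risk assessment layer could act on (e.g., triggering a lost-link contingency).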
These topics, as well as many other topics also related to improved risk analysis for UAS, are the focus of major domestic and international research initiatives (see Figure 5.1) that cut across many technology domains. Many of these initiatives are aimed at accelerating the integration of learning and autonomy into essential technology domains, from energy to medicine to transportation. Developing risk assessments in belated response to the emergence of new concepts would stunt the pace of innovation and clog the technology transition pipeline with increasingly obsolete ideas. A better approach would be to develop risk assessment methods concurrently with developmental research so that promising technologies can be safely transitioned into use as they emerge. Consider, for example, the UAS Traffic Management (UTM) program, which is described in Chapter 4. In this effort, the FAA has presented a timeline for the operational implementation of UTM once it transitions from NASA to the FAA. NASA introduced the concept of UTM more than 3 years ago and has been working in open and transparent collaboration with industry throughout its development and validation of the concept, including multiple demonstrations building upon lessons learned from each. The FAA established the program as a research program 3 years ago, and it is not clear to the committee why the transition is taking so long.
Recommendation: In coordination with other domestic and international agencies, the FAA should pursue a planned research program in probabilistic risk analysis (PRA), including the aspect of comparative risk, so that FAA personnel can interpret or apply PRA for proposed technology innovations.
___________________
2 A “geofence” is a modification to a UAS’s navigation software that forbids the UAS from operating in or traveling into a restricted area such as the airspace around an airport. As an example, many hobbyist UAS will not activate when close to an airport.
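The comparative-risk aspect of the recommended PRA program can be sketched with a simple Monte Carlo comparison of a crewed task against its UAS substitute (here, tower inspection). Every rate in the sketch is an assumed placeholder chosen for illustration; a real analysis would draw these from incident data and propagate their uncertainty.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Illustrative (assumed) per-inspection parameters for a tower-inspection
# task; none of these rates come from FAA data.
P_FALL_INJURY_CREWED = 1e-4   # climber injury per crewed inspection
P_CRASH_UAS = 1e-3            # UAS crash per inspection flight
P_INJURY_GIVEN_CRASH = 5e-2   # bystander injury given a UAS crash

def injuries(n_inspections, p_event, p_injury_given_event=1.0):
    """Monte Carlo count of injuries over n_inspections independent trials."""
    count = 0
    for _ in range(n_inspections):
        if random.random() < p_event and random.random() < p_injury_given_event:
            count += 1
    return count

N = 200_000
crewed = injuries(N, P_FALL_INJURY_CREWED)
uas = injuries(N, P_CRASH_UAS, P_INJURY_GIVEN_CRASH)
print(f"crewed injuries: {crewed}, UAS injuries: {uas} over {N} inspections")
```

Under these assumed rates the expected injury counts differ by roughly a factor of two in favor of the UAS, illustrating why a comparative framing can change the conclusion relative to assessing the UAS operation in isolation.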
REFERENCES
Campolettano, E.T., M.L. Bland, R.A. Gellner, D.A. Sproule, B. Rowson, A.M. Tyson, S.M. Duma, and S. Rowson. 2017. Ranges of injury risk associated with impact from unmanned aircraft systems. Annals of Biomedical Engineering 45(12):2733-2741.
Davies, A. 2016. Tesla’s Autopilot Has Had Its First Deadly Crash. Wired.com, June 30, 2016. https://www.wired.com/2016/06/teslas-autopilot-first-deadly-crash.
FAA (Federal Aviation Administration). 2017. Presentation by B. Crozier, FAA, to the National Academies Committee on Assessing the Risks of UAS Integration, September 26.
Kochenderfer, M.J., L.P. Espindle, J.K. Kuchar, and J.D. Griffith. 2008. A comprehensive aircraft encounter model of the National Airspace System. Lincoln Laboratory Journal 17(2):41-53.
Luxhøj, J., and A. Öztekin. 2009. “A Regulatory-Based Approach to Safety Analysis of Unmanned Aircraft Systems,” presentation at HCI 2009: 13th Int. Conf. on Human-Computer Interaction, San Diego, CA, July 19-24.
Shappell, S., C. Detwiler, K. Holcomb, C. Hackworth, A. Boquet, and D.A. Wiegmann. 2007. Human error and commercial aviation accidents: An analysis using the human factors analysis and classification system. Human Factors 49(2):227-242.
Strauch, B. 2017. Ironies of automation: Still unresolved after all these years. IEEE Transactions on Human-Machine Systems. doi: 10.1109/THMS.2017.2732506.