EMRA TECHNOLOGIES

Artificial intelligence and machine learning for unmanned vehicles

Military experts are developing new enabling technologies to help unmanned aircraft, ground vehicles, submarines, and surface vessels swarm and make decisions without human intervention. What was once the realm of science fiction writers is becoming reality as unmanned vehicles are given more capability for autonomous decision making, thanks to improvements in artificial intelligence (AI) and machine learning.

First, unmanned aerial vehicles (UAVs) took to the skies. In fact, the British military developed the first radio-controlled unmanned aircraft during World War I — a scant 14 years after the Wright Brothers’ first flight in 1903. UAVs really came into their own during the Vietnam War and have become even more prevalent and essential since then.

As UAVs proved themselves invaluable over the years, the Department of Defense (DOD) asked industry experts to bring the unmanned revolution down to Earth in the form of unmanned ground vehicles (UGVs) and even to the seas with unmanned underwater vehicles (UUVs). Though each domain had unique problems to solve, the difficulty of communicating from the surface to UUVs meant a greater need for machine autonomy — the ability to make decisions without direct human input — made possible with AI and machine learning. Even though established communications methods allow for relatively easy contact with UGVs and UAVs, the prospect of “smart” unmanned systems on the ground and in the sky also is helping drive the autonomous revolution.

The ability of unmanned systems underwater, on the ground, and in the skies to operate with full autonomy requires a lot of processing power. Mike Southworth, senior product manager at the Curtiss-Wright Defense Solutions division in Salt Lake City, notes that tying together all the different technologies needed to make a vehicle autonomous is a major driver of the need for big computing power.
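To get a feel for why the computing demands are so large, consider a rough back-of-the-envelope sketch of the raw data rate a vehicle’s sensor suite can produce. All sensor counts, resolutions, and rates below are illustrative assumptions, not figures from Curtiss-Wright or the article:

```python
# Illustrative sketch: raw (uncompressed) data rate produced by a
# hypothetical autonomous-vehicle sensor suite. All numbers are assumed.

def camera_rate_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bandwidth for one camera, in gigabits/second."""
    return width * height * bits_per_pixel * fps / 1e9

# Assumed suite: six 1080p color cameras plus one lidar stream.
cameras = 6 * camera_rate_gbps(1920, 1080, 24, 30)  # ~1.5 Gbit/s each
lidar = 0.3                                          # assumed ~300 Mbit/s
total = cameras + lidar

print(f"Raw sensor bandwidth: ~{total:.1f} Gbit/s")  # prints "Raw sensor bandwidth: ~9.3 Gbit/s"
```

Even with these modest assumptions, streaming roughly 9 Gbit/s of raw sensor data to a remote data center is impractical on a contested battlefield network, which is why the processing tends to move onto the vehicle itself.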
“Analyzing large amounts of data to complete complex deep-learning algorithms can require significant processing capabilities. Think of the number of devices, such as cameras and sensors, that are required for an autonomous vehicle to safely drive without human intervention, for example,” Southworth says. “The more devices there are collecting data, the higher the competition for bandwidth. That’s why so much of today’s data processing for AI takes place within traditional computer data centers.

“Relying on centralized processing exclusively at the data center has inherent limitations, however, including bandwidth, security, and availability,” Southworth continues. “Consider the paradigm where all of the sensor data is uploaded to a centralized data center for processing, and then edge computers wait for a response from the data center to execute a command. If relying on the cloud alone, there could be a tangible delay or latency between when an application issues an instruction and when it receives a response. Autonomous vehicles may need real-time vision and perception for safe navigation, path planning, or active protection. Imagine the consequences in a battlefield scenario where an incoming threat is detected, but there’s a measurable network delay before any countermeasures can be taken. Lives may be lost while threats are not eliminated. High-performance embedded computing integrated onto vehicle platforms can potentially overcome those obstacles and enable deep learning on the battlefield.”

When it comes to unmanned systems — no matter the domain — size matters. With the need for many integrated technologies to fit into something with a finite footprint, high-performance embedded computing (HPEC) helps make it all possible.

“As an industry, we are always trying to get more out of the systems we develop.
HPEC computing, when coupled with AI capabilities, brings forth a powerful computing solution in a SWaP [size, weight, and power]-optimized form factor,” says Valerie Andrew, who manages strategic marketing and communication for embedded computing specialist Elma Electronic Inc. in Fremont, Calif. “With the large number of data points in use within any given embedded system, having the high-performance infrastructure to manage and process that information, which in turn fuels AI and deep learning, gives unmanned systems access to on-demand intelligence that can be used for operational activities.”

Andrew, who also is the Sensor Open Systems Architecture Consortium Business Working Group Outreach Lead, notes that the DOD’s move to interoperability through a common modular open systems architecture (MOSA) can remove barriers to development — even in unmanned systems built around AI.

“Advanced learning technologies will become even more critical, not just ‘do these systems work together,’” Andrew says. “This will be exponential in terms of system functionality, where more rapid development and deployment of unmanned systems will start to take place. For the DOD, this means having access to best-in-class technologies fast and affordably. For the warfighter, it means increased knowledge and decision-making abilities during a mission.”

The benefits of using HPEC in independent or partially independent embedded supercomputers are twofold, says Dan Mor, director of the video and general-purpose graphics processing unit (GPGPU) product line at Aitech Defense Systems Inc. in Chatsworth, Calif.

“HPEC systems enable far more functionality to be housed in a smaller framework, so that processing of larger datasets can happen closer to the sensor, where it is needed most,” Mor says. “And ruggedizing these HPEC systems means computing power can be used in a wider number of remote and mobile locations.
As these processing demands increase, SFF [small-form-factor] systems using GPGPU technology and AI-based solutions are providing a path to next-generation embedded systems, poised to tackle the growing field of mobile, unmanned, and autonomous vehicle technologies, bringing computing power to areas never before conceivable.”

Small form factor

Aneesh Kothari, vice president of marketing at Systel Inc. in Sugar Land, Texas, says that unmanned systems can take advantage of on-board AI and machine learning to reduce liabilities brought on by the realities of operating in a contested environment.

“High-performance embedded edge computing is critical to deployed AI mission success. Operating in a contested environment with restricted bandwidth and degraded communications makes the tactical use of cloud-based computing and AI a liability. Computational processing capability must reside on-premise to ensure the low latency and near-real-time speed demanded of AI-based applications. Advances in COTS (commercial off-the-shelf) technologies in recent years have allowed for practical embedded edge computing use in unmanned vehicles,” Kothari says. “Systel’s rugged embedded systems such as Kite-Strike and Raven-Strike integrate commercial hardware components such as video capture cards and encoders, and the latest NVIDIA Ampere-based and Jetson Xavier GPUs in small-form-factor (SFF) rugged embedded computers, making them ideal for use in unmanned vehicles.”

With high-powered embedded computers providing the processing power needed to run the “smarts” of the AI and machine learning technology keeping UAVs, UUVs, and UGVs on the move, there is a need to keep everything in the vehicle cool. Mercury Systems in Andover, Mass., provides several processing solutions for the military-aerospace sector. Karen Haigh, the company’s chief technologist, says that cooling is key for unmanned systems.
“Higher, bigger processing power is needed for these unmanned systems, so when you compute, you generate heat. So, the better you can control the heat through a cooling system, the more you can get done in a smaller space. As things get smaller, it gets hotter in a more compact space, so you need to be able to cool it more effectively. And if you’ve got a system where you can’t stick a fan on the back of the computer to keep it from getting hot, you need to be able to do these things in these tiny spaces under high difficulty.

“So, at the bottom of the seabed, you’re not going to be wanting to have anywhere near the same kinds of solutions that you would like on your desk at home,” Haigh continues. “The more you militarize, the more you need a cooling breakthrough in there to come together nicely, to be able to give the capability to those systems.”

Sagetech Avionics in White Salmon, Wash., provides transponder and UAS situational awareness solutions to the unmanned aerial sector in both the military and civilian markets. The company’s CEO, Tom Furey, agrees about the importance of keeping unmanned systems cool.

“Microelectronics continue to evolve and enable innovation in shrinking both the size and heat generated by high-power electronics. For example, Sagetech achieves dramatic SWaP reductions with advanced microelectronics design that includes creative heat channeling to enable better thermal performance without relying on large, heavy heat sinks or low-reliability components such as fans,” Furey says.

“Heat dissipation is one of the most critical aspects to consider when designing an enclosure,” agrees Elma’s Andrew. “Because of the modularity of small-form-factor (SFF) systems, there is no one-size-fits-all, which increases the complexity of thermal management in today’s embedded computing systems.
Relying on well-established design principles, based on a holistic systems approach, allows manufacturers to produce custom-tailored enclosures for modern electronics applications, while keeping design costs to a minimum and heat profiles stable.”

The principal advantage of the technology enabling autonomous unmanned systems is in the name: warfighters are kept farther from harm’s way. Sending a UUV out to waters that are potentially mined means it is far less likely for a ship and her crew to be put in a dangerous situation. The same goes for reconnaissance work in the air and on the ground.

“There is growing creativity in the defense community for how manned-unmanned teaming may play out,” says Curtiss-Wright’s Southworth. “UUVs, for example, will be helping larger maritime vessels with detecting mines, doing rapid environment assessments (REAs), intelligence, surveillance, and reconnaissance (ISR), oceanographic data gathering, and harbor and coastal surveillance and protection. UGVs are also being developed for medical evacuation of injured warfighters. A downed soldier or ally could signal that an ambulance is needed, and an armored autonomous vehicle could then be dispatched, plan the safest route in and out of the area, and come to the rescue of the wounded.

“Autonomous weapon systems may detect enemies or potential threats. Once a threat has been identified and queued, the system could fire back automatically or rely on a human-in-the-loop to make the decision,” Southworth continues. “The same is true of equipment or supply transport.”

AI and unmanned vehicles are particularly useful for protecting convoys. “Convoys historically have been at risk of improvised explosive devices (IEDs) that can gravely injure soldiers,” Curtiss-Wright’s Southworth says. “Autonomous vehicles can now patrol ahead of convoys to detect IEDs, identify them, and mark the area. Having an unmanned vehicle in the front of a convoy could end up saving thousands of lives.
Eventually, the entire convoy could be made up of unmanned vehicles.”

Swarms of unmanned vehicles communicating and working together are another big potential payoff of unmanned and AI technologies. “Another application for unmanned vehicles involves swarm intelligence, where a group of drones, usually UAVs, self-organize into a coherent swarm, flying in synchrony without colliding,” Southworth says. “Multiple drones, for example, can survey an area of the battlefield and identify any enemy threats within the view of any one of the vehicles. Rather than flying a pre-programmed route and following a pre-programmed position, each drone tracks its own position and velocity, sharing that information with the rest of the swarm. This way they can self-navigate around obstacles, avoid enemy fire, and explore areas where they might notice a large contingent of enemy warfighters.”

Aitech’s Mor says artificial intelligence helps the warfighter by providing real-time information persistently. “There are several military applications already employing AI-based supercomputers, such as situational awareness systems, electronic warfare (EW) systems, and drones, as well as smart soldier and man-portable systems and augmented reality, but especially noteworthy is the growing set of UAVs using this technology,” Mor says. “With some AI-based SWaP-optimized supercomputers offering an ultra-compact footprint, roughly the size of a cell phone, unmanned systems can achieve incredibly high performance with remarkable levels of energy efficiency. They can operate longer during a mission, offer better reliability, and provide real-time data inputs by processing and transmitting that data back to the main command center.”

Other areas of unmanned systems innovation include building on existing vehicle platforms to extend the function of a single vehicle, Mor continues.
“For example, a fighter jet may have several unmanned vehicles synced up to its in-flight control center, extending its reach from one large aircraft to include several smaller units that act as a mini army, all working together and controlled by the pilot of the larger aircraft. This effectively extends the amount of airspace one craft can cover.”

Mercury’s Haigh notes that the small sizes achieved by unmanned systems, coupled with high endurance, keep people out of danger. “Broadly speaking, if you’ve got an unmanned device, you can go into spaces that the human can’t. That might be, for example, going into Fukushima or Chernobyl, or going down to the seabed and spending a lot of time there without having to worry about the limits of the human body. You’ve got these amazing sensor systems with which you can expand your eyes and ears outside of where the human body can go, and in a much safer fashion, even when it’s in the air, for example, and you’re just extending the reach of the sensing, but you still have the communications and it’s not necessarily dangerous to the human being.”

In addition, rapidly disseminated information helps commanders give orders faster than if they were relying on a warfighter with binoculars and a radio. “Adding AI/machine learning capabilities enhances rapid decision making at the front,” says Alex Wilson, the director of aerospace and defense industry solutions for real-time software specialist Wind River Systems in Alameda, Calif. “This provides the warfighter an advanced tool that can augment information and improve decision-making speed and accuracy.”

Elma’s Andrew agrees that autonomous systems are helping to drive the ability to make better-informed decisions. “The military computing ecosystem, itself, is going through an evolution,” Andrew says.
“Take the joint efforts between the DOD, government agencies, and industry over the past two years that have resulted in a collaborative effort to adopt a common platform through the development of an open standard. The Open Group Sensor Open Systems Architecture has enabled collaboration across different industry boundaries that were not achievable before. These are the same principles throughout military electronics development: the sharing of information and working together to provide a more sound and secure system to be able to make more informed decisions. Autonomous systems will help contribute to this intelligence as well.”

Wind River’s Wilson also says the combination of autonomous and swarming systems with traditional warfighters “will rapidly increase the force projection, allowing a smaller force to engage over a wider front. This enables greater flexibility in engagements and provides warfighter options to prosecute the mission effectively.”

AI and machine learning enable unmanned and manned systems like main battle tanks to offload some of the more monotonous situational work from the crews at base or in the field.

“If you think about a group of humans doing a task, if they’re well-coordinated, you can have a group of people accomplish something that a single individual can’t,” Mercury’s Haigh says. “If you’re thinking about looking for bodies in the World Trade Center, you can send your autonomous vehicles in to look for victims while the human is making the executive decision and making sure that everything is coordinated with the human response teams. So, your emergency response teams are focused, once a victim has been found, on going in and helping that victim while the unmanned vehicles are doing the searching, the monotonous, unpleasant stuff that the humans aren’t necessarily going to be wanting to do.”

AI can help offload some of the military’s more mundane activities, says Sandeep Neema, a program manager in the U.S.
Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) in Arlington, Va.

“The navigation functions for that tank should be reasonably offloadable. The warfighter is freed from those burdens and can focus more on the technical aspects of the battle. The autonomous future may be far off, but on the near horizon, it’s not difficult to drive from point A to point B. What’s really hard is gaining situational understanding.”

Neema’s primary role at DARPA is as part of the agency’s Assured Autonomy program. He says DOD experts must ensure that military unmanned systems operate safely, and are constantly monitored, updated, and evaluated.

Neema cites a trio of factors currently impeding the development and deployment of autonomous systems. The first is that operator involvement is still necessary. “This not only severely limits operational gains but creates significant new challenges in the areas of human-machine interaction and mixed-initiative control,” Neema writes in explaining the Assured Autonomy program on the DARPA website. The second impediment: “Achieving higher levels of autonomy in uncertain, unstructured, and dynamic environments, on the other hand, increasingly involves data-driven machine learning techniques with many open systems science and systems engineering challenges.” Finally, Neema writes that the third impediment is that “machine learning techniques widely used today are inherently unpredictable and lack the necessary mathematical framework to provide guarantees on correctness, while DOD applications that depend on safe and correct operation for mission success require predictable behavior and strong assurance.”

Another concern arises when unmanned systems act autonomously in missions that involve taking the lives of enemy combatants or destroying enemy vehicles, materiel, or infrastructure, missions that today fall to remotely piloted aircraft such as the MQ-9 Reaper UAV.
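The human-in-the-loop posture described earlier, where a system queues a detected threat but leaves the engage decision to a person, can be sketched in a few lines. This is a minimal illustrative model; all type names, function names, and the threshold are assumptions, not drawn from any fielded system:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Threat:
    track_id: int
    confidence: float  # classifier confidence in [0, 1]

def engage_decision(threat: Threat,
                    human_approves: Callable[[Threat], bool],
                    auto_threshold: float = 1.01) -> str:
    """Queue a detected threat; engage only with explicit human approval.

    With auto_threshold above 1.0, no classifier confidence is ever high
    enough to bypass the human: a strict human-in-the-loop posture.
    """
    if threat.confidence >= auto_threshold:
        return "ENGAGE (autonomous)"
    return "ENGAGE (human-approved)" if human_approves(threat) else "HOLD"

# Even a 0.99-confidence track is held when the human does not approve.
print(engage_decision(Threat(track_id=7, confidence=0.99),
                      human_approves=lambda t: False))  # prints "HOLD"
```

Lowering the threshold below 1.0 would model the “fire back automatically” end of the spectrum Southworth describes; keeping it above 1.0 keeps a person in every engagement decision, which is where the assurance concerns Neema raises become tractable.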
“As AI technologies become increasingly sophisticated and prevalent, it can be tempting to think of AI as a ‘silver bullet,’” says Systel’s Kothari. “While AI can offload a majority of the burden from the operator, achieving sensor fusion and automating analysis tasks that are essential when dealing with enormous amounts of raw data, and even as we move from ‘kill chains’ to more complex ‘kill webs,’ humans still very much need to remain in the loop.”

Aitech’s Mor says, “Ethical questions are always people’s concern, especially in the defense industry, which is why the U.S. Department of Defense officially adopted a series of ethical principles for the use of artificial intelligence, following recommendations provided to Secretary of Defense Mark T. Esper by the Defense Innovation Board. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public that resulted in a rigorous process of feedback and analysis among the nation’s leading AI experts, with multiple venues for public input and comment. The adoption of AI ethical principles aligns with the DOD AI strategic objective directing that the U.S. military lead in AI ethics and the lawful use of AI systems.”

Mor continues, “AI is a relatively new field in the defense market, and we are expecting a step-by-step fusion of AI into existing or new systems: human-controlled systems with partial AI implementations; AI-controlled systems with partial human interaction; AI-controlled systems with redundant human decision control; and AI autonomous systems. It will take some time to be able to count on these kinds of systems; in parallel, we will see more standardized AI implementations and ethical principle definitions. The European Commission also is working on a ‘Trustworthy AI’ definition.”
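Mor’s step-by-step fusion can be read as a ladder of autonomy levels. The sketch below models that ladder; the level names paraphrase his list, and the sign-off policy attached to it is purely an illustrative assumption:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Paraphrasing the four stages Mor lists, lowest to highest autonomy.
    HUMAN_CONTROLLED_PARTIAL_AI = 1  # human controls, AI assists
    AI_CONTROLLED_PARTIAL_HUMAN = 2  # AI controls, human intervenes
    AI_WITH_HUMAN_REDUNDANCY = 3     # AI controls, human retains veto
    FULLY_AUTONOMOUS = 4             # AI controls end to end

def human_signoff_required(level: AutonomyLevel, critical: bool) -> bool:
    """Illustrative policy: any critical action below full autonomy still
    routes through a human; routine actions need sign-off only at the
    most human-centric level."""
    if critical:
        return level < AutonomyLevel.FULLY_AUTONOMOUS
    return level == AutonomyLevel.HUMAN_CONTROLLED_PARTIAL_AI
```

Encoding the levels as an ordered enum makes the policy a simple comparison, which mirrors the incremental trust-building Mor describes: each step up the ladder removes one class of mandatory human sign-off.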
Apr 26th, 2021