Introducing Verbal Interaction to Enhance Virtual Simulation Training for Close Air Support in VBS3


January 14th, 2015 | By Colin Puskaritz

The human mind is unique in its ability to interpret the world around us: we see things, observe details, interpret our surroundings, and project future possibilities from the information at hand. As demand for serious games and virtual training continues to rise, so does the required level of realism. Realistic back-and-forth voice interaction between a human player and a virtual agent, however, remains a difficult problem. In this blog post, I’ll explore one of Discovery Machine’s development efforts to address this challenge. This particular effort takes place in the Virtual Battlespace 3 (VBS3) simulation, but similar capabilities can be created in a number of virtual environments.

View as seen from the video feed of an MQ-9 Reaper

To best describe the need for realistic voice communications, I’ll explore some of the capabilities of a recent Close Air Support (CAS) talk-on effort. In CAS, the ability to hear a command, observe the environment, and make an intelligent decision based on that information is crucial to producing realistic training results.

First, here is a quick introduction for those unfamiliar with CAS:

  • CAS is designed to coordinate attacks on recognized targets using aerial assets. CAS can be conducted using a variety of aircraft such as F-16s, A-10s, and MQ-9 Reapers to name a few.
  • There are three basic types of CAS, each of which has its own set of rules and restrictions.
  • CAS typically requires a Joint Terminal Attack Controller (JTAC) on the ground to send verbal communications to an aircraft to conduct each mission.
  • Common communications include requesting information from an aircraft in the form of a check-in report, conducting a 9-line brief that gives the pilot key information about the target for an attack run, and issuing final clearance to carry out an air strike (a simplified sketch of a 9-line brief follows this list).
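The 9-line brief itself has a fixed structure. Below is a simplified sketch of how such a brief might be represented in a training application; the field labels follow the standard CAS 9-line, but the Python structure and example values are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class NineLineBrief:
    """Simplified, illustrative representation of a CAS 9-line brief."""
    ip_or_bp: str            # 1. Initial point / battle position
    heading: str             # 2. Heading from the IP/BP to the target (and offset)
    distance: str            # 3. Distance from the IP/BP to the target
    target_elevation: str    # 4. Target elevation
    target_description: str  # 5. Description of the target
    target_location: str     # 6. Target location (e.g., grid coordinates)
    type_mark: str           # 7. Type of mark (smoke, laser code, etc.)
    friendlies: str          # 8. Location of friendly forces
    egress: str              # 9. Egress instructions

# Fictional example values, for illustration only:
brief = NineLineBrief(
    ip_or_bp="IP ALPHA", heading="270, offset left", distance="8 NM",
    target_elevation="350 ft", target_description="armored vehicle in the open",
    target_location="grid 123 456", type_mark="laser code 1688",
    friendlies="south 800 m", egress="north back to IP ALPHA")
```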

The bottom line is that realistic verbal communication is the only way to ensure the effective elimination of threats.

Discovery Machine has successfully addressed the basic back-and-forth flow between a variety of air assets and ground-based personnel. Watch a video of our successes here. Now, with the support of ASTi, Discovery Machine will address the more difficult virtual talk-on training scenario. A talk-on is voice-guided control of an aircraft onto a final target. Simulating this type of interaction has traditionally been difficult; the main problems are misunderstandings, inadequate communications, and an inability to adapt behavior based on direct requests.

A computer, for instance, has no inherent notion of color, location, or size. It does not know what a prison or a mosque is, even though these contain features most people recognize immediately. Beyond physical details, there are numerous ways to describe exactly the same thing: no two people speak alike, and a talk-on is no different. A voice recognition system fluid enough to adapt to varied commands is a major requirement for accomplishing this goal. By combining Discovery Machine’s artificial intelligence (AI) technology with ASTi’s Construct, a realistic talk-on training prototype was created.

To make the talk-on capability a reality, Discovery Machine’s AI techniques were used to create adaptive behavior models. Each behavior model has three concurrent processes: one to maintain situation awareness, one to perform the behavior’s primary mission, and one to handle all verbal communications received. This structure was essential to creating a reactive behavior that can execute realistic tactics and respond appropriately to commands.
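To make that structure concrete, here is a minimal sketch of the three-process idea using plain Python threads and a shared situation object. The class and method names are illustrative assumptions, not Discovery Machine’s actual API.

```python
import queue
import threading
import time

class Situation:
    """Shared picture of the world, updated by perception and by received commands."""
    def __init__(self):
        self._lock = threading.Lock()
        self._facts = {}

    def update(self, **new_facts):
        with self._lock:
            self._facts.update(new_facts)

    def snapshot(self):
        with self._lock:
            return dict(self._facts)

class BehaviorModel:
    """Three concurrent processes: perceive, act, and handle verbal communications."""
    def __init__(self):
        self.situation = Situation()
        self.commands = queue.Queue()   # filled by the speech-recognition front end
        self.replies = queue.Queue()    # verbal responses to send back over the radio

    def situation_awareness(self):
        while True:
            # Placeholder for querying the simulated world (entity positions, sensor view).
            self.situation.update(last_scan=time.time())
            time.sleep(0.5)

    def primary_mission(self):
        while True:
            # Placeholder for choosing the next tactical action from the current picture.
            self.situation.snapshot()
            time.sleep(0.5)

    def handle_communications(self):
        while True:
            command = self.commands.get()                # blocks until speech arrives
            self.situation.update(last_command=command)  # perception shifts with the command
            self.replies.put(f"Copy: {command}")         # queue a radio acknowledgement

    def run(self):
        for task in (self.situation_awareness, self.primary_mission,
                     self.handle_communications):
            threading.Thread(target=task, daemon=True).start()
```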

Hierarchical View Showing the Logic of an Automated Discovery Machine MQ-9 Reaper

The situation awareness branch interprets the simulated world and tracks the aircraft’s visual surroundings. As commands arrive in the communication branch, the situation shifts and provides the virtual pilot with new information, so the pilot’s perception of the surroundings changes dynamically in real time. That information, paired with the current situation, feeds directly into the primary mission, letting the automated aircraft adapt and respond based on an evolving interpretation of the simulated world. As details such as object descriptions, locations, distances, and directions are given, the situation changes and the aircraft responds both verbally and by adapting its behavior in the scenario.
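Building on the sketch above, the following hypothetical function shows how a single talk-on cue ("the white one", "the building north of the road") could narrow the pilot’s candidate objects and drive a verbal response. The cue format and object fields are assumptions made for illustration.

```python
def apply_talkon_cue(situation, cue):
    """Narrow the candidate objects as the JTAC adds detail (illustrative only)."""
    candidates = situation.snapshot().get("candidates", [])
    if "color" in cue:
        candidates = [obj for obj in candidates if obj.get("color") == cue["color"]]
    if "bearing" in cue:
        ref, direction = cue["bearing"]   # e.g., ("road", "north")
        candidates = [obj for obj in candidates
                      if obj.get("bearing_from", {}).get(ref) == direction]
    situation.update(candidates=candidates)
    if len(candidates) == 1:
        return f"Contact, {candidates[0]['name']}"   # single match: pilot confirms
    return "Looking"                                 # still ambiguous: keep scanning
```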

Construct, a Voisus product, includes ASTi’s simulated radio environment, which delivers high-fidelity, realistic radio modeling. Construct’s speech recognition capabilities established the foundation for voice-enabled operations in this project, and using its speech recognition and text-to-speech capabilities to add interaction and communication to intelligent agents allows for realistic verbal communication. Construct interfaces seamlessly with the Discovery Machine Behavior Engine for VBS to provide a complete training solution: by relaying key information to Discovery Machine’s communication server, Construct enables the aircraft to use the transmitted information to adapt its behavior and respond in a realistic way.
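In rough terms, the integration works like a bridge: recognized speech goes to the behavior’s communication process, and the behavior’s reply comes back for text-to-speech playback over the simulated radio net. A hedged sketch of that glue loop follows; recognize_speech and speak_over_radio are placeholders for whatever the speech/radio toolkit provides and are not ASTi Construct’s actual API.

```python
def radio_bridge(behavior, recognize_speech, speak_over_radio):
    """Illustrative glue loop between a speech recognizer and a behavior model."""
    while True:
        text = recognize_speech()        # next recognized utterance from the trainee
        if text:
            behavior.commands.put(text)  # hand the command to the communication process
        while not behavior.replies.empty():
            speak_over_radio(behavior.replies.get())  # synthesize the agent's radio reply
```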

In an upcoming webinar on Wednesday, February 25th, Discovery Machine and ASTi will expand on these features and show how, with voice commands and AI, a realistic virtual aircraft can be controlled in a training simulation using talk-on communications. The webinar will include a live demonstration and discussion. Register here today!

Sign Up Today for Our FREE Webinar & See a Live Demonstration of Our Talk-On Close Air Support Capability in VBS3!
