
Human Machine Teaming in the Age of AI

By Guy Howard

Imagine you take your car to an AI mechanic for repairs. Instead of a human attendant asking you about the car, you talk to ChatGPT for a few minutes about the problems and it devises a solution. Automated robots then roll out and spring into action, fixing the issues in a matter of minutes instead of the hours you’re used to waiting during a typical visit. This automated application of AI systems can save humans time and effort while, in many cases, also performing tasks more accurately.

The distinction between active and passive tools has been discussed in cybersecurity for years. Passive tools monitor your network and provide information about weaknesses, threats, and intrusions, but do not act on that information; the final decision is left to the user. Active defense tools, however, can take action when they detect malicious activity. An active defense tool can block traffic, disable web services, or even restructure the entire network’s configuration. And it does all of this without needing human intervention.
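To make the distinction concrete, here is a minimal sketch in Python. Every name in it (Alert, notify_analyst, block_ip) is a hypothetical stand-in rather than any real product’s API: the passive handler only surfaces a finding for a human, while the active handler acts on its own.

```python
# Hypothetical sketch of passive vs. active defense -- not a real tool's API.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: str   # e.g. "low", "medium", "high"
    detail: str

def notify_analyst(alert: Alert) -> None:
    # Stand-in for a SIEM ticket, dashboard entry, or page to an on-call human.
    print(f"[ALERT] {alert.severity}: {alert.detail} from {alert.source_ip}")

def block_ip(ip: str) -> None:
    # Stand-in for a firewall or router reconfiguration call.
    print(f"[ACTION] blocking traffic from {ip}")

def passive_response(alert: Alert) -> None:
    # Passive: report the finding; a human makes the final decision.
    notify_analyst(alert)

def active_response(alert: Alert) -> None:
    # Active: take action immediately, with no human in the loop.
    if alert.severity == "high":
        block_ip(alert.source_ip)

alert = Alert("203.0.113.9", "high", "port scan detected")
passive_response(alert)  # informs a human
active_response(alert)   # acts autonomously
```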

So, you may think an active system is always preferable. Let’s return to our AI example. Now imagine you go to the doctor with an arm injury. You get an X-ray, which is examined by an AI. The AI says your arm is broken and you need surgery. Further, it’s going to perform the surgery then and there, without any human notification or intervention. In this case, maybe you’d prefer a passive AI system: one that forms a diagnosis but then relays that information to a human doctor, who makes a final assessment and treats the injury accordingly.

Cybersecurity is the next frontier in the battle between active and passive tools, and this time the tools are powered by AI. Numerous cybersecurity tools (including our own CYBERSPAN and NeuralNexus) apply AI models to protect networks. Some are active and some are passive. Neither is right or wrong, but users should consider the consequences of their choice. Active systems can respond in milliseconds, and in the cybersecurity domain milliseconds matter. However, if they get it wrong, they can cause major, unplanned, and unexplained disruptions for users of the network. Passive systems require a human response, which may be slower but allows a person to weigh ancillary factors that the AI may not be aware of.

This is why Human Machine Teaming is a critical concept in AI development. In particular, with the proliferation of ChatGPT and its many use cases, we must understand the consequences of automation and make conscious decisions about its implementation. The technology is revolutionary, but it’s also wrong… a lot. This doesn’t mean that we can’t use AI to automate tasks. It means that we must consider the risk involved in the type of decision before enabling an AI to take action. Fully autonomous systems are well suited to low-risk, recreational activities such as sports or gaming. But when there are life-or-death consequences, as in the military or health care, there must be a human in the loop to validate the decisions AI makes.
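One simple pattern for this is a risk-tiered gate: the system acts on its own for low-risk decisions and pauses for explicit approval on high-risk ones. The sketch below is an illustrative assumption about how such a gate might look, not a prescribed design; the risk tiers and the approval prompt are invented for the example.

```python
# Illustrative human-in-the-loop gate: low-risk actions run autonomously,
# high-risk actions wait for explicit human approval. The tiers and helper
# names are assumptions made for this sketch.
from enum import Enum

class Risk(Enum):
    LOW = 1    # e.g. a game move or a content recommendation
    HIGH = 2   # e.g. a medical intervention or a network shutdown

def require_human_approval(proposed_action: str) -> bool:
    answer = input(f"AI proposes: {proposed_action!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(proposed_action: str, risk: Risk) -> None:
    if risk is Risk.LOW or require_human_approval(proposed_action):
        print(f"Executing: {proposed_action}")
    else:
        print(f"Held for human review: {proposed_action}")

execute("recommend next song", Risk.LOW)    # runs without asking
execute("quarantine a subnet", Risk.HIGH)   # waits for a human
```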

An AI can only be valuable to a human if it produces explainable output. The human partner must understand how the AI arrived at a decision, and on what basis, before they can learn to trust it. This is no different from how humans interact with other humans. We don’t trust someone to make critical decisions without understanding why they decide the way they do. We must hold AI to the same standard.
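In practice, that can be as simple as returning the evidence alongside the verdict. The toy rules below are assumptions invented for illustration; the point is only the shape of the output: a decision paired with human-readable reasons a partner can audit.

```python
# Toy sketch of explainable output: the verdict comes with the evidence
# behind it. The rules and thresholds are invented for illustration.
def classify_traffic(features: dict) -> tuple[str, list[str]]:
    reasons = []
    if features.get("failed_logins", 0) > 10:
        reasons.append("more than 10 failed logins in the window")
    if features.get("new_geo", False):
        reasons.append("connection from a previously unseen location")
    verdict = "suspicious" if reasons else "benign"
    return verdict, reasons

verdict, reasons = classify_traffic({"failed_logins": 14, "new_geo": True})
print(verdict)             # "suspicious"
for reason in reasons:
    print(" -", reason)    # the human-auditable basis for the decision
```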