The Upside-down OODA Loop

How humans helped weaponize machines against humans

Hatteras Hoops
11 min read · Nov 3, 2024
Photo by Scott Rodgerson on Unsplash

Artificial Intelligence has taken the world by storm, shifting the Overton Window drastically in 2023. The ability to rapidly generate voluminous, plausible-sounding logic, text and audio-visual art exploded onto the scene, with many across industry asking how they might better harness the power of Large Language Models (LLMs) and the broader field of AI. While some argue the AI trend amounts to mere hype, debate and discussion have filled the virtual town square, and that debate tends to cast AI adoption as either wholly positive or wholly negative. How can we harness the good while avoiding the bad? Perhaps AI is so disruptive that 'old world rules' simply need to adapt to modern technological realities. An essential question remains: to what extent can machines be held accountable?

Security pros tend to warn that the sky is falling, that doom will most certainly follow the gloom. In practice, more than ever, we are hearing that the application of AI is not an 'either-or' risk but a 'yes-and' reality. With so many people exploring AI's utility, it is not surprising that adversaries are rapidly advancing their ends through a variety of AI means, including the operationalization of synthetic identities.

“The problem is an erosion of trust and impact to basic functions of society.”

For years, adversaries have developed and employed synthetic identities to disguise, extort, impersonate, manipulate and trick targeted people through sophisticated denial and social engineering attacks. The scalability of malicious synthetic personas will challenge our readiness to combat this novel use of AI. Machines only need tuning to effectuate attacks, and people are helping machines target us mortals.

AI technology and applications are as much threat as mitigation. Everyone in tech understands the double-edged nature of AI. So what is the antithesis to the emerging trend of synthetic personas and related identity exploitation? Where do we start, and how do our security practices need to evolve? What risk do synthetics really pose, and how is this a problem in need of solving?

Synthetic Identities and Personas Defined

Synthetic identities and personas are two separate yet related concepts. The distinction turns on a threat actor's ability versus capacity to exploit business operations: an identity supplies the ability, a persona supplies the capacity at scale.

Synthetic identities are artificially fabricated "people," often counterfeited or fictitiously generated through a series of manipulations of real identity processes or technologies: an ability to trick systems into granting unauthorized access. CyberArk defines a synthetic identity as a "counterfeit identity formed by combining a mix of genuine and false information, blurring the line between physical and digital characteristics that identify a human being." [1] Synthetic identities present threat actors with a novel and hard-to-mitigate method for exploiting (principally) the human domain across any organization. Threat groups that employ Impersonation-as-a-Service (IMPaaS) for access operations and information exfiltration are the most likely and most immediate beneficiaries of synthetic identity generation.

Synthetic personas are a scaling of capacity, allowing organizations to conduct research more effectively and to aim targeted messaging at role-based, occupational traits and characteristics: a capacity to generate and manage identities at scale. Definitions vary from marketing to security, so it is important to frame the definition from the threat perspective. Organizations that routinely leverage synthetic identities in security defeat operations are just as constrained by business realities as finance-oriented and public-facing organizations. A synthetic persona holds a one-to-many relationship with synthetic identities: one persona can spawn and manage many identities. More precisely, synthetic personas are the facet threat actors use to scale their security defeat activities, starting with identity operations (ID Ops).

“We must reverse decades of convincing ourselves that humans are the weakest link into a mindset that sees people as empowered as one of the strongest links in our security armor.”

From a threat lens, ID Ops employ impersonation tactics, techniques and procedures (TTPs) to evade detection, minimize attribution and gain unauthorized entry. With the advent of synthetic identities, threat actors can circumvent traditional credential-theft TTPs and move more quickly to lateral movement and privilege escalation. Whereas traditional Access-as-a-Service (AaaS) agents buy and procure existing identities, synthetics allow them to generate new ones.

Synthetic identities can also be seen as part of social engineering TTPs through the novel application of deepfakes and other media fabricated in real or near-real time. Both fabricated identities and pretexting techniques for social engineering attacks fall under the synthetic identity category. Both still come at a cost, given the time, focus and energy needed to maintain meticulous, tightly knit legends as part of a robust signature management plan. Blending fabricated and genuine observations can fool automated systems and human interfaces alike.

Synthetic personas, on the other hand, are templates for managing numerous synthetic identities across industries and regions. Synthetic personas make the needed types of synthetic identities repeatable and scalable. They streamline communication requirements, simplify the detailed development of legends and provide a methodology that limits the need for signature management planning. The application of synthetic personas makes good business sense: if you are a threat actor, it lowers the bar for entry and raises the temperature on the global stage. Applying synthetic personas is an optimizer for weaponeering access operations.
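To make that relationship concrete, here is a minimal, hypothetical sketch in Python of a persona as a template from which many identities can be derived. Every field and function name is an illustrative placeholder; the point is the one-to-many data relationship defenders need to reason about, not working tooling.

```python
from dataclasses import dataclass
from typing import List
import itertools

# Toy data model of the one-to-many relationship described above: one persona
# template can stamp out many individual identities. All names are hypothetical
# placeholders used purely for illustration.

@dataclass
class SyntheticPersona:
    role: str                    # e.g. an occupational, role-based trait
    industry: str
    talking_points: List[str]    # the repeatable messaging the template carries

@dataclass
class SyntheticIdentity:
    persona: SyntheticPersona    # every identity traces back to one template
    display_name: str
    contact: str

_serial = itertools.count(1)

def mint_identity(persona: SyntheticPersona, domain: str) -> SyntheticIdentity:
    """Derive one identity from a persona template; repeatable at scale."""
    n = next(_serial)
    return SyntheticIdentity(persona, f"Persona-{persona.role}-{n:03d}",
                             f"contact{n:03d}@{domain}")

if __name__ == "__main__":
    clerk = SyntheticPersona("accounts-payable clerk", "engineering",
                             ["urgent vendor payment", "executive approval pending"])
    batch = [mint_identity(clerk, "example.invalid") for _ in range(3)]
    print(len(batch), "identities derived from one persona template")
```

The takeaway is the shape of the model, not the code: detection and mitigation efforts that only look at individual identities miss the template level where the real scale lives.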

Synthetic identities and personas might not be new. What is new is the availability of artificial intelligence capabilities that automate and further streamline the process. But AI can also help us detect elements of these TTPs, equally at scale. Understanding the problem and the potential here is as important as addressing it.

Pandora’s Box: old concepts compounded by new technology

Synthetic identities and personas are directly related to an attack surface. They do not expand the attack surface so much as focus and broaden the penetrating fires and effects brought against the doors of an organization's exterior. AI is a key component increasing the sophistication of access operations: as a tool in social engineering attacks, as an accelerant for AaaS and as a mechanism expanding the applicability of IMPaaS.

The perfect storm of narrative and negligence lowers our horizon. The problem lies at the confluence of two ideas: that people are the weakest link, and that preventing illicit access is too hard, too rare an occurrence and too difficult to fund from the business. In reality, the problem stems from a lack of preparedness and prioritization, the very habits found in resilient organizations. Additionally, many have stood aside, or asked others to stand aside, in the AI race until controls and governance are put in place, further delaying our understanding of AI's strengths and weaknesses. Adversaries are adopting AI at a rapid pace. Relatively few organizations are adequately prepared for the scale of volatility, uncertainty, complexity and ambiguity (VUCA) events that most practitioners acknowledge will constitute this generation's greatest dilemma.

The problem is an erosion of trust and, closely related, an impact to basic functions of society. It is compounded by the financial reality that dysfunction in society is not private industry's to solve. It is easy to see why organizations feel no more compelled to step up than you or I do.

The opportunity loss, if left unaddressed, is a degraded organizational capability to make decisions. The most important aspect of an organization, as I have recently argued, is in fact its people and their ability to make informed decisions as one. We must reverse decades of convincing ourselves that humans are the weakest link and adopt a mindset that sees empowered people as one of the strongest links in our security armor.

Dealing with synthetic identities at scale will threaten trust as a top-down corrosive effect starting with decision-making and further distorting the relationship between people and machines. The time for prioritizing your people is now!

Trust in organizational decision-making

In many ways, and for a plethora of reasons, we have not actively contemplated where machines are emplaced in the observe, orient, decide and act (OODA) loop. We have incidentally replaced the trust instilled in leadership, experts and other humans with trust in a machine. It is natural, then, to see how so many decisions influenced and informed by machines could lead us toward computed outcomes with baked-in bias.

Trends in decision-making have become too data-driven, or at least aspire to be more quantifiable and logical. AI has been placed firmly in the orient phase of the OODA loop, with some bleed-over into the decide and (in machine-to-machine environments) act phases. [2] This limits nuanced understanding, offloads the critical-thinking muscle to a machine and transfers accountability for action to something incapable of accountability. The desire for data-driven, quantifiable elements of decision-making is righteous in many ways, but it limits the potential for active genius in human decision-making.

More deliberate consideration of AI emplacement will increase the potential for pragmatic, contextualized and human-informed decision-making. AI is particularly good at harvesting information and presenting data for analysis, where it is ultimately converted into intelligence and wisdom. From a friendly perspective, emplacing AI in the observe phase neatly puts the horsepower of 1,000,000+ interns in the position of chief data gatherer and compiler. AI expedites how data is fed forward, albeit with help. Humans still have a place in the observe and orient phases, but this arrangement squarely puts the human in the seat of contextualization and decisioning.

Through the threat lens, synthetic identities can be used to manipulate and overload a friendly OODA loop. If emplaced nearer the observe phase, AI presents an opportunity to detect true positives while human interfaces adjudicate false positives. If humans are exclusively responsible for observations, it is easy to see how our trust and bandwidth become overloaded, especially when synthetics mimic and deceive core matters of the human condition. The argument here is that we need to consider more deliberately where and why AI is emplaced at each phase of the OODA loop.
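As a rough illustration of that emplacement argument, here is a minimal sketch that assumes some upstream detector produces a hypothetical synthetic_score for each observation. AI handles observe-phase triage; a human adjudicates the ambiguous middle. Names and thresholds are placeholders, not a recommended implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Minimal sketch: AI sits near the observe phase, scoring incoming
# observations, while humans adjudicate the grey zone. The synthetic_score,
# thresholds and field names are illustrative assumptions.

@dataclass
class IdentitySignal:
    source: str             # where the observation came from
    synthetic_score: float  # 0.0 (likely genuine) .. 1.0 (likely synthetic)

def observe_phase_triage(
    signals: List[IdentitySignal],
    block_threshold: float = 0.9,
    review_threshold: float = 0.6,
) -> Tuple[List[IdentitySignal], List[IdentitySignal], List[IdentitySignal]]:
    """AI compiles and scores observations, auto-flags high-confidence
    synthetics, and routes grey-zone cases to a human for orientation."""
    blocked, human_review, passed = [], [], []
    for s in signals:
        if s.synthetic_score >= block_threshold:
            blocked.append(s)        # machine-confident true positive
        elif s.synthetic_score >= review_threshold:
            human_review.append(s)   # a person adjudicates the ambiguous cases
        else:
            passed.append(s)
    return blocked, human_review, passed

if __name__ == "__main__":
    demo = [
        IdentitySignal("new-vendor-onboarding", 0.95),
        IdentitySignal("video-call-participant", 0.72),
        IdentitySignal("employee-badge-renewal", 0.10),
    ]
    blocked, review, passed = observe_phase_triage(demo)
    print(f"blocked={len(blocked)} human_review={len(review)} passed={len(passed)}")
```

The design point is where the human sits: the machine does the high-volume gathering and scoring, and the person spends their limited trust and bandwidth only on the cases that genuinely need judgment.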

Personifying the risk

Since at least 2021 the market has seen an increase in the volume and sophistication of attacks using synthetic identities. The negative impact has been substantial, ranging from $25–40M per incident, with many incidents leading to the loss of employment for executives and the depreciation of market capitalization in the billions. The trend of consequences has curved abruptly upward. [3, 4] Furthermore, the ubiquity of artificial intelligence has only increased, which equates to an increased probability of threat actors taking action.

In February 2024 Arup, a British engineering company with a presence in Hong Kong, was deceived into transferring the equivalent of $25 million across more than ten transactions. AI was used to fabricate a near-real-time video conference call that a finance worker was asked to join; in the call, fabricated executives instructed the worker to make the transfers. Shortly after, the worker confirmed the fraud and filed a complaint with the local authorities. [4]

In other instances, real identifying artifacts such as licenses and passports have been used to generate synthetic identities that advance fraud schemes in banking and asset management. Social media companies are taking proactive measures to limit and block provocative synthetic identities while the U.S. Congress debates legislation to address this rising tide of manipulative information. Policy takes time, and the Overton Window is not shifting fast enough, even though two U.S. states outlawed the use of deepfakes as early as 2019. This risk, imagined by futurists since at least the 1940s, has suddenly become reality. What was once abstract is now quite tangible.

Threat groups have little to worry about. Administrative controls from government have had little impact on the trend. Public policy is lagging, and adjustments to tort law face an unaddressed attribution problem. The trend is steeply inclined upward with no clear countermeasures agreed upon. The European Union, at least, has passed the AI Act, providing a solid framework for regulating licit AI users. [5] It may not be a cure-all, but it is an attempt at progressing law that can deter, and be used to prosecute, criminals leveraging synthetic identities.

A framework for action

We need to actively assess the relationship between people and machines within decision-making in order to reorient our OODA loop. The OODA loop is how organizations make decisions and adjust action based on past decisions. Understanding the cognitive layer of the human domain is essential: in the end, nearly all endpoint computing and processing exists to serve human decision-making. How information comes to us matters. Defining the relationship between people, machines and the decision-making process is more important than ever.

Clarifying the roles of humans and machines requires analytic work that explores each decision-making phase and the relationship between humans and machines across the spectrum. Determining that relationship per phase, and the conditions under which decisions should be made, can be strengthened by outside-in modeling that closes down vulnerabilities in the decision-making process. I have not found a large body of work establishing a framework for defining decision-making's relationship with humans and machines (in particular AI). Here is my current proposed framework, with a rough code sketch after the list:

  1. Analyze your decision-making process through the OODA loop model specifically for vulnerabilities in identity provisioning and access determination
  2. Specify which OODA loop phases require human functionality
  3. Classify conditions for AI emplacement as either AI supporting a human function or AI supported by humans
  4. Model your OODA loop vulnerabilities outside-in (applying the CARVER Methodology [6])
  5. Establish mitigation courses of action, cost-benefit analysis and loss/gain analysis, recommending actions (or inactions) to decision-makers
  6. Implement decisions and evaluate synthetic identity events and related residual risk
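A small, hypothetical Python sketch of how steps 2 through 4 might hang together follows. The OODA phase names and the six CARVER criteria come from the framework above; every field name, role label and score is an illustrative assumption rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

# Hypothetical sketch of steps 2-4 above; labels and scores are illustrative.

class Role(Enum):
    HUMAN_LED_AI_SUPPORTED = "AI supports the human function"
    AI_LED_HUMAN_SUPPORTED = "Humans support the AI function"

@dataclass
class OodaPhase:
    name: str     # observe / orient / decide / act
    role: Role    # step 3: condition for AI emplacement
    vulnerabilities: List[str] = field(default_factory=list)  # step 1 findings

# Step 4: score each vulnerability outside-in (from the attacker's view) on the
# six CARVER criteria, 1 (low) to 10 (high); higher totals get mitigated first.
CARVER_CRITERIA = ("criticality", "accessibility", "recuperability",
                   "vulnerability", "effect", "recognizability")

def carver_total(scores: Dict[str, int]) -> int:
    return sum(scores[c] for c in CARVER_CRITERIA)

if __name__ == "__main__":
    observe = OodaPhase("observe", Role.AI_LED_HUMAN_SUPPORTED,
                        ["unverified identity artifacts accepted at intake"])
    decide = OodaPhase("decide", Role.HUMAN_LED_AI_SUPPORTED,
                       ["payment approval rests on a single video call"])

    # Example outside-in scoring for the decide-phase vulnerability
    scores = {"criticality": 9, "accessibility": 7, "recuperability": 4,
              "vulnerability": 8, "effect": 9, "recognizability": 6}
    for phase in (observe, decide):
        print(phase.name, "->", phase.role.value)
    print("CARVER total for", decide.vulnerabilities[0], ":", carver_total(scores))
```

The output of a pass like this feeds step 5: the highest-scoring vulnerabilities become the candidates for mitigation courses of action and cost-benefit analysis.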

There is more work to be done, more research to contemplate and deep thinking required to begin addressing the impact on trust stemming from synthetic identity abuse at scale. We do not need to wait for widely recognized mitigation frameworks to harden our decision-making processes. We cannot wait for more public conversation on the role of machines in decisioning, and we will likely never know all the nuances and facts.

Let’s start to be more deliberate in how we ask technology to enable decisions and where people should be instrumental in providing contextualized components of decisioning. The last thing we need to do is help weaponize machines against people due to hasty and careless action.

References:

[1] “Synthetic Identity,” CyberArk. [Online]. Available: https://www.cyberark.com/what-is/synthetic-identity/

[2] J. Johnson, “Automating the OODA loop in the age of intelligent machines: reaffirming the role of humans in command-and-control decision-making in the digital age,” Defence Studies, vol. 23, no. 1, pp. 43–67, Jan. 2023, doi: 10.1080/14702436.2022.2102486.

[3] N. Robins-Early, “CEO of world’s biggest ad firm targeted by deepfake scam,” The Guardian, May 10, 2024. [Online]. Available: https://www.theguardian.com/technology/article/2024/may/10/ceo-wpp-deepfake-scam

[4] H. Chen and K. Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer,’” CNN, Feb. 4, 2024. [Online]. Available: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html

[5] “EU AI Act: first regulation on artificial intelligence,” Topics | European Parliament. [Online]. Available: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[6] “Think Like a Green Beret: The CARVER Matrix,” SOFREP. [Online]. Available: https://sofrep.com/gear/green-berets-and-the-carver-matrix/

