New technology does not enter the world mature. Its maturity is the product of continuous creative effort, often hampered by setbacks and disappointments but sustained by the prospect of favorable outcomes. Both the setbacks and the achievements are gauged against independently verifiable benchmarks. The development of a range of commercial and military technologies powered by artificial intelligence (AI), including semi- or fully autonomous machines, is no exception. Indeed, the technical underpinnings of AI are rapidly becoming a matter of regional and global politics. Disproportionate attention is given to the world’s technological heavyweights, the United States and China, as they seek an innovative edge in their great power competition. But Gulf states like the United Arab Emirates (UAE) are moving at breathtaking speed in AI investment and adoption, with significant implications for the U.S.-UAE bilateral relationship.

Our hope here is to leverage the momentum in both states by anchoring further ideas for political, diplomatic, and military cooperation in the technical evolution of AI.

The UAE’s AI ambitions

The UAE has made clear its intention to become a world leader in AI by 2031 in its national AI strategy, which includes the goals of developing a “fertile ecosystem for AI,” creating a digital infrastructure “essential to become a test bed for AI,” and matching world-class research capabilities with industries, among others. It is already making headway in AI development: Just as OpenAI followed its release of ChatGPT with GPT-4, Abu Dhabi’s Technology Innovation Institute (TII) launched its “Falcon” large language model (LLM), a foundational LLM with 40 billion parameters, in March 2023. Whereas LLMs like GPT-3 boast 175 billion parameters (parameters being the numbers inside an AI model that determine how an input is transformed into an output), Falcon’s significance lies in its comparative thrift in size and computing power without a sacrifice in quality.
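To make the notion of a “parameter” concrete, the toy example below builds a model whose entire behavior is determined by 15 numbers. It is purely illustrative and bears no relation to Falcon’s or GPT-3’s actual architecture, which stack billions of such numbers into transformer networks.

```python
# Illustrative only: a toy model showing what "parameters" are. Falcon-40B
# and GPT-3 are vastly larger transformer networks; this sketch simply shows
# that parameters are numbers applied to an input to produce an output.
import numpy as np

rng = np.random.default_rng(0)

# A single linear layer mapping 4 input features to 3 output features.
# Its parameters are the weight matrix and the bias vector below.
weights = rng.normal(size=(4, 3))   # 12 parameters
bias = rng.normal(size=3)           # 3 more parameters

def tiny_model(x: np.ndarray) -> np.ndarray:
    """Transform an input into an output using the model's parameters."""
    return x @ weights + bias

x = np.array([1.0, 0.5, -0.2, 0.3])
print(tiny_model(x))  # the output is determined entirely by the 15 parameters

print(f"Toy model parameters: {weights.size + bias.size}; Falcon: 40 billion.")
```

Training adjusts those numbers so that the outputs become useful; scale the count up by roughly nine orders of magnitude and add the transformer machinery, and you are in the territory of systems like Falcon and GPT-3.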

Abu Dhabi’s AI strategy also dovetails with its plans to build up its defense industry. The UAE has enlisted the help of defense contractors such as L3Harris and Boeing to establish a machine learning hub and an uncrewed systems-focused Center of Excellence, respectively. The state-owned defense conglomerate EDGE Group has made parallel moves of late, acquiring a majority stake in the Estonian robotics firm Milrem and unveiling 11 new uncrewed and autonomous systems at the International Defence Exhibition and Conference (IDEX 2023).

The UAE’s efforts in AI and adjacent technologies come amid a broader move toward “digital transformation” across the Gulf subregion, and as calls for a regional Middle East and North African (MENA) science, technology, and innovation strategy come into focus. They also reflect the growing influence and ambitions of non-Western AI powers across the world, a club that includes Japan, South Korea, and India.

What AI means for U.S.-UAE relations

The UAE’s AI capabilities and ambitions should be considered in the context of U.S.-UAE bilateral relations. Although AI and broader technology ecosystems are heavily concentrated within the United States and China, the growing technological momentum in the UAE is likely to have significant ramifications for the region. Not only will the geopolitical fault lines produced by U.S.-China competition make smaller states like the UAE more significant for American interests in the region, but the country also offers potential for AI innovation that avoids the quasi-messianic visions to which some American developers have succumbed, a matter of both technical and political significance.

So, where are the U.S. and the UAE currently aligned when it comes to AI?

The UAE views its AI development as a long-term endeavor and as a pillar of the country’s standing in the 21st century, a century that Abu Dhabi believes will be defined by AI and technology innovation, areas in which the UAE is determined to be a leader. There are ample indications that the U.S.’s AI efforts are also geared toward the long term. The U.S. launched the National Artificial Intelligence Initiative to “ensure continued U.S. leadership in AI R&D.” Recent bipartisan legislation, such as the CHIPS and Science Act, and the export controls that the Biden administration imposed unilaterally and is now rapidly coordinating with allies also point to a very specific, long-term vision: technological innovation, outcompeting China, and maintaining America’s position within the global order.

There have, furthermore, been notable areas of military and political bilateral cooperation between the U.S. and the UAE. Militarily, the U.S. Naval Forces Central Command’s Task Force 59 — known for its rapid integration of AI with uncrewed maritime systems — and the Emirati Navy completed the two countries’ first ever bilateral uncrewed exercise in February 2023. Senior officers from the UAE also participated in the International Maritime Exercise in March 2023, which saw significant use of uncrewed and AI systems.

The U.S. and the UAE have also signed a joint statement on cross-border data flows that emphasizes the protection of citizens’ data and aims to make the two countries’ regulatory frameworks interoperable, enhancing commercial activity across borders and providing legal clarity for data-reliant operations.

How can these existing bilateral steps, coupled with a broader understanding of the geopolitical environment and the technical trajectory of AI systems, serve as a foundation for further U.S.-UAE cooperation?

Efforts at cooperation must be creative — what works for the United States in the context of AI may not always work for the UAE, and vice versa. The U.S. and UAE should use their initial areas of cooperation as springboards for joint AI-related efforts that leverage the distinctive attributes of each state.

The guiding spirit should be this: Generative AI systems, exemplified by ChatGPT, are remarkable but early and relatively immature forms of AI. That recognition takes what is implied by existing political and military cooperation, namely that interoperability of commercial data and the joint testing of defense technologies are mutually beneficial steps, and extends it to shared problems that require targeted and innovative approaches.

Recommendation 1: Foster joint scientific and academic research, projects, and workshops that address foundational issues in state-of-the-art AI systems.

A recurring problem for even the most advanced LLMs is that of hallucinations — plausible-sounding text that is nonetheless factually incorrect or nonsensical upon closer analysis. Meta’s Chief AI Scientist Yann LeCun believes this may be a result of a foundational problem in these models, namely that they lack an understanding of “the underlying reality that language describes.”

As the technology undergirding models like Falcon is already being considered for use in UAE government operations, and is being watched closely by the U.S. Defense Information Systems Agency, such technical problems ought to be addressed with rigor. This represents an opportunity for the U.S. and the UAE to bring together the talent each state possesses and leverage the momentum both are currently experiencing to mitigate or resolve this technical problem. This is not incompatible with Mussaad M. Al-Razouki’s recent recommendation to establish a “network of science centers of excellence” in MENA countries. The U.S. would do well to be a part of this effort by fostering targeted, cross-border intellectual exchanges, achieving an early foothold in a broader scientific endeavor.
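One concrete, low-cost place for such joint research to begin is shared measurement: agreeing on how hallucinations are detected and counted in the first place. The sketch below is purely illustrative; the questions, reference answers, and the `ask_model` stub are hypothetical placeholders, and real factuality evaluation requires far larger datasets and far more robust answer matching than simple substring checks.

```python
# Illustrative sketch of measuring a hallucination rate on a tiny factual
# QA set. Everything here (questions, references, ask_model) is a placeholder
# standing in for a real model and a real, jointly curated dataset.

QA_SET = [
    {"question": "What is the capital of the United Arab Emirates?",
     "acceptable": ["abu dhabi"]},
    {"question": "In what year was the UAE founded?",
     "acceptable": ["1971"]},
]

def ask_model(question: str) -> str:
    """Stand-in for a call to the LLM under evaluation."""
    canned = {
        "What is the capital of the United Arab Emirates?":
            "The capital of the UAE is Dubai.",          # a hallucination
        "In what year was the UAE founded?":
            "The UAE was founded in 1971.",
    }
    return canned[question]

def hallucination_rate(qa_set: list[dict]) -> float:
    """Fraction of answers that contain none of the acceptable reference strings."""
    wrong = 0
    for item in qa_set:
        answer = ask_model(item["question"]).lower()
        if not any(ref in answer for ref in item["acceptable"]):
            wrong += 1
    return wrong / len(qa_set)

print(f"Hallucination rate on this toy set: {hallucination_rate(QA_SET):.0%}")
```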

Recommendation 2: Establish a bilateral working group on the development of benchmarks used to test and evaluate generative AI systems for competencies in a range of domains. Ensure sufficient flexibility for the transfer of such work to other MENA states.

The rollout of generative AI-enabled products has only added to the recognition that the standards by which AI systems are evaluated ought to be the result of interdisciplinary coordination. This refers not only to oft-cited ethical concerns, but also to uncertainty as to whether generative AIs are merely tools to be used dispassionately or forms of human-like intelligence to be engaged with intimately. This uncertainty is not only technical; it has significant social and political ramifications, too. Because commercial innovation tends to set the agenda for AI research and development, this uncertainty is often carried over from the private sector to the public sector rather than addressed directly.

Several diverse, and by no means unified, voices have highlighted the difficulties involved in evaluating AI systems. AI expert Gary Marcus routinely throws cold water on the near-regular rush of bold claims regarding generative AIs’ capabilities, and he highlighted the importance of benchmarking in testimony to the U.S. Senate in May 2023. Computer scientist Kenneth Stanley, formerly of OpenAI, calls for researchers to take note of what LLMs can never do, to get a sense of what changes in “fundamental architecture” are required for their advancement. Computer scientists Arvind Narayanan and Sayash Kapoor expose the ineffectiveness of using standardized human tests to evaluate LLMs, while others, like Ernest Davis, highlight the background knowledge that humans take for granted in their foundational skills. Finally, recent research by Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo drives home the importance of understanding AI systems’ capabilities by suggesting that the perceived “emergent” abilities of LLMs are nothing more than a “mirage” produced by flawed methods of evaluation.
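The Schaeffer-Miranda-Koyejo point can be seen in a few lines of arithmetic: the same smooth improvement in a model looks either gradual or “emergent” depending on the scoring rule chosen. The per-token accuracies and model scales below are invented for illustration only.

```python
# Illustration of how metric choice can manufacture "emergence."
# The accuracy numbers and model scales below are made up.

# Suppose per-token accuracy improves smoothly as models scale up.
model_scales = ["1B", "7B", "13B", "40B", "175B"]
per_token_accuracy = [0.20, 0.40, 0.60, 0.80, 0.95]
ANSWER_LENGTH = 5  # an exact-match task requiring 5 correct tokens in a row

for scale, p in zip(model_scales, per_token_accuracy):
    exact_match = p ** ANSWER_LENGTH  # all-or-nothing scoring
    print(f"{scale:>5} parameters: per-token {p:.2f} | exact-match {exact_match:.3f}")

# Under exact match the scores sit near zero and then jump sharply at the
# largest scales, which reads as an "emergent" ability. Under per-token
# accuracy, the same underlying improvement is visibly gradual.
```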

This is an opening for the U.S.-UAE relationship. Recommendations on fortifying states’ domestic AI industries frequently coalesce around attracting and retaining highly specific machine learning expertise, the mathematics of which is seen as dauntingly difficult. But this is too timid for the momentum in both states and too uncreative for a diplomatic agenda. A deliberate effort to hammer out benchmarks for evaluating the basic competencies of generative AI systems, undertaken by a highly interdisciplinary working group of scientists, philosophers, and engineers, is a worthwhile goal.

These benchmarks would serve three purposes: First, to make clear what the nature of generative AI is and to explore our human responses to these remarkable machines in a sober-minded manner. Second, to enable commercial enterprises to adopt and integrate this technology in an informed way that is interoperable between the U.S. and the UAE. Finally, to allow a range of basic competencies to have rigorously worked-out benchmarks without pre-committing either state to sharing advanced, AI-enabled military technology that significantly exceeds the capabilities of the systems involved in joint military exercises.
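What would such a jointly developed benchmark look like in practice? The sketch below is one hypothetical shape it could take: a small, serializable task specification that any participating institution could author, vet, and exchange. Every field name, domain, and example value here is our own illustrative assumption, not an existing standard.

```python
# Hypothetical sketch of a shareable benchmark task specification.
# The fields, domains, and example content are illustrative assumptions,
# not an existing U.S.-UAE standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class BenchmarkTask:
    task_id: str
    domain: str                    # e.g., "commercial-data-interop", "arabic-nlp"
    prompt: str
    acceptable_answers: list[str]  # references used by the scoring rule
    scoring_rule: str              # e.g., "exact_match", "per_token_accuracy", "rubric"
    provenance: str                # which institution authored and vetted the task

task = BenchmarkTask(
    task_id="interop-0001",
    domain="commercial-data-interop",
    prompt="Summarize the data-transfer obligations in the attached contract.",
    acceptable_answers=["names both parties and the legal basis for transfer"],
    scoring_rule="rubric",
    provenance="hypothetical bilateral working group",
)

# Serializing tasks to JSON keeps a benchmark portable across agencies,
# vendors, and, eventually, other MENA partners.
print(json.dumps(asdict(task), indent=2))
```

Keeping the specification this simple is itself a design choice: the hard, genuinely interdisciplinary work lies in deciding which competencies and scoring rules belong in it.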

It is worthwhile to consider the broader use of the working group’s efforts within other MENA states, particularly as China flexes its tech muscle in Saudi Arabia. One of the most striking aspects of LLMs is also one of their most banal: They all share the same fundamental limitations. The implication is that a sufficiently refined set of benchmarks for LLMs can be widely applicable across state and corporate lines. Such benchmarks would not be final, given how rapidly research is proceeding, but they would soften the landing for the commercial adoption of these systems. The effort would also allow the U.S. to deepen its relationship with the UAE through a technical lens while signaling an openness to remain part of an active push for indigenous innovation in states, like Saudi Arabia, that take the generative AI boom and the broader landscape seriously.

Recommendation 3: Reconceive Responsible AI in military affairs as both technical and geopolitical.

Finally, the UAE should commit to Responsible AI principles in military affairs. The UAE did not sign the “Call to Action” on the responsible development, deployment, and use of AI in military affairs at the February 2023 REAIM Summit in the Netherlands, but such a commitment should be viewed as both technical and geopolitical. By accumulating Responsible AI capital, the UAE can offer the U.S. something it increasingly needs in the AI development space: perspective.

The UAE is a comparatively small state, but one with technological momentum, riding a wave of domestic development. The U.S. is a massive ecosystem with diverse commercial actors and enormous resources, but its AI agenda is often set by a handful of commercial entities that perpetuate uncertainty over whether AI is a tool or a form of intelligence. The UAE’s adoption of Responsible AI principles in its engagements with the U.S. can act as a form of leverage, cutting through the technological fog that massive concentrations of technological power produce. Responsible AI, understood as self-imposed restrictions on certain forms of AI research and development in military affairs, is consistent with innovation.

Conclusion

The technical capabilities of AI systems are a matter for continuing research and debate. But these technical matters are no longer the exclusive purview of machine learning and other AI researchers, nor are they of interest only to the world’s largest economies. The assumptions that defense and foreign policy analysts hold about technological innovation, and about the technical underpinnings of AI applications especially, are now themselves geopolitical matters.

As the United States re-orients its position in the Middle East and “West Asia” during a broader pivot to the Indo-Pacific, opportunities to deepen and expand relationships in these regions can begin in the technical realm and move outward. Here, we have argued that the United States and the UAE are poised to expand their own bilateral relations in exactly this fashion. Doing so would benefit both.

 

Vincent J. Carchidi is a Non-Resident Scholar with MEI’s Strategic Technologies and Cyber Security Program and an Analyst at RAIN Defense+AI. His work focuses on emerging technologies, defense, and international affairs.

Mohammed Soliman is the director of MEI’s Strategic Technologies and Cyber Security Program, and a Manager at McLarty Associates’ MENA Practice. His work focuses on the intersection of technology, geopolitics, and business in emerging markets.

Photo by STEFANI REYNOLDS/AFP via Getty Images

