top of page
Search

The Digital Divide: Architecture

  • Writer: JIPS
    JIPS
  • 3 hours ago
  • 3 min read

By Anna Prata Reichmann Tavares


ree

The conversation about the “digital divide” in artificial intelligence (AI) comes down to physical access. Who has advanced chips? Who controls the data? Who can attract and retain AI researchers and engineers? These three ingredients —compute, data, and people —are the fuel of modern AI. Without them, you can’t train or run advanced systems. Currently, they are heavily concentrated in a handful of countries, reshaping global power and “fracturing the world” between those with and those without. It is creating dependencies that echo the resource curse we have seen in oil economies. But physical access is only one part of the picture. There’s another divide that has gone largely unrecognized: architecture. Architecture in this context refers to the blueprints of the AI systems: the structure of the model itself, the way it processes inputs, and the environment or delivery platform it runs on. Similarly to control over physical access, control over architecture concentrates power in the hands of a few firms, raising concerns that the rules of the AI age will be written by those monopolies alone. 


Most of the world does not build its own AI systems. Of 81 large-scale models tracked globally, 43 were developed in the United States, 19 in China, and 6 in the United Kingdom, with just 10 emerging from the rest of the world. In short, a few countries, the US, China, and the UK, write the blueprints; everyone else imports them. When you adopt these systems, you don’t just import their capabilities but the assumptions and flaws of their model. Architecture travels with the systems; some fundamentals can’t be legislated away after the fact. 


Consider large language models (LLMs) like ChatGPT, Gemini, or Copilot. Unlike traditional software systems, which execute instructions of code in a separate controlled channel, LLMs blur the line between data and instruction. The very same text a model processes as data can also be interpreted as a command. This is by design, so you can ask a model to “summarize this email” or “write a program” in plain language without coding at all. It is also why these models are inherently vulnerable to prompt injection, where malicious instructions are hidden in text, which we have seen repeatedly in ChatGPT, Gemini, Copilot, and Einstein


OWASP now ranks prompt injection as the number one security risk in generative AI, and Microsoft has openly admitted it is one of the hardest security challenges in deploying AI. Researchers are experimenting and implementing defenses like “spotlighting,” which essentially tags external content in a prompt as untrusted. These guardrails are bolted on, not built in, so the risk remains. Thus, if you adopt the architecture, you are still adopting the risk. 


That’s the heart of the second AI divide: countries already left behind in computing are becoming dependent on architectures designed elsewhere, and with them, the vulnerabilities built into those designs. This architectural dependency is arguably harder to escape than the lack of access. Compute deserts, places with little or no access to advanced chips or data centers, could be filled with investment, subsidies, or other partnerships. But once a system’s blueprint is adopted globally, it becomes entrenched; nations and enterprises must then operate within a design they cannot fully scrutinize or readily correct. 

Enterprises and governments need confidence that adopting AI systems does not also mean importing vulnerabilities designed elsewhere. They need architectures that can contain and isolate risk. They also need systems that can adapt to local contexts, languages, legal frameworks, and cultural values. These two requirements are inseparable: without adaptability, security remains incomplete.


Granted, building secure and flexible systems is not easy. It requires major investment in research & development, computing, and energy, but autonomy in AI architecture is essential. The next frontier of AI security will not be defined by patching flaws, but by questioning the blueprints themselves and designing delivery platforms that embed security from the start.


That means building modular, jurisdiction-specific AI infrastructure that prioritizes sovereignty and control. This is not to race Big Tech on scale, but to make AI usable, trustworthy, and adaptable. Without it, much of the world risks being locked into architectures they did not design and being locked out of a chance to secure their own digital future.



ree

About the Author: Anna Prata Reichmann Tavares is a Master of International Affairs candidate at the School of Global Policy and Strategy, specializing in International Business & Management. She was born in San Diego, and after living 9 years abroad in Brazil and India, she is happy to be back. Her focus is on the intersection of technology and strategy and how we can prepare for the new wave of AI policy and security.


 
 
 
bottom of page