OSINT in Authoritarian Environments

Open-source information in authoritarian contexts cannot be approached with the same frameworks often applied in more open or democratic societies. The surface may look similar—videos, statements, local news—but the structure beneath is entirely different. In places like Iran, public data is rarely just there. It is shaped, constrained, or released with intention—whether by the state, affiliated actors, or the informal networks that operate under pressure.

For researchers unfamiliar with the language, or detached from the cultural and political rhythms of the country, this difference is rarely visible. And when the analyst comes from a third country—particularly one that interprets Iran through the filter of Western liberal values, or reduces its internal dynamics to a binary of ‘good Iranians’ versus ‘bad regime actors’—the distortion runs even deeper.

Open-Source in Closed Societies

Most people in authoritarian states do not suffer from a lack of information—they often suffer from too much of the wrong kind. The information space is deliberately saturated with low-quality signals, contradictory statements, and content designed to confuse more than inform. The challenge for the analyst, then, is not access. It is filtration.

This process—slow and often uncertain—only becomes possible under certain conditions which, in my view, must be met if the analyst is to produce work that is both useful and reliable. First, through lived experience. Not as a tourist or a visiting expert—although even that is better than no visit at all—but ideally through meaningful time spent inside the society. Second, through a solid grasp of the country’s historical memory—not just the official version, but how that history is internalised, contested, or neglected by its people. And third, through sustained attention to the political landscape over time. Enough to know which groups align with which interests. Enough to trace where a video likely originated, or which Telegram channel tends to push which line—or is more likely to amplify certain narratives. Without these layers, open-source analysis in such contexts is not really analysis—it is speculation, highly prone to distortion and error.

A few years ago, I was teaching digital OPSEC to a journalist whose work was increasingly focused on a country the West routinely labels as “unfriendly”. We drifted off-topic at one point, and I ended up laying out the same three principles I mentioned earlier—the preconditions for credible open-source work in closed societies. He was genuinely sharp and intellectually confident, and he pushed back, arguing that a good analyst could overcome those barriers with time, rigour, and methodology. Then an example came to mind.

I had recently come across a photo from Iran. A quiet street—narrow, a bit worn, probably in the poorer outskirts of a provincial city. A small group of people standing around, not much happening. I showed him the photo and asked what he could observe. He studied it for a while but struggled to find anything significant. To him, it was just another street scene. But the photo told a very specific story—if you knew how to read it. The group was gathered in front of a now-closed Christian church. Nothing marked it clearly as a church. The building looked like any other apartment block. The very small signage was in Persian, modest, even deliberately discreet. Unless you could read the language and intuit the context—both linguistic and urban—you would never know what you were looking at.

Since then, I have used this as a reality check. A kind of baseline. I take an image—maybe from Google Street View, maybe from social media, often something deliberately mundane—and I ask the analyst to describe what they see. Who lives there? What kind of work do they likely do? What are their concerns—immediate, long-term, imagined? Then I pull a second image. From a different class, different part of the city, sometimes even a different province. I ask the same questions again. And here is the thing: if both answers sound too similar—if the language is generic, if the distinctions feel cosmetic—then the person is probably not ready to form a reliable opinion on that demographic. Or that country.

With the advent of generative AI, this issue has only become more critical. I have personally come across photos—apparently of historic events—that, at first glance, seemed entirely plausible. Nothing looked off. The composition, the clothing, even the atmosphere felt right. But after reading the comments or doing a bit of research on the context, I was surprised by how many of these images turned out to be AI-generated. Sometimes they were loosely based on real photographs. Other times they were simply fabricated—constructed out of thin air, with no grounding in any real event.

As someone who is not a historian, I find it almost impossible to distinguish between real and AI-generated historical imagery based purely on visual cues. And those who can spot the difference—usually historians or specialists—do not do it because the AI has failed in some obvious way. They notice the subtle things. The kind of weapon a soldier is holding that would not have been in service at that point in the war. Or a recreated version of a well-known photo, where the body language has shifted just enough—the soldiers who were once talking now seem to be arguing. These are not glitches; they are narrative distortions. And so, for someone like me, it would be misleading—maybe even dangerous—to assume I can authenticate an image just because the person in it does not have ‘unusual’ fingers or their eyes look ‘normal’.

Anonymity and Risk for Sources

One of the fundamental challenges in collecting open-source information from authoritarian environments is the issue of anonymity—often a necessity, but one that comes at a cost to verification. Most individuals sharing sensitive information do so under fake names, using temporary or unverifiable accounts. It is understandable. The personal risk is high—detention, surveillance, professional consequences, or worse. But from an analyst’s point of view, this anonymity makes it extremely difficult to assess the credibility of the source. There is no track record to cross-check. No institutional affiliation. No easy way to distinguish genuine whistleblowers from opportunists or actors trying to seed disinformation or distract the public.

On the other hand, there are also cases where individuals—often brave and politically conscious—choose to publish under their real names and real accounts. That does not always make verification easier. For example, a university lecturer might publicly claim they have been placed on unpaid leave, detained, or threatened due to political expression. But unless the individual is already well-known, there are rarely any secondary or official sources that can confirm or deny the claim. In such situations, an analyst unfamiliar with the local dynamics might take everything at face value, whereas someone who knows the environment well could find indirect ways to verify—usually through a combination of background knowledge, informal contacts, and sometimes HUMINT.

Those additional steps—moving from OSINT, WEBINT and SMI to HUMINT—are not only methodologically different; they require something else entirely. At a minimum, fluency in the local language. But even that is not enough. Not being a citizen of the country, no matter how well one speaks the language, almost always limits the kind of trust or access one can build. Ordinary people in repressive regimes will be reluctant to share any information with anyone, let alone a foreigner.

In authoritarian contexts, this dynamic is not just a practical limitation—it carries serious risk for the individuals on the ground. When locals share politically sensitive information with a foreign citizen, even informally or through private messaging, they can be exposed to severe consequences. In some cases, such acts can be framed as collaboration with hostile intelligence services or foreign agents. Depending on the profile of the case and the political climate, this can result in long-term imprisonment or even the death penalty. The foreign analyst or journalist might remain outside the reach of local laws—so long as they are not physically in the country—but the person who shares the information does not have that protection.

Needless to say, authoritarian regimes often accuse innocent people of such crimes not because they believe them to be guilty, but in order to instil fear—spreading the sense that anyone, at any time, could be next. In such systems, if even the slightest pretext exists—an encrypted message, a foreign contact, a political tweet—it is almost certain that the individual will face serious consequences. Courts, where they exist, are rarely fair. And in many cases, there is no due process at all.

Conclusion

Not all OSINT tasks are created equal. Some require a completely different set of skills—both technical and contextual. Tracking aircraft, monitoring maritime movements, scraping WHOIS data, or tracing digital footprints might be enough for the kind of OSINT work that fits neatly into a short report designed for mass attention, but when the subject is a repressive regime—when the context is authoritarian—the standard toolkit will barely get you a step or two in. Beyond that, what is needed is not just technical fluency, but deep, long-term familiarity with the demographic in question. And more than that: a duty of care. A recognition that the people involved—those who share, speak out, or document—may be risking everything just to assert a fragment of agency over their lives and their country's future.

If an analyst does not understand basic concepts like plausible deniability, or is unfamiliar with HUMINT OPSEC principles, they should think seriously before engaging with politically exposed communities under authoritarian rule. This is not about gatekeeping. It is about realism. The ethical burden in such spaces is not abstract—it is real, personal, and often irreversible. Working in or around these contexts is not like solving a puzzle. It is more like crossing a minefield. An analyst who is unfit for that terrain is likely to do one of two things: produce a distorted, misleading account of what is happening—or put someone’s life at risk. Neither outcome meets the standard of professionalism. And neither should be acceptable.