Most of the debate around AI and information manipulation focuses on deepfakes of real people. But the more significant long-term risk is not synthetic versions of real people. It is synthetic people who never existed at all. A fake journalist has no original photograph to match against. There is no thread to pull.
Engineer · Adversarial Researcher · Bootstrapper
I build tools
state actors and platforms would
rather not exist.
I'm Amaury Lesplingart, a developer who ended up doing adversarial research on state actors and platforms.
Who I am
& how I got here.
I started as a developer. Before any of this, I was building SaaS products, running small agencies, doing IT for a radio station. A bootstrapper by conviction, more interested in shipping things than pitching them.
The shift came by accident. During the first COVID lockdown, I ended up in a Discord server where journalists were debunking false claims in real time. I stayed, started contributing, and together we built Journalistes Solidaires: a live open newsroom where anyone could follow along. It wasn't a plan. It was a reflex, and eventually the seed of CheckFirst, a Finnish adversarial research company I co-founded.
What followed was five years of building things in the open, for the community. The common thread: if something is happening online and no one can see it clearly, build something that shows it. Recommendation audits run from real households across eleven countries. Propaganda networks tracked in real time, hourly. Investigations that ended up in European Commission proceedings. Policy frameworks now used in government briefings from Brussels to Berlin.
The approach has always been the same: build it in the open, publish the methodology, and make the findings impossible to deny.
From the
field.
All posts →
LLMs trained on poisoned corpora are the next frontier. What builders and researchers need to know to defend the information space before the next election cycle.
Most of what I build is designed to expose something uncomfortable. So this is a different kind of post. My father-in-law has a bande dessinée collection. A serious one. At some point the question became: which albums does he already have? I built something to answer it.
Things I
actually built.
From platform monitoring tools and propaganda trackers to OSINT training platforms and policy frameworks: a cross-section of years of adversarial research engineering.
2026
2025
Unveiled the extent to which the Kremlin-linked entity SDA exploited Meta platforms to disseminate propaganda, revealing that Meta accepted payments for Russian propaganda despite sanctions.
Real-time tracker of the Russian Pravda propaganda network: 3.7M articles monitored across 28+ countries, updated hourly. Reverse-engineered the Pravda web API and released the full dataset publicly. Shows how Kremlin content is injected into Wikipedia, AI chatbots, and X.
Interactive OSINT training platform for analysts and researchers. Trainers create custom information manipulation scenarios or use professional templates. Participants work through realistic multi-platform simulations. No tracking, no passwords. Used by democratic institutions and civil society.
A common framework for researchers, civil society, and regulators to categorise and report Digital Services Act infringements, creating a shared language and interoperable reporting format for the European ecosystem.
An interactive dashboard analysing Community Notes data from X. Tracks global distribution, fact-checking source usage, note visibility and responsiveness, author patterns, and AI-detected content trends across languages. Updated twice daily, with filters by language and timeframe.
Updated EU DisinfoLab’s Impact-Risk Index to reflect the latest advances in AI and coordinated inauthentic behaviour. I also built the accompanying automated Impact Calculator, freely available to the community, standardising how researchers assess the reach and severity of individual hoaxes.
2024
Exposed a pro-Russian campaign designed to exhaust fact-checkers: 800+ organisations targeted, 2,400+ tweets, 200+ emails. The campaign timed its attacks to major events, including the Paris Olympics and elections, and generated 250+ fact-check articles amplifying the fake assets it had created.
Systematic audit of political ads on Meta evading moderation via character hiding and word obfuscation, reaching 3M+ accounts across Italy, France, Germany and Poland. Directly cited in EC DSA proceedings against Meta. A separate investigation from Facebook Hustles, focused on political information manipulation.
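Character hiding of this kind typically swaps Latin letters for visually identical Unicode look-alikes or inserts invisible characters so keyword filters miss flagged terms. A minimal sketch of the counter-technique (the confusables mapping below is a small illustrative subset I chose for the example, not the audit's actual tooling):

```python
import unicodedata

# Illustrative subset of look-alike characters; real confusables
# tables (e.g. Unicode TS #39) are far larger.
CONFUSABLES = {
    "\u0430": "a",  # Cyrillic а
    "\u0435": "e",  # Cyrillic е
    "\u043e": "o",  # Cyrillic о
    "\u0440": "p",  # Cyrillic р
    "\u0441": "c",  # Cyrillic с
    "\u0445": "x",  # Cyrillic х
    "\u03bf": "o",  # Greek ο
}
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def deobfuscate(text: str) -> str:
    """Normalize a string so obfuscated keywords match plain ones."""
    # NFKC folds compatibility forms (fullwidth letters, ligatures, ...)
    text = unicodedata.normalize("NFKC", text)
    out = []
    for ch in text:
        if ch in ZERO_WIDTH:
            continue  # drop invisible separators inserted mid-word
        out.append(CONFUSABLES.get(ch, ch))
    return "".join(out).lower()

# "electiоn" hiding a zero-width space and a Cyrillic "о"
assert deobfuscate("elec\u200bti\u043en") == "election"
```

Running the filter's keyword list against `deobfuscate(ad_text)` rather than the raw text is what defeats this class of evasion.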
Commissioned by Mozilla, I audited the ad transparency frameworks of 11 major tech platforms (Google, Apple, Microsoft, Meta, TikTok, X, Pinterest, Snap…) ahead of the 2024 elections. Findings featured on CNBC. Named X as the worst offender.
A deep dive into Amazon’s recommendation engine, uncovering content amplification mechanisms and how users get funnelled into dubious narratives. Presented at Infox sur Seine 2024, Paris.
Uncovered a coordinated network of Facebook pages linked to the far-right AUR party running 3,640 political ads, reaching an audience of 148 million, in systematic violation of Meta’s ad policies and Romanian electoral law. Cross-platform coordination also identified on TikTok and Google Ads. Cited by TechPolicy.Press, DSA Observatory, and Candid Technology.
2023
Month-long investigation exposing a large-scale scam operation on Facebook: 1,500+ ads across a network of fraudulent media sites, reaching 3M+ users in 12 European countries in just three weeks. Triggered formal European Commission proceedings against Meta under the DSA.
2022
A monitoring tool for political advertising around the 2022 French presidential elections on Meta platforms. Tracked candidates, parties, and campaign themes: who pays for the ads, who broadcasts them, which audiences and territories are targeted, and how much money is involved.
2021
Mini-computers deployed to real households across 11 European & African countries, tracking what Google, YouTube, Twitter, Reddit and Google News push algorithmically. Fully open methodology, so platforms can’t deny the data. Started in Belgium, now active across two continents.
Working at the
heart of Europe's
FIMI response.
The nature of this work means you end up in unusual rooms. Over five years I've co-authored reports with national agencies, supplied technical data to regulatory bodies, built tools that ended up in government briefings, and sat at the table with institutions shaping responses to foreign information manipulation.
None of that was planned. It followed from building things in the open and making them impossible to ignore.
Building a Common Operational Picture of FIMI
Together with EU DisinfoLab, VIGINUM, CASSINI, the German Federal Foreign Office (Auswärtiges Amt), the EEAS and DFRLab. We contributed to applying VIGINUM's Information Manipulation Set (IMS) framework to five major Russian operations: Doppelgänger, Media Brands/RRN, Undercut, Storm-1516, and Overload.
Authored work
& cited research.
They've written
about the work.
Featured interview on the Mozilla ad transparency audit of 11 major platforms. Named X as the worst offender. Published ahead of the 2024 global election cycle.
Read article →
Bloomberg cited findings on coordinated Russian influence campaigns targeting European audiences, published in January 2025.
Read article →
Politico EU covered the findings on coordinated pro-Russian advertising campaigns persisting on Meta platforms despite ongoing European Commission DSA proceedings.
Read article →
One of Germany's most prestigious newspapers covered the Operation Overload research, exposing why Russian actors systematically target European fact-checkers to overwhelm and discredit them.
Read article →
El País cited the research on Russian information manipulation operations targeting EU political processes, published in July 2025.
Read article →
Harvard's Nieman Journalism Lab covered the Mozilla ad transparency audit, highlighting findings on the structural inadequacy of platform ad repositories for researchers.
Read article →
Video interview for NZZ on how analysing medals awarded to members of Russian intelligence services reveals the hidden structures of the FSB, offering a rare look behind the facade.
Watch video →
Le Figaro covered the research on how polarised debates, fake polls, and chatbots are being used by pro-Russian operations to target France as their primary European audience.
Read article →