Associate Researcher with Prof. Hany Farid (current position)
Research, Teaching and Press Coverage
Limits and geopolitics of algorithmic content moderation during the COVID-19 pandemic, an essay published with the Brookings Institution.
An analysis of YouTube's promotion of conspiratorial content.
Paper release accompanied by in-depth coverage in the New York Times.
Interviews featured in Le Monde and the BBC.
Over 15 months, we monitored 8 million "Watch Next" recommendations seeded from YouTube's most popular informational channels, and developed an algorithm to estimate, for each video, the likelihood that it promotes a conspiracy theory. This research audits YouTube's claim to have reduced the spread of conspiratorial content, and provides transparency on what the company chooses to demote and what it considers legitimate.
Uncovering physiognomic filter bubbles on TikTok.
Featured in BuzzFeed, Vox and Wired UK.
This anecdotal evidence was not backed by systematic research, but it nonetheless raises important societal questions about algorithmic bias and the risks of engagement-driven social media recommendations.
Algorithmic Fairness and Opacity Group member and fellow.
AFOG is a weekly research exchange between Berkeley academics and Google's Trust and Safety team. Over the past two years, we have developed new ideas, research directions, and policy recommendations around issues of fairness, transparency, interpretability, and accountability in algorithmic systems.
Instructor for Applied Behavioral Economics for Information Systems (Guest Lecturer and GSI), taught by Professor Steve Weber.
The class explores systematic cognitive biases: how they shape the way we process information, and how they influence the use and design of technology.
Master's thesis: a field review of Sniper Ad Targeting. We investigated whether and how an ad can be tailored to, and delivered to, a single, specific individual. We analysed the risks this malign practice poses to society, as well as the legal frameworks around it. Advised by Professor and Dean Deirdre Mulligan. Introductory video here.
In-depth analysis of a Pretrial Risk-Assessment Tool. These algorithmic systems are becoming ubiquitous in the U.S. for determining whether a defendant should be detained before trial. We analysed the most commonly used of these tools (the PSA) and pointed out serious shortcomings in its algorithmic and socio-technical design. We built on these insights to propose an alternative system design. Research presented at the Information Ethics Roundtable at Northeastern University.
Cybersecurity consulting with a civil-rights NGO from Central America, helping the organisation anticipate and defend against cyber threats and state surveillance.
Investigation of a digital hate-speech campaign in the Central African Republic with the Human Rights Center.
UC Berkeley School of Information: Master of Science
Transdisciplinary perspectives on the societal, legal, and ethical consequences of technology. (MIMS program)
Télécom Paris: Engineering degree (undergrad + MS)
Two years of intensive preparatory classes in mathematics and physics for Télécom's competitive admission exam. The Télécom degree provides a broad technical understanding of computational systems and networks. Double-degree Master's with Eurecom, with a specialisation in Data Science.
Experience in Industry
Algorithm Designer @Bloom (2018): Designed an influence-ranking model for social media posts.
Algorithm Designer @Jalgos (2016): Designed and implemented data-intensive algorithmic solutions for Fortune 500 companies.
Freelance Developer (2013-17): Several projects involving discrete optimisation, resource allocation, and semantic web technologies.