Fire dances across the human timeline, from the earliest hearths to last New Year’s fireworks. But in the eyes of some AI systems, the flames of celebration cannot be distinguished from the fires of oppression: a dancing ember is a dancing ember, pixel by pixel, data point by data point. This flattening of nuance is at the heart of Not All Sparks Are Equal, a new video mapping installation that explores how technological perception often strips away the vital context that gives human experience its meaning.
In Catalonia, fire has two dramatically different public faces. On one side stands the correfoc, a beloved tradition dating back to medieval times in which dancers dressed as devils carry spinning, sparking fireworks through crowded streets. These “fire-runs” represent a collective reclamation of public space through joyful cultural celebration, embodying community strength and Catalan identity. On the other side stand police charges against protesters, where tear gas canisters, flash grenades, and rubber bullets transform public squares into zones of state-enforced order, often leaving garbage containers in flames. Here, fire becomes not a symbol of communal joy but a tool of control and suppression, and over the past five years Catalonia has seen repeated examples of this use of force by the state.
Through this project, we’re exploring how algorithms flatten complex realities into simplistic, clean-cut binary categories: safe/dangerous, orderly/chaotic, permitted/prohibited.
A correfoc dancer and a protester fleeing tear gas may generate identical heat signatures and movement patterns to an algorithm, but the gulf between these experiences could not be wider: one celebrates cultural identity while the other fights for rights. The deeper threat here isn’t just technical misclassification; it’s cultural submission disguised as protection. As correfocs once danced defiantly against Franco’s bans, we must now refuse to let them be domesticated by algorithms. This time, the resistance is not against laws but against software: silent flags of permission formulated in the bureaucratised language of safety.
To modern surveillance algorithms, profoundly different scenarios register as identical: chaotic crowds, heat signatures, unpredictable movements, and bright flashes against the night. The algorithm, blind to context, sees only potential disruption – a threat to be managed. This algorithmic blindness isn’t accidental but structural. Developed within Silicon Valley’s monocultural framework and trained on Western datasets, these systems impose a binary worldview that treats order and chaos as opposing forces rather than interconnected elements of social expression. Trained on datasets that prioritise simple perspectives and binary categorisations, they may perform well at tasks like factory optimisation but fail terribly at grasping these nuances. The projection installation highlights this by simulating algorithmic readings over authentic footage: “ERROR: 98% match — Riot detected (confidence: 0.92)” flashes over scenes of correfoc participants, while identical confidence scores appear over footage of police charges.
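To give a concrete sense of the kind of context-blind readout the installation projects, here is a minimal sketch of such a classifier in Python. All feature names, weights, and labels are hypothetical illustrations, not the project’s actual software; the output string simply mirrors the format of the projected message quoted above.

```python
# Minimal sketch of a context-blind "riot detector" readout.
# All features, weights, and labels here are hypothetical illustrations;
# this is not the installation's actual software.
from dataclasses import dataclass

@dataclass
class Scene:
    name: str       # the human context, never consulted by the classifier
    heat: float     # normalised heat-signature intensity, 0..1
    motion: float   # normalised erraticness of crowd movement, 0..1
    flashes: float  # normalised frequency of bright flashes, 0..1

def classify(scene: Scene) -> str:
    """Collapse pixel-level features into one binary label."""
    confidence = 0.5 * scene.heat + 0.3 * scene.motion + 0.2 * scene.flashes
    label = "Riot detected" if confidence > 0.5 else "Normal activity"
    return f"ERROR: {confidence:.0%} match — {label} (confidence: {confidence:.2f})"

# Two profoundly different realities that look identical to the algorithm:
correfoc = Scene("correfoc in a Barcelona street", heat=0.95, motion=0.9, flashes=0.9)
charge = Scene("police charge against protesters", heat=0.95, motion=0.9, flashes=0.9)

for scene in (correfoc, charge):
    print(f"{scene.name}: {classify(scene)}")
    # Both lines print the same verdict; the scene's name (its meaning)
    # never enters the computation.
```

Both scenes yield the same printed score, which is precisely the flattening the installation makes visible: nothing in the pipeline can tell a celebration from a confrontation.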
Project Page | Domestic Data Streamers | Instagram



