This article was first published on PetaPixel, April 12, 2026:

This March, Spanish conceptual artist and photographer Joan Fontcuberta published a new book in Italy. Immagini Latenti concludes with a chapter on AI and photography, referencing the debates surrounding Boris Eldagsen’s submission of an AI-generated image to the Sony World Photography Awards in 2023 and Miles Astray’s submission of a photograph in the AI category of the 1839 Award in 2024.

When we received our copies, we were struck by the context in which our images appeared. Fontcuberta frames AI-generated images as “Second-Generation Photography” and proposes the term “Algorithmic Photography”.

Over the past two and a half years, we have repeatedly encountered similar lines of reasoning. They are not only logically inconsistent; they are also unhelpful, both for photography and for democratic societies. For that reason, we find it necessary to respond collectively to what we consider a rudimentary theory. Below, we contrast excerpts from the book (used with permission) with our own perspective. The original Italian text has been translated into English using multiple AI tools (ChatGPT, Gemini, DeepL).

The excerpts by Joan Fontcuberta are from his new book Immagini Latenti.
 

I. The Naming Problem

Joan Fontcuberta:

When I got married, some friends gave me a lemon tree […] We planted it and it grew happily. […] after twenty-five years […] the lemon tree began to produce oranges. […] A friend who is an expert in citrus fruits […] gave me a plausible explanation, […] our lemon tree had almost certainly been grafted onto a branch of an orange tree, and over time it began to reveal its true hybrid nature—non-binary and ambivalent.

Personally, I preferred to keep thinking that the tree had found the courage to come out of the closet. All the more so because it seemed to me a magnificent metaphor for what is happening to photography today, which is also going through a phase in which it is about to come out.

Let me explain. For two centuries, we have attributed to photography a descriptive accuracy of reality that guaranteed absolute documentary fidelity. Now, however, algorithmic photography is blending with optical photography, and we no longer know which way to turn.

Immediately, we encounter a semantic and terminological problem. There are photographic images produced by cameras and photo-optical recording systems. And there are others—apparently photographic—produced through generative AI visualization systems.

The former are children of chemistry and light; the latter of computing and darkness. We must therefore begin to decide whether both types of image should be considered photographic.

If we focus on the processes involved, it is obvious that they are different kinds of images. Yet the difficulty of finding a word capable of classifying photorealistic representations of algorithmic origin weakens the decisiveness of that answer.

These are images without a real referent—what we might call nemotypes.

Some have proposed the term promptography, because such images originate from a prompt—that is, natural-language instructions given to a system in order to obtain the desired photographic result.

There have been other attempts, such as syntography, but none have prevailed.

When photography was shaken by the arrival of digital technology, it became necessary to specify that there had been a previous form to which a distinguishing adjective was now added: we had analog photography—or photochemical photography—versus digital photography. At that time, there was no need to invent or assign a specific new name, and nothing disastrous happened. Therefore, we could probably proceed in the same way now and still understand one another perfectly.

Boris Eldagsen:

Fontcuberta clearly recognizes the distinction between camera-made and AI-generated images at the level of process – but then argues that this distinction ultimately does not matter.

The problem is: it matters. Considerably.

A photograph is made by light bouncing off a real thing and hitting a sensor. An AI image is made by a computer calculating what a plausible image would look like, based on patterns learned from millions of prior examples. The outputs may appear identical on screen, but they emerge from fundamentally different processes. And it is precisely this process that grants photography its authority as evidence.

Calling AI images “Algorithmic Photography” treats this as a minor upgrade: a lemon tree simply producing oranges. But even in Fontcuberta’s own metaphor, a lemon is still a lemon and an orange is still an orange. Grafting doesn’t change what a fruit is. Two entirely different kinds of image are being given the same name, and that confusion has real consequences.

By this logic, a photorealistic painting would become “Acrylic Photography”. But we still call it painting, because the process matters: it was created with canvas, brushes, and paint.

Arguing that the lack of an adequate term for “photorealistic representations of algorithmic origin” justifies subsuming AI images under photography is weak. On the one hand, naming a new medium takes time. On the other hand, Fontcuberta remains confined within photographic thinking and fails to recognize what this new medium actually is: LATENT SPACE.

It is the high-dimensional space an AI model learns from its training data, in which all media are encoded as vectors. In Latent Space, different art forms are no longer separate materials. They become different projections of the same underlying structure. A melody can morph into an image. A text description can generate a video. A sketch can become a sculpture. Latent space is a meta-medium.

This is why prompts have become multimodal. The prompt is a control interface to latent space, navigating probability.

And that is precisely why I suggested the term “promptography”. It encompasses everything produced with a prompt: text, sound, video – not just images resembling photography, but also those resembling drawing or painting.

Because Fontcuberta limits his analysis to “photorealistic representations”, he reduces the discussion to a narrow subset of outputs—and consequently struggles with the arguments that follow.

Miles Astray:

Read Miles’ response on his webpage (link will follow soon).

“PSEUDOMNESIA | The Electrician”, promptography, 2022 by Boris Eldagsen. This AI-generated image won a top prize at a prominent photography competition in 2023.

II. The DNA Problem

Joan Fontcuberta:

[…] But the debate goes deeper: are we dealing with images belonging to different classes, or simply photographs of different rank?

[…] It is easy to imagine that everyone dreamed of inventing a technique capable of producing faithful representations independent of human skill—as if nature could represent itself without the mediation of pencil or brush.

The camera eventually fulfilled that role, producing rigorous and detailed visual records. Since then, billions of photographs have been produced, and these images now constitute the very material used to train generative neural networks.

In fact, AI functions like an ogre forced to devour enormous quantities of images in order to produce plausible results.

Thus, algorithmic photographic images, although derived from the visual heritage of the entire history of photography, carry an undeniable photographic DNA. For this reason, they could reasonably be considered second-generation photographs.

Roland Barthes once wrote that every photograph awaits a text. Now the situation is reversed: it is the text that generates the photograph.

Boris Eldagsen:

Fontcuberta’s “Barthes reversal” is rhetorically appealing but conceptually shallow. In Camera Lucida, Roland Barthes argues that photographs are unstable without language. The caption stabilizes the photograph. The same photo will change its meaning with different captions.

But Fontcuberta overlooks a crucial development: prompts are not captions. They are instructions to a probabilistic system. Moreover, it is no longer simply “text” generating images. Multimodal prompting has been standard for years. Any input modality can generate any output modality within latent space. What collapses here are media categories.

The “Second-Generation Photography” argument is elegant, but it rests on a logical error. AI models are trained on millions of photographs: that’s true. But that doesn’t make their outputs photography. What the model inherits is visual style, a set of statistical patterns. It does not inherit what defines photography: a direct physical relationship between light, a real event, and a sensor.

Miles Astray:

Read Miles’ response on his webpage (link will follow soon).

“FLAMINGONE”, photography, 2023 by Miles Astray. This photo was disqualified from an AI image contest in 2024 after winning the top prize.

III. The Validation Problem

Joan Fontcuberta:

This terminological issue—behind which lies a deeper ontological question—came to the attention of the media when the work The Electrician, belonging to the series Pseudomnesia by the German photographer Boris Eldagsen, won the Sony World Photography Award 2023 in the “Creative” category. […]

The Canadian photographer Miles Astray, specializing in nature and travel photography, reversed the logic of Eldagsen’s action: he submitted a real photograph to the newly created AI-image category of another important competition, the Color Photography Awards.[…]

Indeed, both cases highlight an uncomfortable but unavoidable reality: the dividing line between human creation and that generated by artificial intelligence is rapidly fading, if it has not already disappeared entirely. […] Their intention was to reveal the unreliability of validation systems in competitions of this kind.

These may have been minor infractions, but they pointed toward a much more crucial issue: determining the status and labeling of images, their lineage, their pedigree.

Both initiatives might appear as provocations, but in reality, they offered a necessary critique: if a photograph taken with a camera can be mistaken for an image generated by a machine – or vice versa – then we must rethink how we define the boundaries between images, and also concepts of authorship, creativity, and visual truth. Rather than making us victims of deception, these gestures provide a useful conceptual shock.

Boris Eldagsen:

What these two incidents actually exposed is that the institutions evaluating the images had no coherent framework for telling them apart.

If these cases teach us anything, it is this: the credibility of an image can no longer reside in the image itself. It must reside in the process—who made it, how, and under what conditions of accountability. Documentary authority does not disappear; it migrates. It becomes procedural.

This is precisely why Fontcuberta’s dismissal of process is problematic.

Miles Astray:

Read Miles’ response on his webpage (link will follow soon).

Miles Astray (left) and Boris Eldagsen (right) with their images at the exhibition “RIVALS – Photography vs. Promptography”, European Month of Photography Berlin, Gallery Guelman und Unbekannt, March 2025. Portrait by Grigoryev / Guelman und Unbekannt Gallery.

IV. The Doubt Problem

Joan Fontcuberta:

Despite everything, the fundamental issue that troubles both specialists and the public concerns the credibility of images.

Some wonder whether a prompt-generated photograph will one day win the World Press Photo award. But perhaps the question is wrongly framed.

What should really be questioned is whether competitions like the World Press Photo still make sense.

We now live in a visual regime in which images increasingly construct the world rather than simply represent it.

[…] Perhaps we should even be grateful for their proliferation, because they remind us of the necessity of doubt.

Algorithmic photography reinforces the idea that every image is, inevitably, an illusion and forces us to reconsider the trust we place in images.

[…] Photography, therefore, has never truly been objective; we simply chose to believe that it was.

Today, with AI acting as a new demiurge, documentary photography quietly slips between historical narrative and fabricated illustration.

Deepfake technologies have opened Pandora’s box of iconography: thousands of hyperreal scenes and faces created from nothing flood our screens.

We no longer look in order to understand—we look in order to doubt.

[…] Every technology of vision has reshaped how we perceive the world.

What we are witnessing today is the transition from optical realism to informational realism—a synthetic realism summoned by commands, texts, and strings of code.

From Greek realism, to Renaissance perspective, to Enlightenment aspirations for accuracy, we have suddenly arrived at a condensed synthesis of all these visual regimes.

And now a single prompt can generate an image that might once have required centuries of technological evolution.

Boris Eldagsen:

The claim that “every image has always been a fiction” is only half true—and half-truths are dangerous in public discourse.

Every photograph is framed, selected, edited – that’s undeniable. But a camera photograph still begins with something real: light from an actual event, recorded by a sensor. A generated image begins with statistical inference across a database of prior images. These are not the same act.

Treating them as equivalent doesn’t sharpen our critical thinking. Eliminating institutions like World Press Photo does not solve the problem either. The real task is to defend accountability: where an image comes from, who produced it, and under what conditions.

Trust is shifting—from the image to the process. Provenance, metadata, editorial chains of custody, and transparent sourcing become central. The image is no longer proof. The process is.
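What process-based trust could look like in practice can be sketched in a few lines of Python. This is a minimal illustration, not an implementation of any real standard such as C2PA: the record fields and function names here are hypothetical, and the sketch only shows the core idea of binding an image's bytes to a declared origin that anyone can later re-verify.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Content hash that any later party can recompute to detect tampering."""
    return hashlib.sha256(data).hexdigest()

def make_record(image_bytes: bytes, source: str, method: str) -> str:
    """Bind an image to a declared origin at publication time."""
    return json.dumps({
        "sha256": fingerprint(image_bytes),
        "source": source,   # who captured or generated it
        "method": method,   # e.g. "optical capture" vs. "AI generation"
    })

def verify(image_bytes: bytes, record: str) -> bool:
    """True only if the bytes still match the published record."""
    return fingerprint(image_bytes) == json.loads(record)["sha256"]
```

A real provenance chain adds cryptographic signatures so that the record itself cannot be forged; a bare hash like this only detects alteration once the record is already trusted. The point stands either way: the claim of authenticity attaches to the documented process, not to the pixels.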

What is striking is that Fontcuberta does not address the democratic implications of this shift in this chapter. Public discourse depends on visual evidence. When all images become equally suspect, societies lose a crucial epistemic tool.

Doubt, in moderation, is productive. In excess, it becomes disorienting – and disorientation is easily exploited.

If any image can simulate evidence of events that never occurred, those who benefit most are those least deserving of trust. Blurring the distinction between photographic capture and synthetic generation does not liberate us from naivety. It provides cover for manipulation.

When visual evidence becomes a category of general suspicion, the burden of proof shifts in ways that favour those in power and disadvantage those trying to hold them accountable.

The answer is not to celebrate doubt as an end in itself. The answer is to construct new distinctions: between capture and synthesis, between enhancement and invention, between evidence and illustration—and to build institutions capable of maintaining those distinctions.

Miles Astray:

Read Miles’ response on his webpage (link will follow soon).

About the authors: Boris Eldagsen is a Berlin-based photo & video artist, investigating the unconscious mind. In search of the timeless, his visual poetry unites the sublime and the uncanny. You can find more of his work on his website, Facebook, YouTube, and Instagram.

Miles Astray is an activist artist blending writing and photography inspired by slow travel. You can find more of his work on his website and Facebook.


Grégory Chatonsky Replying to Eldagsen & Astray

LINK

The arguments made by Eldagsen and Astray against Fontcuberta’s hypotheses seem to rest on an ontological conservatism that misses the ongoing epistemological revolution. By clinging to a binary distinction between “light” (photography) and “code” (AI), they commit a fundamental category error. They perceive generative AI as an informatics of instruction—software executing a calculation according to rules pre-established by a model—whereas we have shifted to an informatics of vector navigation. Fontcuberta understood what many still refuse to see: the image has never guaranteed authenticity. By proposing a wisdom of doubt rather than restoring certainties, he opened the door toward a post-photographic epistemology. Yet this opening remains to be pursued: Fontcuberta recognized undecidability without drawing out its full political and ontological consequences. This text extends that fundamental intuition by exploring what is at stake beyond generalized doubt.

Metabolization
The Web has triggered an unprecedented media inflation. This saturation has transformed the status of photography. It is no longer an isolated act of capture, haloed by singular value. It has become a surplus resource, one datum among billions that humans can no longer perceive in its integrity. This is the condition of hypermnesia: remembering becomes impossible because there is too much to retain. Thirty years of the Web represents thirty years of the silent accumulation of images in databases—images tagged without consent, metadata piled in invisible strata.

It is precisely within this context of intractable saturation that AI appears. But it does not appear as a threat external to institutions; it is the symptom of their obsolescence. Institutions never truly had the power to master this flow. They believed they were organizing scarcity. The Web revealed that there was no longer any scarcity to organize. AI absorbs this deluge by transforming it into a continuous multidimensional topography: latent space. It digests this massive flow according to a logic very different from copying or simulation. It extracts symbolic forms—not some “truth,” but statistical correlations that reconstruct the world as it has been represented by billions of individuals in their daily practices of sharing images.

This is a chemical process in the strict sense: AI does not copy reality; it fractionates it and makes it navigable according to a logic that escapes traditional categories of the discrete. This is precisely what we observe when working with these latent spaces: how images cease to be finished objects and become transit points in a continuum. When we ask a diffusion network to transform one image into another through interpolation, we witness a morphing that has no photographic equivalent. It is not a fusion of images. It is a traversal of the possible within a geometry that eludes us. And in this traversal, categories collapse: we no longer know if we are creating or discovering, inventing or awakening what lay dormant within vector coordinates.
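The traversal described above can be made concrete with a toy sketch. Real diffusion latents have tens of thousands of dimensions and production pipelines use library implementations, but spherical interpolation (slerp) is a common way to walk between two latent points, and its logic fits in a few lines of plain Python:

```python
import math

def slerp(t: float, a: list, b: list) -> list:
    """Spherical interpolation between two equal-length vectors.

    Follows a great-circle arc rather than a straight line, which is
    why intermediate points stay "plausible" in a latent space whose
    vectors are roughly norm-constrained.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))  # angle between a and b
    if omega < 1e-8:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(omega)
    wa = math.sin((1 - t) * omega) / s
    wb = math.sin(t * omega) / s
    return [wa * x + wb * y for x, y in zip(a, b)]

# Every t between 0 and 1 names a point in the space—an "image" that
# exists as a coordinate even though no camera ever captured it.
midpoint = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

The detail worth noticing is that the intermediate points are not blends of two finished pictures; they are coordinates in their own right, which is precisely the "traversal of the possible" the paragraph above describes.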

This process is radically different from what Eldagsen and Astray describe. They imagine they can preserve the photography/AI distinction by strengthening institutions and tracing processes. But this fails to recognize that institutions have always been structurally incapable of mastering this flow. They merely organized its invisibility. The Web made it visible. AI is the direct consequence, not an accident to be corrected. Metabolization means that AI is not an intruder. It is the reaction of a technical system to a sensory saturation that has become intractable by old logics. To refuse to see this is to cling to the fiction of an epistemic scarcity that has already evaporated.

From Code to Vectors
Eldagsen and Astray implicitly adopt the distinction between two conceptions of AI: either an AI that executes explicitly programmed instructions (code as recipe), or an AI that emerges from learning (code as hidden logic).

But this opposition itself is obsolete. What has actually occurred is a shift from an informatics of instruction to an informatics of vector navigation. This shift is not a technical refinement; it is a logical rupture.

Classical informatics of code consists of a series of instructions written by humans, executed deterministically, producing a predictable result. This is computational logocentrism: the belief that code is transparent—that it can be written, read, modified, and mastered. Eldagsen and Astray remain prisoners of this conception, even when they admit to AI’s opacity. For they still expect traceability to be possible—that someone could, in principle, understand the process. This is a pre-computational belief: that everything can be made visible through intellectual effort.

But generative AI does not function this way. It does not manufacture an image according to pre-established rules. It locates an image in a multidimensional space of probabilities whose dimensions emerged from the learning process without prescriptive human intervention. The true code—the architecture of the neural network—is merely the tooling to create the conditions for navigation. What matters is not the programmed logic. It is the probabilistic topology that emerges from the process, autonomous and irreversible. Once the network is trained, it cannot be “unrolled” like a film. The parameters are fixed, but their interactions remain inaccessible to linear reading.

They insist: “Photography is written with light; AI imagery is written with code.”

This is a seductive formula, but a false one. Neither is “written.” Writing implies a linear intention, a traceability of the gesture. Photography is an optical capture—simultaneously passive and active—subject to the physical presence of the real. Generative AI is not written: it is vector navigation, a learning of the fold of latent space. These are not two variants of the same act of composition. They are two radically different ontologies, two regimes of meaning.

When we generate an image, we are not calculating anything in the classical sense. We are traversing a continuous latent space along trajectories that were not mapped out in advance. The image does not result from a formula. The image emerges as the actualization of a possibility that existed as vector potentiality, immediately. This is the abyssal difference between a discrete logic (code: 0 or 1) and a continuous topology (latent space: an infinity of gradations). And this difference changes everything—it changes not only how we produce images, but how they produce us in return.

Photography is a medium of the discrete: a click, an instant, a single viewpoint, an immobilization. It captures according to a binary logic: this moment existed; this event took place or did not. Latent space functions according to a different logic: it is a continuous field of forces where one can glide from one concept to another without rupture. There is no stopping point, no “moment of capture.” There is a continuum of possibilities.

This is why they cannot be compared by saying they are just two different means of achieving the same result. They proceed from incompatible ontologies. In photography, there is a rupture between what was captured and what was not. In latent space, there are only degrees of probability, topological proximities, and seamless continuities. This is also why the Eldagsen/Fontcuberta opposition is false: they are arguing over what name to give something, when the problem is not the name. It is that the very ontology of the “visual” has changed.

The Inversion of the Graft
Fontcuberta rightly uses the metaphor of the graft: AI grafts itself onto photography, transforming it and changing its nature from within. Eldagsen and Astray implicitly accept this causal direction, as if AI were a disturbance coming from the outside, infecting a pre-existing system.

But we can completely reverse this argument: it is not AI that grafts itself onto photography. It is photography that has become an anachronistic graft of the AI system. The logical chronology is reversed.

Traditional photography, frozen in its optical capture and its presuppositions of authenticity, is now merely a mode of input injected into a system that radically exceeds it. It certainly provides the initial coordinates—the training data—but it is the latent space that deploys its metamorphic potential. It is no longer the producer of meaning; it is the digested material.

The photographic image thus becomes one archive among others, a trace in the vectorized memory of AI. Its status does not change gradually; it collapses categorically. It shifts from “proof of a captured reality” to a “starting point for the generation of possibilities.” It is absorbed, metabolized, and recombined according to a logic that has no common measure with the photographic process. And in this absorption, something of its essence escapes—or rather, it discovers that it never had an essence, only forms.

The photographic accident—subject to the hazards of the physical real, the raw contingency of the moment, to that which refuses to be seized—is now replaced by the vector accident: an unpredictable drift in the multidimensional curvature of data that reveals visual truths nestled in the interstices of our collective memory. This vector accident cannot be predetermined. Nor can it be mastered. It emerges from the navigation itself, as an encounter between the navigator’s intention and the unknowable topology of the space. It is an accident that is only an accident for us, not for the system that generates it; for the system, it is simply the actualization of a virtuality contained within its structure.

Thus, Eldagsen was right to refuse the Sony award, but the reasons he stated publicly were the wrong ones. He should not say, “AI steals the prize from photography by mimicking it better than itself.” He should say: “Photography no longer exists as an autonomous ontological category. It is a graft of AI. And I refuse this prize because accepting it would mean admitting that I still believe in a distinction that the technical system has already made impossible.”

Only on this condition would his gesture of humility be honest.

The Era of Generalized Suspicion
Eldagsen and Astray see the generalized doubt toward images as a crisis to be resolved. Astray worries: “If all doubts paralyze us, those in power win.”

But this is a misunderstanding of what is happening. Generalized suspicion is not a crisis. It is an inevitable clarification. Since AI has metabolized the photographic aesthetic to the point of making it indiscernible from optical reality, trust in the image collapses. But this collapse does not mean we have lost the truth. It means we are finally discovering a truth we were hiding: the image has never been proof. It has always been an interpretative battlefield.

This disturbance manifests as a visible double crisis:

On one side, synthetic images insert themselves into the social field by passing themselves off as captures of the real.

On the other, authentic photographs are contested, victims of a collective paranoia that mistakes them for algorithmic generations.

But what Eldagsen and Astray interpret as the collapse of distinctions, I see as the revelation that distinctions only ever existed as institutional fictions. Recent controversies in art competitions are merely the visible symptoms of this clarification. They are not accidents. They are the exposure of what had always been hidden: that the image is never proof, never a guarantee of authenticity.

Institutions believed they mastered this authenticity. They simply mastered a consensus. And this consensus is now collapsing because latent space has shown there was never anything to master—only probabilities to navigate. To refuse this suspicion by calling for the strengthening of institutions is to refuse to see that institutions held suspicion at bay through power, not through clarity.

Eldagsen and Astray ask the wrong question. They ask: “How to distinguish? How to preserve? How to restore trust?”

The real question is: “Who controls the latent space? Who has the power to parameterize alignments, to choose datasets, to decide which visual possibilities will be generatable and which will remain unthinkable?”

This is a political question. Not technical, not institutional—political. It directly engages the very possibility of what an image can express, what it can show, and what it will never show.

For centuries, photography seemed to guarantee a certain democracy of representation: anyone could, in theory, take a photo, publish it, and challenge dominant images. But this was a productive illusion. The power to control the image had moved to institutions: publishers, museums, press agencies. At least one could criticize them, occupy their spaces, and contest their selections. We knew where the power resided.

Now, this power has volatilized and reconcentrated at a more fundamental level: the control of latent space itself. A tiny number of technological corporations absolutely control the datasets, the algorithms, the learning parameters, and the final alignment of the models. They do not control a collection of images. They control the ontological conditions of possibility for what an image can be.

And this mastery is structurally invisible. When Meta or OpenAI decides that a certain representation will be “aligned” and another not, we are no longer debating at the level of images. We are debating at the level of vectors—a domain where only the engineers of these corporations can navigate. The latent space of commercial platforms is closed. Datasets are proprietary. Alignment is secret. And yet, billions of individuals dream through these latent spaces, believing they communicate through their generated images, unaware that they are only actualizing the possibilities that a few algorithms have decided to be thinkable.

Calling to strengthen institutions in the face of this problem is like calling to strengthen the coast guard against a rising tide. The problem is not a failing distinction between photo and AI. The problem is that the mastery of the collective imagination has concentrated in the hands of proprietary algorithms, and this concentration has become invisible precisely because it no longer works at the level of visible images, but at the level of vector possibilities.

Toward Multiplicities
Eldagsen and Astray defend an epistemic order that has already collapsed. Fontcuberta proposes accepting undecidability and cultivating a wisdom of doubt. Even if Fontcuberta’s approach seems more accurate, both positions perhaps miss the true stake—not because they are false, but because they are politically insufficient.

What is needed is not to restore the photo/AI distinction. Nor is it to passively accept doubt as a final horizon. It is to accept the collapse as a political condition to invent other practices—artistic, pedagogical, political—that multiply latent spaces so that none can dominate.

For as long as a single latent space controls the majority of visual generations, as long as Meta, OpenAI, and a few others alone decide the conditions of the visible because they possess the computing power and have appropriated the means of production for profit, we have not solved the political problem. We have only moved it from the level of institutions to the level of vectors. True liberation would be for visual possibilities to fragment radically, for incompatible latent spaces to develop in parallel, so that no one can impose a single grammar of the visible. Not out of nostalgia for lost creative autonomy—that nostalgia is also a trap—but out of tactical necessity: the plurality of latent spaces is the only guarantee against the totalization of meaning. This can only be achieved through the collective appropriation of computing power.

The flaws of older generations of AI—their hesitations, their monstrosities, their undomesticated strangeness—constituted precisely their aesthetic and political virtue. These flaws were fissures where the unpredictable found a place. Their gradual disappearance in favor of a standardized, polished, invisible realism is not technical progress. It is the programmed homogenization of our collective imaginary, the methodical closing of possibilities in favor of a statistical average that corporations find manageable and monetizable.

Against this reduction, we must cultivate the accident, the divergence, the defamiliarization. Not to restore a lost authenticity—that authenticity never existed—but to multiply the possibilities within the technical system itself that tends to reduce them. This work is not innocent. Nor is it totally free. But it is necessary. And it presupposes a certain form of humility: recognizing that we navigate latent space without mastering it, that we explore possibilities without presuming we create them, that we resist without the certainty of victory.

The only honesty today is to accept that there are no more distinctions to restore, only spaces to fragment, possibilities to multiply, and a silent war to be waged against vector homogenization. It is in fidelity to Fontcuberta’s intuition—but by radicalizing it—that we reach this conclusion: his generalized suspicion is not an end in itself, but the starting point for a political transformation.

Grégory Chatonsky (born 1971) is a French-Canadian artist and a pioneer of Net art and AI-driven creativity. Since the mid-1990s, his work has explored the relationship between technology, memory, and the “post-human” condition.

He is best known for creating immersive installations and digital works that use Artificial Intelligence to generate speculative futures, often depicting a world where machines continue to process human culture long after we are gone. In 2022, he published Internes, the first French novel co-written with a large language model. Chatonsky’s work is frequently exhibited at major institutions like the Palais de Tokyo and the Centre Pompidou.
 

Boris Eldagsen responding to Chatonsky