ELEPHANT IN THE ROOM (2021)
Detail, Elephant In the Room (2021)

Elephant in the Room (2021) is an ongoing research project that explores the transaction made between art consumer and artist through the act of withholding or censoring content.
The art viewer has come to expect gratification in a media-saturated world. Upon entering the gallery space there is an implicit anticipation of being entertained, provoked into thought, or conceptually stimulated by the works inside. Dena Yago (2018) argues that the emerging Content Industrial Complex positions artists as entertainers or content creators who enter the flow of mass media consumption. Yago further identifies a widespread shift in contemporary art that converts the exhibition into a “content farm” used for harvesting user-generated content such as selfies and hashtags.
This shift has fostered an expectation among audiences of spectacular, immersive and interactive exhibitions that function well as platforms for exhibiting the user’s own cultural value. When this expectation is left unsatisfied, the audience is prompted to reflect on their ideas about interactivity and their engagement with art as “content”.
As artificial intelligence continues to weave itself into the fabric of everyday life through social media platforms, media tech corporations and “smart” home technologies, questions about the distinction between human and computer become more important. Much of the development and deployment of AI happens behind the scenes, with the end user receiving a product: a Google search that autocompletes for you, or an AI-powered advertising algorithm trained to pre-empt desires and create a “tailored” experience. These methods work to create an experience that shields the user from unwanted content by analysing their behaviour online. This work inhabits the hazy areas between human and computer, creating a conceptual conversation between the two actors.
The “content” of the piece is largely unviewable by its audience, instead becoming an unheard conversation between computing devices on a network. These streams of data occur with or without user intervention, replicating the remote databases that store minute personal details distributed throughout the world.
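The machine-to-machine exchange could be pictured with a minimal sketch like the one below: one device broadcasting small packets of data whether or not anyone is watching. The work’s actual devices, protocol and payload are not documented; the UDP broadcast, port number and JSON payload here are all assumptions for illustration.

```python
# Hypothetical sketch only: the real devices and protocol are undocumented.
# This assumes a UDP broadcast on a LAN, with a JSON payload standing in
# for the "minute personal details" described above.
import json
import random
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    # A small packet of "personal detail" sent whether or not anyone looks.
    packet = json.dumps({"timestamp": time.time(), "detail": random.random()})
    sock.sendto(packet.encode(), ("255.255.255.255", 5005))
    time.sleep(1.0)
```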
References
Dena Yago, “Content Industrial Complex” (2018) [PDF]

Elephant In the Room (2021)

TAGGER (2020)
Blue Sign (2021)

Tagger (2020) is a series of augmented photographic works that uses an image-captioning algorithm to overlay bounding boxes and computer-generated tags onto images.
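The text does not name the model used; as a hedged illustration, the overlay step could look something like the sketch below, with torchvision’s pretrained Faster R-CNN assumed as a stand-in detector/tagger and the file names invented.

```python
# Illustrative sketch, not the artist's code: torchvision's pretrained
# Faster R-CNN is assumed as the detector/tagger; file names are invented.
import torch
from PIL import Image, ImageDraw
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]          # COCO class names

img = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    pred = model([to_tensor(img)])[0]        # dict of boxes, labels, scores

draw = ImageDraw.Draw(img)
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score < 0.5:                          # keep confident detections only
        continue
    x0, y0, x1, y1 = box.tolist()
    draw.rectangle((x0, y0, x1, y1), outline="red", width=3)
    draw.text((x0, max(0, y0 - 12)), f"{labels[int(label)]} {score:.2f}", fill="red")
img.save("photo_tagged.jpg")
```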

Intimacy and Algorithms (2021)

THIS MASK DOES NOT EXIST (2020)
generated image: seed 00001

This Mask Does Not Exist (2020) is an online work which displays images generated by a Generative Adversarial Network (GAN).
The network is trained to create images of people wearing face masks, after mask wearing was made mandatory in Melbourne following huge spikes in COVID-19 transmission.
In this work I trained an algorithm on a large dataset of mask images. After training, the GAN can begin to make inferences about their style and features. Holding the sum of these images in its memory, the network can map these features to synthesize new content. Today, AI and machine learning are mainly used in corporate and government applications. Advertisers use AI to figure out what you might buy next. Facebook and Google use AI to try to pre-empt and automate human behaviour towards the goals of their platforms.
My use of these algorithms attempts to demystify these approaches and uncover their flaws. In this work I use these systems in a scientifically unsound way, seeing what happens when the algorithm is fed dirty data or left undertrained.
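The inference step described above could be sketched roughly as follows. The actual architecture is not specified; a DCGAN-style PyTorch generator, the latent size and the file names are assumptions for illustration.

```python
# Hypothetical sketch: the real architecture isn't specified. This assumes
# a DCGAN-style PyTorch generator already trained on the mask dataset and
# saved as a pickled checkpoint.
import torch
from torchvision.utils import save_image

generator = torch.load("generator.pt")        # assumed trained checkpoint
generator.eval()
latent_dim = 512                               # assumed latent size

torch.manual_seed(1)                           # cf. "generated image: seed 00001"
with torch.no_grad():
    z = torch.randn(1, latent_dim, 1, 1)       # random point in latent space
    image = generator(z)                       # synthesize a new masked face

save_image((image + 1) / 2, "seed_00001.png")  # map tanh output [-1,1] to [0,1]
```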
EXTRACTION (2019)
Extraction (2019), Detail

Extraction (2019) uses grid-based video processing to analyse and restructure the networks of data mediation inherent in digital life. The increasing fragmentation of social communication blurs the distinction between real and virtual.
Installed at Killing Time: Exchanging Utopia and the Pleasure Principle, Capitol Theatre, Melbourne, 01.11.19
Statement
The content is subordinated by the gesture in a mediated reality.
Pixel values represent behaviour on an eight-bit scale. It’s a shell, a mindless exhibition of wants
You’re a clump of flesh exhibiting desires on a scale of one to ten
Where ten is the most profitable
And one is the least
And there is nothing in between that interests you
Life dominated by a dopamine rush
Flooding neurons and computing networks
Echoes resonate in your subconscious
#ennui
#isolation
#progress
#corporate_animism
#throwbackthursday

Extraction (2019)

SYNTHETIC PERCEPTION (2019)
Synthetic Perception (2019), Installation View

Synthetic Perception (2019-) is an ongoing research project into the behavioural effects of digital surveillance culture on everyday social interactions. As a year-long Honours project conducted in 2019, I experimented with using computer vision algorithms to fragment and de-identify video data. This data was then recomposed and output onto a display medium, including computer screens, televisions and smartphones.
I attempted to make an algorithm that was content-agnostic and extendable: contrary to the implicit hierarchies of human perception, it would have no bias toward any particular kind of content within the data, treating everything it processed in the same way.
I settled on recomposing the material on a grid, fracturing the dataset by removing any spatial context from the elements the algorithm picked out. Essentially, the code would not care if it could not pick out any comprehensible details within the data it processed. The video data processed has varied throughout the course of this project; I have used both self-generated footage and online videos chosen by web-scraping and data-extracting algorithms. For a more comprehensive look at the research and practice surrounding this project, feel free to read my Honours thesis, accessible here
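As a hedged illustration of the grid recomposition described above: each frame is cut into tiles and the tiles are shuffled, stripping spatial context. The tile size, the use of OpenCV and the file names are assumptions; the thesis documents the actual implementation.

```python
# Illustrative sketch only: tile size and the use of OpenCV are assumed;
# the Honours thesis documents the real implementation.
import random

import cv2
import numpy as np

def fracture(frame: np.ndarray, tile: int = 32) -> np.ndarray:
    h, w = frame.shape[:2]
    h, w = h - h % tile, w - w % tile        # crop to a whole number of tiles
    cells = [frame[y:y + tile, x:x + tile]
             for y in range(0, h, tile)
             for x in range(0, w, tile)]
    random.shuffle(cells)                    # remove all spatial context
    cols = w // tile
    rows = [np.hstack(cells[i:i + cols]) for i in range(0, len(cells), cols)]
    return np.vstack(rows)

cap = cv2.VideoCapture("input.mp4")          # invented file name
ok, frame = cap.read()
if ok:
    cv2.imwrite("fractured.png", fracture(frame))
cap.release()
```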

Video: Synthetic Perception (2019)

Video: Networked Mediation (2019)

STATE MACHINE (2018-)
STATE MACHINE (2018-)

State Machine (2018-) is a multidisciplinary exploration into artificial simulation, memory and perception. It contrasts the passivity of the machine with the sentimentality of the human experience.
At the beginning of 2018 I downloaded all of my Facebook data. A decade’s worth of messages, check-ins, statuses, photos, videos and assorted ephemera was bundled and emailed to me as a huge 6GB archive.
Raking through this digital archive of my online self, I began wondering what could be pieced together by the analysis and re-purposing of this data. Could I construct a believable simulation of myself, or at the very least a machine which could take this input, process it in a basic simulation of a cognitive system, and output it to create something new?
Following this prompt I have refined a number of outcomes which reflect on the contrasts and similarities between the simulated and "real" brain. Taking the form of online content, interactive installation and sound composition, State Machine is an ongoing project in which I attempt to see what can be recreated from all of this data.
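One of the simpler outcomes could be sketched as follows. The project does not specify its methods; this assumes the JSON layout of a Facebook "Download Your Information" export and swaps in a word-level Markov chain as the "basic simulation of a cognitive system".

```python
# Hypothetical sketch: the project's actual methods are unspecified. This
# assumes the JSON layout of a Facebook data export and uses a word-level
# Markov chain to re-output the archive as new text.
import collections
import glob
import json
import random

chain = collections.defaultdict(list)
for path in glob.glob("messages/inbox/*/message_1.json"):
    with open(path, encoding="utf-8") as f:
        for msg in json.load(f).get("messages", []):
            words = msg.get("content", "").split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)           # record observed word pairs

# Walk the chain to synthesize a new "status" from a decade of messages.
word = random.choice(list(chain))
out = [word]
for _ in range(30):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    out.append(word)
print(" ".join(out))
```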
FLUID (2018)
Fluid (2018)

Fluid (2018) is a set of performances and installations using auscultation as a source of sound.
Sounds are harvested through hacked medical tools: the heart, lungs, blood and intestinal tract are captured, then reamplified, reinterpreted and deconstructed to create a sonic simulation of being inside a body. However, this body is not organic; it is a hybrid of "virtual" and "real" sounds.
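The layering described above could be sketched along these lines; the file names and the numpy/soundfile approach are assumptions, not the actual signal chain.

```python
# Hypothetical sketch of the reamplify/deconstruct step: layer a
# stethoscope recording over slowed and reversed copies of itself.
# File names and the numpy/soundfile approach are assumptions.
import numpy as np
import soundfile as sf

audio, sr = sf.read("heart.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # fold to mono

slow = np.repeat(audio, 2)                    # crude half-speed copy
rev = audio[::-1]                             # reversed copy
n = len(audio)
mix = audio + 0.5 * slow[:n] + 0.3 * rev      # hybrid of "real" and "virtual"
mix /= np.max(np.abs(mix))                    # normalise to avoid clipping
sf.write("inside_a_body.wav", mix, sr)
```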
After initial experiments at Meta: Royal Parade in 2017, I took this project on stage at the Tote in 2018, where I live-mixed and improvised with the harvested sounds in space.
GRACEFUL DEGRADATION (2017)
Detail: Graceful Degradation (2017)

Graceful Degradation (2017) takes its name from an engineering term whereby a system is designed to retain its core functions even when degraded or partially destroyed. In this work, video and audio are transcoded 1000 times to induce a slow shift to a new visual and aural identity.
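The transcoding loop could be reproduced along these lines; the codec and settings are not documented, so ffmpeg with H.264/AAC is an assumption, and the source file name is invented.

```python
# Hypothetical sketch of the generation-loss loop: re-encode the clip
# through a lossy codec 1000 times. Codec and settings are assumptions.
import subprocess

src = "source.mp4"
for i in range(1000):
    dst = f"gen_{i:04d}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
        check=True,
        capture_output=True,
    )
    src = dst                                 # each pass feeds the next
```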

Graceful Degradation (2017)