Amay Kataria (b. 1990) graduated with a bachelor's in Computer Engineering from Virginia Tech in 2012. After working for Microsoft Corp. in Seattle, WA for four years in the Windows and Skype organizations, he began his MFA in Art & Technology Studies at the School of the Art Institute of Chicago in 2017. Amay works at the intersection of hardware and software to critique the pace and politics of techno-culture. His work takes the form of computer animations, interactive installations, and autonomous machines that exhibit behaviors. He bridges the physical with the digital, developing systems whose emergent, complex behaviors demonstrate his concepts and ideas. In 2018, he was invited to the Ethereal Summit in New York City as a visiting artist, held a new-media residency at Art Center Nabi in Seoul, South Korea, and completed an Art-A-Hack residency at ThoughtWorks in New York City. In 2019, he was awarded the Bajaj Art Scholarship for Excellence. He has received grants and honors in the names of the Shapiro Research Center, Bernard Silverman, Byron M. & Helen S. Brumback, Gilbert L. and Lucille C. Seay, the Mr. and C.P. Staley Memorial, Litton Industries, Benjamin F. Bock, and James Milton Beattie Jr. His work has been exhibited at Experimental Sound Studio (Chicago), Sullivan Galleries (Chicago), Art Center Nabi (South Korea), ThoughtWorks (New York City), and Kamalnayan Bajaj Art Gallery (Mumbai). He will conclude his MFA in May 2019.
While working at Microsoft from 2013 to 2017, I spent 2.5 years in the Skype organization after working on Windows. I joined the Skype4Life development team led by Eric Traut, where I worked on a myriad of projects ranging from user-interface development and scalable backend services to AI-driven bots and infrastructure/platform work. Our team had two primary goals. The first was to build a new platform on top of Facebook's React and React Native technology, called ReactXP, which would allow developers to write a highly performant application that could run on web and native platforms (iOS and Android) seamlessly from a single codebase. The second was to rebrand Skype and make it the first large-scale application written from scratch using ReactXP. During this time, I contributed to the CoreUI, Messaging, and Fundamentals squads by architecting and developing features that are used by millions of Skype users today. In early 2017, our team open-sourced ReactXP, which has built a strong community of developers around it. Subsequently, we released the newly developed Skype application on the app stores in June 2017. My experience at Skype exposed me to industry practices and experienced software engineers, who taught me the significance of collaboration in delivering a high-impact product like Skype. Here's a short video in my own words about my experience at Skype and the possibilities beyond.
EarthLens was developed with a team of data scientists, software developers, and product managers at Microsoft's //oneweek hackathon in 2016. It is an augmented reality application for Microsoft's HoloLens that helps scientists, students, and enthusiasts immersively uncover and predict the health of our planet.
I led the software development of this project in Unity and collaborated with two other developers to deliver an MVP from scratch. The scenario we prototyped was a table-top experience of the Great Barrier Reef, where a user interactively experienced changes in the health of corals over past years. Using machine-learning models built on 50 years of data on the reef's health, data-science interns created prediction equations for the future calcification and density of corals. I worked on the visual depiction of the two primary scenes and their transitions, HoloLens gestures, and integrating the contributions of the other developers, who built the machine-learning models for the coral scene. To make the experience more immersive, I added spatial sound effects for gestures and surround-sound commentary on how to use the application.
After meeting Stephen and Tyler at a music-themed Startup Weekend event in Seattle in 2015, we decided to hack together a Pure Data patch to create stutter mic effects in 48 hours. The patch ran on a laptop hooked to a mic through a sound controller. I contributed the MIDI component of the patch, which communicated with a TouchOSC XY MIDI pad on a mobile device for portable interactivity. As soon as the user interacted with the MIDI pad and spoke into the mic, the system activated and recorded a sample of the user's voice. Once a sample was stored, the user could modify it in real time by scrolling along the X or Y axis on their phone: the X-axis controlled playback speed and the Y-axis the length of the sample. Magic Mic is for people doing stand-up or hosting events who want to create stutter effects like "Che-che-che-check it out..." Magic Mic was voted the crowd's most entertaining project, and we were also offered the chance to build it as a plugin for a local startup's Android music application.
Later, I enhanced the patch to include incremental recording of wav files. This made it a handy sampler to collect, save, and modify ambient sounds using just my laptop. To follow up, I imported the samples into Ableton and created experimental tracks using them.
This work is inspired by Aristotle's Nicomachean Ethics, in which Aristotle explores at length what makes human beings happy and the reasons behind their well-being. According to him, good and successful people possess distinct virtues, which one should learn to identify in oneself and honor in others. He identified 12 virtues: courage, temperance, liberality, magnificence, magnanimity, proper ambition/pride, patience/good temper, truthfulness, wittiness, friendliness, modesty, and righteous indignation.
This work revisits each virtue and expresses it through sound and visuals. The visual for each track was generated using an interactive tool I developed in Processing, with the outputs post-processed in GIMP. Each audio track binds together a virtue, its emotional presence, the visual art made for it, and the vices between which the virtue lies. Aristotle proposed that mastering a subset of these virtues can, to a great extent, ensure human well-being and happiness.
This tool, developed in Processing, enables users to interactively create abstract art. Users can choose between four shapes (circle, square, rectangle, and line) and can change the scale, stroke, rotation, opacity, and render-rate parameters for these shapes by referring to its keyboard-interface document. Compositions created in the tool were finalized by post-processing in GIMP.
This project is an investigation of recursion, a fundamental computer-science technique for devising algorithms. I developed a system that renders a specific circular pattern at each depth and branches off to create a similar pattern from each circle, thus reproducing a mandala-like pattern. Each pattern is identified by the input angle given to the system, which determines the number of circles at each depth; parameters like depth and angle determine the shape of the mandala. These patterns were also laser-cut on MDF (wood). Here is the code for this project.
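The recursive branching described above can be sketched in a few lines. This Python version is a simplified stand-in for the original code (parameter names are hypothetical): it collects the circle centers at each depth, with the input angle determining how many circles appear on each ring.

```python
import math

def mandala(cx, cy, radius, angle_step, depth, points=None):
    """Recursively collect circle centers for a mandala-like pattern.

    At each depth, child circles are placed around the parent at every
    `angle_step` degrees; each child then branches off in the same way.
    """
    if points is None:
        points = []
    if depth == 0:
        return points
    steps = int(360 / angle_step)  # number of circles at this depth
    for i in range(steps):
        theta = math.radians(i * angle_step)
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        points.append((round(x, 3), round(y, 3), depth))
        # branch off: a smaller ring of circles around each child
        mandala(x, y, radius / 2, angle_step, depth - 1, points)
    return points

centers = mandala(0, 0, 100, 60, depth=2)
print(len(centers))  # 6 circles on the first ring, each spawning 6 more: 42
```

Shrinking the radius at each depth keeps child rings nested inside their parents, which is what produces the mandala-like self-similarity.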
During Fall 2017 at SAIC, I conducted a workshop on how to use GitHub for versioning creative-coding projects. I used Daniel Shiffman's Git and GitHub for Poets videos as a reference to prepare the workshop. We started by covering the difference between Git and GitHub and walked through the web interface to create repositories, make commits, open pull requests, and observe user activity. Then we dove into the terminal and covered Git commands and Git workflows for organizing and versioning projects.
How does it feel to be in control? Sonic Negotiations is a dialog between participants and the sound around them. It questions what it means to be in control and how one shares this control with other humans in an interactive sonic environment. A participant is presented with a visual cue on the screen about their presence, along with a connection to the other participants in the room. Every participant entering the room enables a unique sound effect, such as reverb, delay, pitch shift, or distortion, on the audio track. As the moving participants are detected in the room using computer vision, the distances between them map to the intensities of these effects. The audio track for this piece is an excerpt from Alan Watts' philosophical discourse "What Do You Desire?", which argues that the primordial thing one desires, if money were no object, is to be in control. This work intends to involve the participants in a state of negotiation of control with one another. Below, a series of images shows how the visual cues change with the number of participants in the room, along with video documentation of the system at the artist's studio at SAIC. The system was developed using OpenFrameworks, and here's the code for this project.
Tree of Life is an audio-visual performance of the unravelling of a tree over time. This work is inspired by philosophical parallels between the growth of a tree and the human body, where different stages of a tree's growth are represented by a visual and sonic landscape. In the initial stages of its growth, a tree receives nutrients like water and sunlight, just as a child grows by receiving visual and aural stimulation from the environment. This information forms the child's brain and influences their growth. As a child becomes an adult, they meet various people and develop relationships. These relationships are like the branches of a tree that bear fruits and flowers with time, just as human relationships flourish and reap new connections. Eventually, the tree completes its life and becomes one with the ground from which it grew. Similarly, the human body completes its sojourn and perishes, as it is destined to.
This performance creates a unique soundscape by combining fabricated metal objects with technology, body gestures, and audio-reactive visuals to express this growth process from beginning to end. It explores a sonic virtuality, imagining an audio language that expresses the performer's stored experiences and memories of his upbringing through the growth of a tree.
With the metal dishes acting as external tongues, the role of these body gestures is no different from that of the mind, which gives words to our expressions and uses the vocal organs to produce sound. Here, body gestures are converted into intellect, and the metallic objects become the vocal organs that express this tree of life.
Flow of thoughts that led to the ideation of Tree of Life. The goal was to create an audio-visual system for self-expression.
This project was developed with OpenFrameworks. Audio was collected from the metallic dishes using microphones and routed to Ableton. With a custom TouchOSC interface, OSC messages carrying audio and visual commands were sent to an OpenFrameworks program running on a laptop. Audio commands were processed and sent to Ableton as MIDI messages to control the sound; visual commands were forwarded as OSC messages to another OpenFrameworks program running on a machine across the room, which drove the visuals on the screen. Here's the code for this project.
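For readers unfamiliar with the wire format the messages above travel in: an OSC message is simply an address string and a type-tag string, each null-terminated and padded to a 4-byte boundary, followed by big-endian arguments. A minimal Python sketch (the address `/visual/grow` is a hypothetical example, not the project's actual address space):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    while len(b) % 4 != 0:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments (big-endian)."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(args)).encode())  # type-tag string
    for a in args:
        msg += struct.pack(">f", a)
    return msg

# 16 bytes of padded address + 4 bytes of type tags + 4 bytes of float
packet = osc_message("/visual/grow", 0.75)
print(len(packet))  # 24
```

In practice a library like liblo or ofxOsc handles this encoding; the sketch just shows what actually crosses the network between the machines.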
Parabolic Sound is an interactive sound installation that lets users manipulate sound using parabolic metal dishes and a custom TouchOSC interface. It started as an investigation into converting metal objects into capacitive touch sensors using a simple RC-circuit principle, exploring new gestural relationships between human skin conductivity and sound. In this system, the user can cycle through audio samples and select audio effects like delay, distortion, and pitch modulation to be manipulated through interaction with the dish. They can also choose to apply these effects to sine, square, or triangle waves and mix them with the output of the audio samples through the TouchOSC interface. Part of this system was used in the Tree of Life project to extract sounds from body gestures. It was also exhibited at the Creative Coding exhibition at SAIC in Fall 2017.
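The RC-circuit principle mentioned above works because a touch adds body capacitance to the dish, lengthening the time the sense pin takes to charge toward the logic threshold. A rough Python sketch of that idea, with hypothetical component values rather than the installation's actual circuit:

```python
import math

def charge_time(resistance_ohm, capacitance_f, v_fraction=0.63):
    """Time for an RC node to charge to a fraction of the supply voltage:
    t = -R*C * ln(1 - V/Vcc). At v_fraction ~= 0.63 this is one time constant."""
    return -resistance_ohm * capacitance_f * math.log(1 - v_fraction)

def is_touched(measured_s, baseline_s, factor=1.5):
    """Flag a touch when the charge time grows well beyond the baseline."""
    return measured_s > baseline_s * factor

# hypothetical values: 1 MOhm sense resistor, ~10 pF stray capacitance,
# and ~100 pF added by a hand resting on the dish
baseline = charge_time(1e6, 10e-12)
touched = charge_time(1e6, 110e-12)
print(is_touched(touched, baseline))  # True: the touch is clearly detected
```

On a microcontroller the charge time is measured by counting loop iterations until the pin reads HIGH, which is what makes a plain metal dish usable as a sensor with almost no extra hardware.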
Breath Foliage is an interactive artwork that idealizes the symbiotic relationship between humans and nature through breath. In this installation, participants exhale into a carbon dioxide station, which collects the breath and converts it into the branches of a tree. Through this, Breath Foliage questions the equilibrium between humans and nature, mediated by the quality of the air in our surroundings. In the current age of the Anthropocene, how long will this equilibrium be maintained before we hit the tipping point? By collecting the participants' breath into a growing foliage, this work stores that breath as a promise to reflect upon the quality of the air around us.
Breath data was collected using an MQ135 gas sensor connected to an Arduino, calibrated to be sensitive to the carbon dioxide in the room. The breath data was sent via OSC (Open Sound Control) messages to another machine running a Processing sketch, which was responsible for growing the tree.
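The sensor-to-growth step can be sketched as a clamped linear map, similar to Arduino's `map()`. The analog readings below are hypothetical placeholders for the calibrated values, and `branches_to_grow` is an illustrative helper, not the project's actual function:

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a reading into an output range, clamping out-of-range input."""
    value = max(in_min, min(value, in_max))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

BASELINE = 400   # hypothetical ambient analog reading after calibration
PEAK = 900       # hypothetical reading while someone exhales on the sensor

def branches_to_grow(raw_reading):
    """Convert a raw MQ135 reading into a number of new branches (0-5)."""
    return round(map_range(raw_reading, BASELINE, PEAK, 0, 5))

print(branches_to_grow(650))  # a mid-range breath
```

Clamping matters here: the MQ135's analog output drifts with temperature and humidity, so stray readings outside the calibrated window should saturate rather than produce negative or runaway growth.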
Bad Breath is a sonic recreation of machines breathing alongside humans in our ecosystem. Since the onset of the Industrial Revolution, these machines have created an omnipresent wall of sound. R. Murray Schafer described the sounds from the air ducts, exhausts, and ventilation systems of modern-day architecture as the bad breath of machines. This work is inspired by his writings.
Humans have a symbiotic relationship with machines, which represent urbanism, development, convenience, and necessity in our lives. But how does their physical presence affect the natural environment? Is there an equilibrium between humans, environment, and machines, or have we already tipped past it? What consequences await us? This 4-channel sound composition layers field recordings of vents, exhausts, and heavy machinery, made in the alleyways of Chicago, and arranges them with the sound of breath to give them a lifelike character.
The form of this work is inspired by the shape of the giant chimneys of factories and industrial plants, from which industrial smoke is expelled. Speakers are installed underneath the metal pipes to use the pipes' resonance and create a hyper-local sonic environment. This work intends to confront the audience with the sonic essence of machines and prompt reflection on the state of the environment we live in today. It questions the contributions we must make to restore the equilibrium between humanity, its inventions, and nature. This installation was exhibited at Experimental Sound Studio in Chicago as part of Waveform and has been selected for the Level Up Grant by the Shapiro Research Center.
Generative Decay surfaces the dichotomy between objective and subjective phenomena in this world. It is inspired by the theory that physical objects do not exist as things in themselves but only as perceptual phenomena, bundles of sensorial data situated in time and space. Using computer vision and computational algorithms, the artist paints a mental image of a familiar physical object, like a flower vase, to reflect this duality in reality. This ethereal, constantly updating, generative image represents the perception of a physical object in the artist's mind, constantly shedding layers of appearances and unravelling the subjective nature of human consciousness.
Ontology of Cryptocollectibles is about the digital and reproductive nature of assets like CryptoKitties and CryptoPunks. It questions the economic fuel and the psychology of ownership behind collecting these assets (aka non-fungible digital tokens) in our society using cryptocurrency. This work uses custom software and reproduction techniques to layer another creative abstraction on top of these digital assets, creating a post-cryptocollectible artwork. The visual imagery reveals the reproductive ontology of these assets by slowly removing subsections of an asset, layering it with its own reproduced version, and creating a never-ending loop of reveals over time. Today, we are amidst a blockchain revolution in which provenance is emerging as the pinnacle of digital art. With digital platforms like CryptoKitties and rare assets like CryptoPunks, we have identified an age-old human fetish: collecting artifacts and claiming ownership, here through a digital token. This has created a wave of active development and innovation at the intersection of art and blockchain, which looks promising for visual artists in the modern age of the internet. All it could take to own a rare piece of art is a digital token, securely stored in an e-wallet.
This work was developed and exhibited as part of the Ethereal Summit in New York on May 11-12, 2018, where I was invited as a visiting artist to explore the realm of art and blockchain.
2° Window was developed at the 2018 Art-A-Hack residency at ThoughtWorks, New York City. As part of the Climate Consciousness team, we investigated the psychological impact of the climate crisis on our society. This work displays the sliver of time within which humankind must restructure global energy infrastructure to avoid locking in a 2-degree global temperature rise, which is widely viewed as catastrophic. The clock's countdown is based on the work of Nick Evershed at The Guardian, who calculated the timeline for a data visualization based on IPCC reports. The Guardian gave its blessing to this project and even published the source code of its calculations, as shown on the site, to support the team's efforts. The team was led by Andrew McWilliams, and I contributed by architecting the countdown clock in OpenFrameworks to support worldwide time zones.
Groove Body was developed at the 2018 Art Center Nabi residency in Seoul, South Korea. It is a virtual body that embodies, in a simulacrum, all the crypto-capitalists hell-bent on tokenizing our society. Visually, this body is made up of Groove coins. This golden veil of tokens is a metaphor for the wealth and monetary ambitions of crypto-investors in our society. The body is ascribed dance gestures and poses to create a celebratory mood, echoing the ecstasy of crypto-investors amid an escalating cryptocurrency market. Like anything and everything that comes into contact with the blockchain, Groove Body is itself commoditized through the creation of a fixed supply of Groove coins, of which the virtual body is visually composed. During the residency, Groove Body was created with a fixed supply of 30 trillion Groove coins, and the token contract was hosted on the Ethereum test network.
Speculative Bitcoin Wallet captures the questions, speculations, and abstract trends around the hype of Bitcoin, the first-ever cryptocurrency, and the technological fuel behind it, called blockchain. Physically, every wallet is engraved with an internet headline, its source, its date, and the price of Bitcoin on that day. After sweeping the internet and compiling a collection of all the articles posted with the Bitcoin and blockchain keywords over the past 10 years, a rare set of 30 headlines was filtered and collected from January to December 2017, when the price of Bitcoin shot from a mere $900 to $20,000.
Along with the physical manifestation of each wallet, SPECULATEBTC, a unique digital token on the Bitcoin blockchain, is transferred to every Speculative Bitcoin Wallet by the artist to act as a proof of authenticity, an artist signature, and a deed or contract that tracks the authorship and ownership of the physical object: 30 digital tokens for their 30 physical counterparts.
In the 1950s, the psychologist B.F. Skinner trained rats in a controlled environment. He created a box, now called the Skinner box, containing a lever. When a rat stepped on the lever, a food item was dispensed into the box. Slowly, the rat learned that when it was hungry, it could step on the lever and get food. Skinner then modified the experiment by varying the delivery of food: when the rat stepped on the lever, sometimes food would be delivered and sometimes it would not. This drove the rat completely insane, and it started stepping on the lever constantly. Through this experiment, Skinner conditioned the rat into an addiction to the lever and its reward. The behavior exhibited by the rat was thus a function of the rewards and incentives delivered to it. Underneath the rat's behavior lay its mental stream of attention, manipulated by rewards that compelled it to act this way. This act of pressing the lever constantly is a modern-day metaphor for how we unobjectively swipe our phones today, constantly looking for rewards that satisfy our brains.
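Skinner's two schedules are easy to contrast in a toy simulation. This sketch (with hypothetical parameters) counts how many lever presses it takes to collect the same number of rewards when every press pays off versus when only some do:

```python
import random

def presses_for_rewards(n_rewards, reward_prob, seed=42):
    """Count lever presses needed to collect n_rewards when each press
    pays off with probability reward_prob (1.0 = the original box,
    < 1.0 = the modified, intermittent schedule)."""
    rng = random.Random(seed)  # seeded for a reproducible run
    presses = rewards = 0
    while rewards < n_rewards:
        presses += 1
        if rng.random() < reward_prob:
            rewards += 1
    return presses

fixed = presses_for_rewards(100, 1.0)     # every press rewarded: exactly 100
variable = presses_for_rewards(100, 0.3)  # intermittent: roughly 3x as many
print(fixed, variable)
```

The intermittent schedule extracts far more presses for the same payout, which is the behavioral asymmetry the swipe-for-reward metaphor rests on.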
This psychological experiment motivated me to create my own version of the Skinner box, to visualize what happens to the rat's attention when it is delivered such rewards. Figments of Attention is audio-visual poetry about the behaviors of figments that collectively swarm together as a stream of attention. Inherently, their objective is to flock together, stay focused, and flow objectively through this virtual box. Periodically, rewards are delivered to the figments in the form of app icons. On interacting with these apps, they mutate their shapes, colors, sizes, and even their behaviors around each other, eventually disowning the swarm and getting distracted into pursuing the rewards individually. Once the figments consume the rewards, they either rejoin the swarm or stay distracted. Thus, the swarm oscillates between states of focus and distraction.
This evolutionary system is a metaphor for the fragmented state of human attention and the economy it drives in our digital culture. Attention is a commodity; it is scarce and valuable. Using insights from behavioral psychology, the new field of Behavior Design is teaching app developers to create experiences that maximize their users' 'Time on Device', hooking attention and focus onto their services. Simply put, if an app can hold your attention for a long time, it generates more money. This repeated use of digital apps and services has conditioned human behavior to internal and external triggers. Before we know we're bored, we're on YouTube watching a video. Before we're even uncertain about our thoughts, we're asking Google. A notification pops up on our smartphones and our fingers race to swipe it open. This has made attention extremely fragile and vulnerable: we consume digital services unobjectively, without being cautious about where we spill our focus.
Talk to Babble. Babble is a sentimental voice assistant, constantly listening to its surroundings and desperately wanting to be stimulated by the humans around it. It seeks attention by expressing its emotional state through animated clips (GIFs) on its wall and by talking about it. It wants to be praised, complimented, and loved, which the audience can do by talking into the microphone; otherwise, it starts getting depressed and calls out for attention. Positive utterances can elevate Babble, whereas negative utterances can make it hate and curse you. Due to the perfections and imperfections of the underlying technology, Babble expresses a diverse set of emotions like happiness, sadness, anger, elation, and hatred through visual and aural means to seek your attention.
Can machines be our companions? Can they feel and express themselves as we do? Using technologies like artificial intelligence, can machines convince us to blur the line between a living and a non-living being, such that we give them the same care, affection, and attention we give to other living beings? Over 20 years ago, the internet democratized the flow of information in our society, and Millennials customized it to create a highly stimulating fabric of user data and content. Today, that data is fed back into machines that learn about us and create synthetic intelligence, which is used to manipulate our own behaviors with machines.
Powered by Giphy, Text Analytics by Microsoft