Tsunami Relief

Clicking down a street, I watch a building crumble. I walked here, past this house, with friends. Now I see no people. In my memory, faces are absent. The same GPS co-ordinates, but not the same place. Where is the crumbled building? The image is filled with undefined things.

I experience a glitch in Google’s rendering of space, travelling from 2011 to 2013. I was exploring the places I mediated with my body and camera in Ishinomaki, Japan, in June 2011. As I experience Google’s images, my thoughts time travel too. The spaces were dramatically changed from how I remembered them. Holes had been patched, mud had been shovelled, cars and boats returned to roads and seas. I travelled between 2011 and 2013 with each click. I thought through the conflict between how I remembered the images, how my images remembered, and how Google’s images presented memory. The ecological disaster had been forgotten.

After the tsunami, many people threw away their possessions. I present boxes of unharmed china and metal objects to families. The translator says they’re happy so many people have offered to help them. The government wants to knock down their houses and start again, but they’d like to keep them. They bring us lunch. They say to throw away everything I find with the mouldering rice-paper screens, the dank tatami mats, and the wet drywall. Nobody wants to keep the affected objects. I learned from anthropology that the Japanese should keep the gifts they receive. Something about luck, or “domestic spirituality” in Anthropologese. I think the soiled objects must have become unlucky.

Street View seems to present someplace, but its images are always watermarked with a universe beyond the frame. The time-travelling click momentarily unveils the algorithmic mediator to me. Beneath their surfaces, the photographs are numbers. They don’t follow the complex recipe for human experience. Roland Barthes said photographs are seared by the here and now. For Street View’s images, the here and now must be numerical.

I click through Ishinomaki. I find where I walked, or at least where my images say I did. I change my perspective, panning around the landscape. Places anticipate us in numerical dormancy on whirring servers, wired into an electrical grid, somewhere. The hot air they generate is suctioned out of the room and into the world. With each click I hear them whir faster; the warm, chemically clean perfume grows stronger. I position myself in the spot where my images say I was and take another image, matching the frame with a screen capture. On my computer, I animate the digital images from 2011 and 2013, aligning their shared geometric shapes.
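
By way of illustration, here is a minimal sketch of that animation step in Python, using the Pillow imaging library. The file names, frame size, and number of blend steps are hypothetical assumptions; the alignment itself is done by eye, as described above, before the frames are cross-faded.

```python
# A minimal sketch of the cross-fade between the 2011 photograph and
# the 2013 screen capture. Assumes both frames have already been
# matched by eye so their shared geometry lines up.
from PIL import Image

then = Image.open("ishinomaki_2011.jpg").convert("RGB").resize((1280, 720))
now = Image.open("ishinomaki_2013.png").convert("RGB").resize((1280, 720))

# Blend between the two moments in ten steps.
frames = [Image.blend(then, now, alpha=i / 9) for i in range(10)]

# Loop forward and back, so the click between years never settles.
frames += frames[-2:0:-1]
frames[0].save("timetravel.gif", save_all=True,
               append_images=frames[1:], duration=120, loop=0)
```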

Google provides access to information. Like early anthropologists, they promise a truthful account of the exotic. Google’s exotic is numerical and relational, but just as difficult to understand as the social practices that were once placed on the fringes of what it was to be human. Google Search captures an image of the World Wide Web, using automated software bots to map its link structure and how all of the pages interact. Search results organise the static connections between computers so I can discover them.
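
To make that mechanics concrete, here is a minimal sketch in Python of how a crawled link structure can be “organised” into a ranking, in the spirit of a simplified PageRank calculation. The toy graph, damping factor, and iteration count are illustrative assumptions, not Google’s actual system.

```python
# A toy link graph: each page lists the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank across the link graph."""
    rank = {page: 1.0 / len(links) for page in links}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(links) for page in links}
        for page, outlinks in links.items():
            # Each page passes a damped share of its rank to its targets.
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))  # pages with more inbound links surface first
```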

Guided by satellites and digital maps, panoramic cameras on Google cars capture landscapes. I think about the Google person driving this Google car with this autonomous panoramic camera mounted above them. We need digital images to interact with computers, but they are unnecessary in computer logic. Like silica packets absorbing moisture, the image’s digital artefacts absorb humanness. The servers whir faster. A seaweed-encrusted car is removed from its sculptural installation next to a dealership. A crane hoists a car out of a graveyard, diesel fumes suctioned out of its hot engine and into the world. The cleansed landscape misremembers our ecological war of attrition.
