Wednesday, April 18, 2012

Andy Warhol Presages 3D Depth Reconstruction

How does 3D depth reconstruction go from desktop semantics (you know, clutter, half-filled coffee cups, keyboards, and lots of wires) to Andy’s Liz and Marilyn portraits? See if you can guess.

Kinect vision repetition

Andy’s comments, so to speak

Though of course this is not news to anyone working in computer vision, it raises the question: did 1968 kick off a visual precursor to 3D depth imaging before it was what it was (is)?

Desktop semiotics and narcissistic semiotics do not seem to recognize one another as the semiotic kissing cousins that they are. The former is focused on the thing in front of you, usually ultimate geekware, and the latter is the perennial gaze of the artist at him- (or her-) self ad nauseam. Now that we have machine vision, what does that do to the notion of the “male gaze,” the “viewer’s gaze,” or even the “subject’s gaze”? This is not a trivial question, as much of art history is concerned with the gaze, power relations, and subjectivity.

These images are actually from serious experiments in depth mapping using the Kinect. But here is where the functionality of depth mapping begins to tread more deeply into the specificity of image making, which refracts back onto the machine vision itself.
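
For a rough sense of what the first step of such depth mapping looks like, here is a minimal sketch (Python with numpy) that back-projects a single Kinect depth frame into a camera-space point cloud. The intrinsic values FX, FY, CX, CY below are assumed, approximate Kinect v1 defaults, not calibration data from these experiments.

```python
import numpy as np

# Assumed, approximate Kinect v1 intrinsics for a 640x480 frame;
# a real system would use calibrated values.
FX, FY = 525.0, 525.0      # focal lengths in pixels
CX, CY = 319.5, 239.5      # principal point

def depth_to_point_cloud(depth_m):
    """Back-project a depth frame (meters) into camera-space XYZ points.

    Pixels with depth 0 (no Kinect reading) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical usage, given a saved depth frame:
# cloud = depth_to_point_cloud(np.load("depth_frame.npy"))
```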

Red and blue lines of virtual goo paint on the grey head don’t exactly exist in the smaller, more camera-like images at the top.

If you can start painting with paint that isn’t actually there, what does this mean? Of course one could say that this already exists in certain software programs, and sure, you can paint in software. But can you paint in real-world 3D scenarios? Not in prime time.
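
Mechanically, painting with paint that isn’t there means writing color into the reconstructed model rather than onto any physical surface. Here is a minimal sketch, assuming the model is held as a point cloud with per-point colors; the paint_stroke function and its brush parameters are hypothetical illustrations, not anything from the research.

```python
import numpy as np

def paint_stroke(points, colors, brush_center, brush_radius, rgb):
    """Tint every reconstructed surface point within brush_radius of
    brush_center. The stroke lives only in the model, not in the room."""
    dist = np.linalg.norm(points - brush_center, axis=1)
    colors[dist < brush_radius] = rgb
    return colors

# Hypothetical usage: paint a red dab on the scanned head.
# points: (N, 3) reconstructed surface samples; colors: (N, 3) uint8
# colors = paint_stroke(points, colors,
#                       np.array([0.0, 0.1, 0.8]), 0.02, (255, 0, 0))
```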

This research originally came from the ACM research paper "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera."

Registering and Segmenting Real World Objects

This is the amazing scene of the real-world red teapot, the virtual green teapot, and the collusion between the two teapots.

Real-world red teapot

Red teapot being removed, leaving the green virtual teapot in its place

Green virtual teapot hanging out next to the real-world red teapot lid

This means that once it is segmented, the object can be tracked independently. So this is a segmented teapot.
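
One crude way to pull a moved object out of the scene, sketched below as a rough stand-in for the paper’s segmentation rather than a transcription of it, is to compare the live depth frame against a rendering of the fused model and keep the largest patch of disagreement.

```python
import numpy as np
from scipy import ndimage

def segment_moved_object(model_depth, live_depth, thresh=0.03):
    """Mask pixels where the live depth disagrees with the fused model
    by more than `thresh` meters, then keep the largest connected blob
    (presumably the teapot someone just picked up)."""
    valid = (model_depth > 0) & (live_depth > 0)
    moved = valid & (np.abs(live_depth - model_depth) > thresh)
    labels, n = ndimage.label(moved)
    if n == 0:
        return np.zeros_like(moved)
    sizes = ndimage.sum(moved, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```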

Red teapot being placed back in real space where the green virtual teapot exists

Re-registered teapot

Then the object (i.e. the teapot) gets re-registered and calibrated so it comes together. Notice the green virtual teapot has a slight outline of the real-world red teapot sticking out from underneath it.

This is related to the paper "KinectFusion: Real-Time Dense Surface Mapping and Tracking," which describes “tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available.” This means that “in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time.”
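
For readers who want the gist of ICP without the GPU pipeline, below is the textbook point-to-point core in Python with numpy and scipy. KinectFusion’s actual tracker is a coarse-to-fine point-to-plane variant running over all the depth data, so treat this strictly as a sketch of the underlying idea.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each live point to its
    nearest model point, then solve for the best rigid transform via
    the Kabsch/SVD method. Returns rotation R and translation t such
    that R @ src + t approximately aligns src with dst."""
    tree = cKDTree(dst)
    _, idx = tree.query(src)            # nearest-neighbor correspondences
    matched = dst[idx]
    src_c, dst_c = src.mean(0), matched.mean(0)
    H = (src - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Iterate: src = (R @ src.T).T + t, repeating until the alignment
# error stops shrinking.
```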

I don’t think Andy saw that part coming.


