Web-Capture



For ARTD 410: Art and Surveillance
Applying motion capture to three public webcams: one in Japan, one in the Czech Republic, and one at a doggy daycare in San Diego, California.

I’ve been interested in how computers see the world. Do they really understand what they’re seeing on camera? While this program doesn’t outright identify what it’s seeing, it can detect movement and changes in the RGB pixel count. The result is a view of a reality only a computer could conjure. The RGB pixel counts generate a tone that changes when a new object enters the frame. When that object is detected moving, blob detection outlines it in red.
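Below is a minimal sketch of the kind of motion and pixel-change detection described above, written in Processing (Java mode). It is only an approximation of the project: it assumes a local Capture source from Processing’s video library (standing in for the public webcam streams), and the threshold value, tone mapping, and single red bounding box used as a “blob” outline are illustrative assumptions rather than the original implementation.

```processing
// Illustrative sketch: frame differencing, a tone driven by overall RGB
// change, and a red outline around the moving region. The webcam streams,
// thresholds, and sound mapping in the actual project may differ.
import processing.video.*;
import processing.sound.*;

Capture cam;
PImage prevFrame;
SinOsc tone;                 // pitch follows the total RGB change per frame
float threshold = 40;        // per-pixel difference needed to count as motion

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);   // stand-in for a public webcam feed
  cam.start();
  prevFrame = createImage(width, height, RGB);
  tone = new SinOsc(this);
  tone.play();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);

  cam.loadPixels();
  prevFrame.loadPixels();

  float totalChange = 0;
  int minX = width, minY = height, maxX = 0, maxY = 0;

  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int i = x + y * width;
      color now    = cam.pixels[i];
      color before = prevFrame.pixels[i];
      float diff = abs(red(now) - red(before))
                 + abs(green(now) - green(before))
                 + abs(blue(now) - blue(before));
      totalChange += diff;
      if (diff > threshold) {       // this pixel is part of a moving region
        minX = min(minX, x);  maxX = max(maxX, x);
        minY = min(minY, y);  maxY = max(maxY, y);
      }
    }
  }

  // A new object entering the frame raises the average change, shifting the tone.
  float avgChange = totalChange / (width * height);
  tone.freq(map(constrain(avgChange, 0, 100), 0, 100, 100, 800));

  // Outline the moving region in red (a crude stand-in for blob detection).
  if (maxX > minX && maxY > minY) {
    noFill();
    stroke(255, 0, 0);
    rect(minX, minY, maxX - minX, maxY - minY);
  }

  prevFrame.copy(cam, 0, 0, width, height, 0, 0, width, height);
  prevFrame.updatePixels();
}
```

Pointing this at an online stream would instead mean periodically calling loadImage() on the webcam’s snapshot URL; the local Capture source here is just the simplest self-contained setup.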
While I find Processing to be a somewhat limiting program, it has helped me develop this project and expand my interest in computer “sight” and how to translate it into human comprehension. When a person views the world, they have background information to help inform what they’re seeing, and as a result they are able to construct meaning. The computer, however, doesn’t have that luxury. It constructs meaning from what it directly detects, without any background information to supplement its detection.
My project touches on this idea. While the computer is able to detect humans and dogs, it doesn’t recognize them as such. What we identify on screen comes from our own background information supplementing our view: we know these are humans and dogs in particular places. We project our own meaning onto the actions on the webcam as we watch these people and animals mill about, moving with their own purpose. The computer, however, doesn’t see these movements as meaningful. It doesn’t have that background information or the weight of importance we project onto the people on screen. Its “sight” works differently, assigning meaning to the movement of what are, to it, arbitrary things.

