CVPR brain dump, part 1 of n

Sitting on the floor outside the poster session -- so many ideas and whatnot have gotten into my head that I need to take time to get some of them out again. This will kind of be in reverse chronological order.

CVPR 2016

Is there someone in my current field of view wearing a shirt supporting the bid to hold CVPR 2016 in Seattle? Yes! There is! The vote is tonight: Seattle vs. Los Angeles. (Also ladies vs. gentlemen, with the Seattle organizing committee made up almost entirely of women.) Someone left a pamphlet on a table promoting the LA side of things that featured a misleading snowcapped mountain range. I know those mountains (San Gabriel?) can sometimes have snow, but I think they're just jealous and have snowy mountain envy or something.

Bubbles!

There was a talk called Fine-Grained Crowdsourcing for Fine-Grained Recognition from Stanford that I went to because I wanted to see what "crowdsourcing" meant to these people. I've seen it mean:

a) "find things on the internet that people posted for their own personal reasons and use them for research"
b) "pay people to do perception-related tasks to explore how humans understand images"
c) "pay someone who's not a grad student to do a bunch of menial labor like image labeling or segmentation"
d) "attempt to disguise menial labor as fun so that you don't even have to pay"
e) "involve the crowd with something they're kinda interested in/personally invested in, possibly through a game"

To my pleasant surprise, these folks had designed a pretty reasonable game, which they simply paid people on mturk to play. The game involved categorizing a blurry black-and-white picture of an object/animal into one of two categories, using "bubbles" (circles) to expose parts of the image in full color and resolution to help make the distinction. The player's goal is to guess the category correctly while using as few bubbles as possible. The underlying goal is to learn which parts of an image are important for distinguishing very similar things, like two closely related bird species.
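To make the mechanic concrete, here's a minimal sketch (my own, not the authors' code) of how a bubbled view could be rendered: a heavily blurred grayscale base image, with each bubble restoring the original color and resolution inside a circle. All names and parameters here are hypothetical.

```python
# Minimal sketch of the "bubbles" reveal mechanic, assuming numpy + Pillow.
# Purely illustrative; the paper's actual interface may differ.
import numpy as np
from PIL import Image, ImageFilter

def render_bubbled_view(image_path, bubbles, blur_radius=8):
    """Composite a degraded base image with circular 'bubble' reveals.

    bubbles: list of (cx, cy, r) circles in pixel coordinates.
    """
    original = Image.open(image_path).convert("RGB")
    # Degraded view the player starts from: grayscale + heavy blur.
    degraded = (original.convert("L")
                        .filter(ImageFilter.GaussianBlur(blur_radius))
                        .convert("RGB"))

    orig = np.asarray(original)
    base = np.asarray(degraded)

    # Boolean mask that is True inside any bubble.
    h, w = base.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for cx, cy, r in bubbles:
        mask |= (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2

    # Inside a bubble: original pixels; outside: degraded pixels.
    out = np.where(mask[..., None], orig, base)
    return Image.fromarray(out)

# e.g., two bubbles, one near the head and one near the wing of a bird:
# render_bubbled_view("bird.jpg", [(120, 80, 30), (200, 160, 25)]).show()
```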

It's a lot like von Ahn's game Peekaboom, in which you expose parts of an image until your partner can guess what it is. In this bubbles game, the game mechanic of exposing only what you need to categorize an image is directly tied to the computer vision problem of figuring out which features/parts the computer should look at to make the same decision automatically.
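As a toy illustration of that tie (again my own sketch, not the paper's actual model), you could aggregate the bubbles that many players spent on the same image into a heatmap, so the regions players kept paying to reveal stand out as candidate discriminative parts:

```python
# Toy aggregation of bubbles from many players into a heatmap.
# Purely illustrative; the paper's actual use of the data is more involved.
import numpy as np

def bubble_heatmap(shape, bubble_sets):
    """Fraction of players who revealed each pixel.

    shape: (height, width) of the image.
    bubble_sets: one list of (cx, cy, r) circles per player.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for bubbles in bubble_sets:
        # Each player contributes at most 1 per pixel, even with overlaps.
        mask = np.zeros((h, w), dtype=bool)
        for cx, cy, r in bubbles:
            mask |= (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
        heat += mask
    return heat / max(len(bubble_sets), 1)

# Pixels most players chose to reveal are good candidates for where a
# classifier should extract features for this fine-grained distinction.
```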

They called it a "machine-human collaboration", which I am 100% in support of. The more we expose of how these algorithms work and why, the more we can let humans identify and correct the silly assumptions computers sometimes make, and have the humans guide the computers to success!

More to come!

SAVE and publish, damn you.... slow conference internet...
