CVPR brain dump, part 5 of 5

I think this is going to be the last post about this year's CVPR! It'll be about the fun times I had at the "What makes a good ground truth dataset?" workshop, which is pretty relevant, since I make and think about crowdsourcing games that change how people collect data, hopefully for the better.

Here's a list of many of the datasets used in computer vision research. The field is essentially driven by datasets -- designing and evaluating algorithms to "work" on these datasets, and basing success and publishing on improving the numbers by itty bitty amounts. That's not necessarily a good way to do things, so this workshop was held to discuss how ground truth datasets get made, how they're used in research, how to create new high-quality datasets, and what the advantages and disadvantages are of letting datasets drive the field.

General Ideas

  • Big data does not necessarily imply good data.
    • It needs to be correct.
    • And it needs to have high coverage/be representative of what data in the real world looks like.
  • Is all the data we need to solve every task already out there (on the internet), waiting to be found and used? I think new data still needs to be explicitly collected for certain tasks.
  • Data is biased.
    • Check out this Unbiased Look at Dataset Bias paper; there's a quick sketch of its "Name That Dataset" idea right after this list.
    • We need more datasets with more universal coverage.
      • Have you heard of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) datasets? It's the same problem psychology has with experiments that are claimed to generalize but might only apply to college undergrads.
    • What are different ways to get better coverage?
  • Correctness of data is important.
    • It can be hard work to guarantee correctness, and sometimes it simply can't be guaranteed.
    • Crowdsourced data is especially hard to verify; the second sketch after this list shows the usual first line of defense.
  • With crowdsourcing in particular, these are some things we might want to get a better handle on:
    • What are good incentives? Money, curiosity, social connection...
    • What are ways to instruct users to contribute data?
    • What are ways to encourage certain things (e.g. contributing more useful data)?
    • What are the right kinds of feedback to give?
      • Feedback to show users what happened with their data and teach them to contribute more useful data
      • Feedback to encourage/motivate/delight
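
A quick aside on the dataset-bias bullet above: the Unbiased Look at Dataset Bias paper makes its point with a "Name That Dataset" game -- train a classifier to guess which dataset an image came from, and if it beats chance by a wide margin, the datasets carry distinctive biases. Here's a minimal sketch of that idea in Python; the three "datasets" and their random features are placeholders I made up so the snippet runs on its own, not anything from the paper or the workshop.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder "image features" for three hypothetical datasets.
    # A real experiment would use actual image descriptors (GIST, CNN features, ...).
    features = [rng.normal(loc=i, scale=1.0, size=(200, 64)) for i in range(3)]
    X = np.vstack(features)
    y = np.repeat(np.arange(3), 200)  # label = which dataset the image came from

    # Cross-validated accuracy at guessing the source dataset.
    clf = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(clf, X, y, cv=5).mean()
    print(f"name-that-dataset accuracy: {accuracy:.2f} (chance is {1/3:.2f})")

If that number is way above chance, the classifier can tell your datasets apart, which means they're biased in ways your algorithms can exploit.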
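
And on the crowdsourced-correctness bullet: the usual first line of defense (common practice, not something specific from the workshop) is redundancy -- collect several labels per item, take a majority vote, and flag items where workers disagree. A tiny sketch with made-up labels:

    from collections import Counter

    # item -> labels from several workers (hypothetical data)
    raw_labels = {
        "img_001": ["cat", "cat", "dog"],
        "img_002": ["dog", "dog", "dog"],
        "img_003": ["cat", "bird", "dog"],
    }

    for item, labels in raw_labels.items():
        winner, count = Counter(labels).most_common(1)[0]
        agreement = count / len(labels)
        # Flag items where the winning label didn't get a clear majority.
        flag = "  <- low agreement, needs a closer look" if agreement <= 0.5 else ""
        print(f"{item}: {winner} (agreement {agreement:.2f}){flag}")

None of this guarantees correctness -- it just tells you where to look harder, which is part of why the incentive and feedback questions above matter so much.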

Middlebury Benchmarks

Daniel Scharstein gave an excellent talk about lessons from publishing and maintaining these Middlebury benchmarks. In addition to the datasets themselves, they also maintain an online benchmark evaluation where everyone can see how different methods compare, and it even captures "snapshots" of state-of-the-art research over time.
  • Good things about having benchmarks:
    • The community has a shared challenge to focus on.
    • The benchmarks can drive research: they were designed to be super challenging, and then people find ways to solve those challenges!
  • Bad things:
    • These datasets are pretty small, so it's hard to tell how general the solutions are. It's easy to overfit to the datasets.
    • People can focus too much on ranking.
    • People can focus too narrowly on what the benchmark is about and not innovate in other valuable directions...
      • because it's too outside the scope of what the rest of the vision community expects
      • or because there's nothing to evaluate on if moving in too new of a direction.

Middlebury + Computer Vision as one big game

These are some ideas and quotes that I couldn't help but relate to game design.
  • Obviously, the online ranking of how well your algorithm does in a benchmark is a straight up leaderboard. And that motivates a lot of people.
  • Daniel also talked about what it takes to maintain the online benchmarks: they had to put a lot of work into the UI, give it a nice compact representation, and build nice visualizations of the results.
    • They had to make it easy to participate so that people actually would, and would find value in doing so.
  • He also said a number of things about making the datasets themselves:
    • "We're having a lot of fun creating these datasets!"
      • This was in reference to making a new stereo image dataset for specular objects, where they took a motorcycle, photographed it in all its shiny glory, and then covered it in matte clay spraypaint stuff and took more pictures, including ground truth depth images.
      • The fun and challenge of capturing data is exactly what players experienced in PhotoCity and what I hope to recreate in my new project.
    • "Creating ground truth data is challenging and fun!"
      • This sounds similar to the quote above, but this one was in reference to inventing new datasets to challenge the computer vision community -- exactly like designing more challenging levels for more skilled players.

How to proceed

Carl Vondrick's last slide of the last talk of the workshop sums things up pretty nicely: 
  • share datasets
  • share annotation tools and code
  • share lessons
