ICCV Workshop

Caltech 256 Challenge

Confusion in a hierarchical context: when an algorithm fails, does it fail gracefully? Do its mistakes fall between semantically related categories, i.e. does the pattern of failure show generalization?

To Do:

  1. Build or find the category tree(s)
  2. Define a metric derived from the tree(s)
  3. Black box: confusion matrix -> performance score (see the sketch below)
  4. Make a new test set (or should we just parcel up Caltech-256 and trust people not to cheat, as PASCAL does?)
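A minimal sketch for item 3, assuming the tree is stored as a child -> parent dictionary and the confusion matrix as a NumPy array (this representation and the names tree_distance / hierarchical_score are placeholders, not the metric the challenge has settled on): each confusion is charged the number of tree edges between the true and predicted categories, so confusing two kinds of dog costs less than confusing a dog with a truck.

  import numpy as np

  def tree_distance(parent, a, b):
      # Number of edges between nodes a and b; `parent` maps each node to its
      # parent, with the root mapping to None.
      def path_to_root(n):
          path = [n]
          while parent[n] is not None:
              n = parent[n]
              path.append(n)
          return path
      edges_from_a = {n: i for i, n in enumerate(path_to_root(a))}
      for j, n in enumerate(path_to_root(b)):
          if n in edges_from_a:                  # lowest common ancestor
              return edges_from_a[n] + j
      raise ValueError("categories are not in the same tree")

  def hierarchical_score(confusion, classes, parent):
      # Average tree distance between true and predicted class, weighted by the
      # row-normalized confusion matrix; 0 is perfect, larger is worse.
      C = confusion / confusion.sum(axis=1, keepdims=True)
      total = 0.0
      for i, true_c in enumerate(classes):
          for j, pred_c in enumerate(classes):
              total += C[i, j] * tree_distance(parent, true_c, pred_c)
      return total / len(classes)

Whether the number is reported as a cost (as here) or converted into a reward for near-misses is part of what the metric definition has to settle.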

Example approach with scores for comparison

  • Pyramid matching for baseline
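Not challenge code, just a toy illustration of what that baseline computes: a pyramid match kernel in the spirit of Grauman and Darrell, comparing two sets of feature vectors via histogram intersections at several grid resolutions (num_levels and finest_side are made-up parameters, not values from the Caltech-256 paper).

  import numpy as np

  def pyramid_match_kernel(X, Y, num_levels=4, finest_side=0.25):
      # X, Y: (n, d) and (m, d) arrays of nonnegative feature vectors.
      def histogram(points, side):
          cells = np.floor(points / side).astype(int)      # grid cell of each point
          keys, counts = np.unique(cells, axis=0, return_counts=True)
          return {tuple(k): int(c) for k, c in zip(keys, counts)}

      def intersection(hx, hy):
          return sum(min(c, hy.get(k, 0)) for k, c in hx.items())

      score, prev = 0.0, 0.0
      for level in range(num_levels):
          side = finest_side * (2 ** level)                # bins double each level
          inter = intersection(histogram(X, side), histogram(Y, side))
          score += (inter - prev) / (2 ** level)           # new matches get weight 1/2^level
          prev = inter
      return score

A kernel like this (or a precomputed kernel matrix) is the kind of thing the SVM step in the scripts list below would consume.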

To find out:

  • How are they running their challenge (i.e., how do they release the datasets)?

Link to Caltech-256 web page

Things To Put On The Page

  • The Trees themselves
  • Lists of Training and Test Sets
    • Correspond to a fixed set of random-number seeds used in the scripts (see the sketch after this list)
  • Is there a secret training set?
    • How do we use it?
    • Might not be necessary if we average over many trials
  • Condense 256 paper down to one summary page
  • Describe the new [what to call it] Metric
  • Scripts
    • Choosing files
    • Loading images
    • Generating classifier
    • SVM?
    • Confusion matrices
    • [what to call it] Metric
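A minimal sketch of the file-choosing script, assuming the standard Caltech-256 layout of one directory per category; the function name choose_files, the seed list, and num_train=30 are placeholders, not the fixed seeds that would actually be published on this page.

  import os
  import random

  def choose_files(dataset_dir, num_train, seed):
      # Reproducibly split each category's images into train/test lists:
      # the same seed gives the same split on every machine.
      rng = random.Random(seed)
      train, test = {}, {}
      for category in sorted(os.listdir(dataset_dir)):
          images = sorted(os.listdir(os.path.join(dataset_dir, category)))
          rng.shuffle(images)
          train[category], test[category] = images[:num_train], images[num_train:]
      return train, test

  # One trial per published seed; averaging results over several such trials is
  # what might make a separate secret training set unnecessary.
  splits = [choose_files("256_ObjectCategories", num_train=30, seed=s) for s in range(5)]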

Possible Trees To Use

  • WordNet: PASCAL has some WordNet software and so does Merrielle (see the sketch after this list)
  • Human-made (like the one in the Caltech256 paper)
  • For plants and animals, could use kingdom, phylum, class charts
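One way to pull tree structure out of WordNet, sketched with NLTK's WordNet reader (an assumption: it is not necessarily the WordNet software PASCAL or Merrielle has), with a deliberately naive word-sense choice.

  from nltk.corpus import wordnet as wn    # assumes nltk.download('wordnet') has been run

  def noun_synset(name):
      # Naively take the first noun sense of a category name.
      return wn.synsets(name, pos=wn.NOUN)[0]

  def hypernym_branch(name):
      # Path from the WordNet root down to the category: one branch of the tree.
      return [s.name() for s in noun_synset(name).hypernym_paths()[0]]

  def wordnet_distance(name_a, name_b):
      # Edge count between two categories in the noun hierarchy (None if unconnected).
      return noun_synset(name_a).shortest_path_distance(noun_synset(name_b))

  print(hypernym_branch("dog"))              # entity.n.01 -> ... -> dog.n.01
  print(wordnet_distance("dog", "wolf"))     # small: nearby in the tree
  print(wordnet_distance("dog", "truck"))    # large: distant categories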

PASCAL Challenge

Link to the 2007 Challenge

Link to the Visual PASCAL Challenge Home Page

Notes from Alex Berg on duplicates