Indexing local features and instance recognition


Indexing local features and instance recognition May 14 th, 2015 Yong Jae Lee UC Davis

Announcements PS2 due Saturday 11:59 am 2

Approximating the Laplacian We can approximate the Laplacian with a difference of Gaussians; more efficient to implement.
L = sigma^2 (G_xx(x, y, sigma) + G_yy(x, y, sigma))   (Laplacian)
DoG = G(x, y, k sigma) - G(x, y, sigma)   (Difference of Gaussians) 3
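A minimal numerical sketch of this approximation, assuming SciPy is available (the sigma and k values are illustrative, not prescribed by the slide):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(img, sigma=1.6, k=1.6):
    """Approximate the scale-normalized Laplacian with a DoG:
    DoG = G(x, y, k*sigma) - G(x, y, sigma)."""
    return gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)

# An impulse image: the DoG response is a center-surround "Mexican hat",
# negative at the center (the wider blur spreads mass away from it).
img = np.zeros((65, 65))
img[32, 32] = 1.0
dog = difference_of_gaussians(img)
```

Since both Gaussian blurs preserve the total intensity, the DoG response sums to (approximately) zero, which is what makes it a band-pass, Laplacian-like filter.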

Recap: Features and filters Transforming and describing images; textures, colors, edges 4 Kristen Grauman

Recap: Grouping & fitting Clustering, segmentation, fitting; what parts belong together? [fig from Shi et al] 5 Kristen Grauman

Recognition and learning Recognizing objects and categories, learning techniques 6 Kristen Grauman

Matching local features 7 Kristen Grauman

Matching local features? Image 1 Image 2 To generate candidate matches, find patches that have the most similar appearance (e.g., lowest SSD) Simplest approach: compare them all, take the closest (or closest k, or within a thresholded distance) 8 Kristen Grauman
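The simplest approach above can be sketched directly in NumPy (a brute-force sketch; real systems at scale use approximate nearest-neighbor search instead):

```python
import numpy as np

def match_ssd(desc1, desc2):
    """For each descriptor in desc1 (N x D), return the index of its
    nearest neighbor in desc2 (M x D) under sum-of-squared-differences."""
    # Pairwise SSD via broadcasting: (N, 1, D) - (1, M, D) -> (N, M)
    ssd = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    return ssd.argmin(axis=1), ssd.min(axis=1)

# Toy example: desc2 is a shuffled, slightly perturbed copy of desc1,
# so the correct matches are known (the inverse of the permutation).
rng = np.random.default_rng(0)
desc1 = rng.normal(size=(5, 8))
perm = np.array([3, 0, 4, 1, 2])
desc2 = desc1[perm] + 0.01 * rng.normal(size=(5, 8))
matches, dists = match_ssd(desc1, desc2)
```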

Indexing local features 9 Kristen Grauman

Indexing local features Each patch / region has a descriptor, which is a point in some high-dimensional feature space (e.g., SIFT) Descriptor's feature space 10 Kristen Grauman

Indexing local features When we see close points in feature space, we have similar descriptors, which indicates similar local content. Descriptor's feature space Query image Database images 11 Kristen Grauman

Indexing local features With potentially thousands of features per image, and hundreds to millions of images to search, how to efficiently find those that are relevant to a new image? 12 Kristen Grauman

Indexing local features: inverted file index For text documents, an efficient way to find all pages on which a word occurs is to use an index We want to find all images in which a feature occurs. To use this idea, we'll need to map our features to visual words. 13 Kristen Grauman

Text retrieval vs. image search What makes the problems similar, different? 14 Kristen Grauman

Visual words: main idea Extract some local features from a number of images e.g., SIFT descriptor space: each point is 128-dimensional Slide credit: D. Nister, CVPR 2006 15

Visual words: main idea 16

Visual words: main idea 17

Visual words: main idea 18

Each point is a local descriptor, e.g. SIFT vector. 19

20

Visual words Map high-dimensional descriptors to tokens/words by quantizing the feature space Quantize via clustering, let cluster centers be the prototype words Word #2 Descriptor's feature space Determine which word to assign to each new image region by finding the closest cluster center. 21 Kristen Grauman
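A bare-bones sketch of this quantization step, using plain Lloyd's k-means over toy 2-D "descriptors" (real vocabularies cluster 128-D SIFT descriptors at far larger scale, and the function names here are illustrative):

```python
import numpy as np

def quantize(descriptors, centers):
    """Assign each descriptor to its nearest cluster center (visual word)."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def kmeans(descriptors, k, iters=20, seed=0):
    """Bare-bones Lloyd's k-means to build the visual vocabulary:
    alternate assignment to nearest center and re-estimation of centers."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        words = quantize(descriptors, centers)
        for j in range(k):
            if (words == j).any():
                centers[j] = descriptors[words == j].mean(axis=0)
    return centers

# Two well-separated blobs of descriptors -> two visual words.
rng = np.random.default_rng(1)
descs = np.vstack([rng.normal(0, 0.1, (50, 2)),
                   rng.normal(5, 0.1, (50, 2))])
vocab = kmeans(descs, k=2)
words = quantize(descs, vocab)
```

New image regions are then labeled by calling `quantize` against the fixed vocabulary, exactly as the slide describes.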

Visual words Example: each group of patches belongs to the same visual word Figure from Sivic & Zisserman, ICCV 2003 22 Kristen Grauman

Visual words and textons First explored for texture and material representations Texton = cluster center of filter responses over collection of images Describe textures and materials based on distribution of prototypical texture elements. Leung & Malik 1999; Varma & Zisserman, 2002 23 Kristen Grauman

Recall: Texture representation example Windows with primarily horizontal edges; windows with primarily vertical edges; windows with small gradient in both directions. Dimension 1 = mean d/dx value, Dimension 2 = mean d/dy value.
          mean d/dx value   mean d/dy value
Win. #1          4                10
Win. #2         18                 7
Win. #9         20                20
Statistics to summarize patterns in small windows. 24 Kristen Grauman

Visual vocabulary formation Issues: Sampling strategy: where to extract features? Clustering / quantization algorithm Unsupervised vs. supervised What corpus provides features (universal vocabulary?) Vocabulary size, number of words 25 Kristen Grauman

Inverted file index Database images are loaded into the index mapping words to image numbers 26 Kristen Grauman

Inverted file index When will this give us a significant gain in efficiency? New query image is mapped to indices of database images that share a word. 27 Kristen Grauman
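A dict-based sketch of such an inverted file (the class name is illustrative). It answers the slide's question in code form: the gain is significant when each visual word occurs in only a small fraction of the database, so a query touches few entries instead of scanning every image:

```python
from collections import defaultdict

class InvertedIndex:
    """Map each visual word id to the set of database images containing it."""
    def __init__(self):
        self.index = defaultdict(set)

    def add_image(self, image_id, words):
        # Record each distinct word once per image.
        for w in set(words):
            self.index[w].add(image_id)

    def candidates(self, query_words):
        """Database images sharing at least one word with the query."""
        hits = set()
        for w in set(query_words):
            hits |= self.index.get(w, set())
        return hits

idx = InvertedIndex()
idx.add_image("img1", [3, 7, 7, 12])
idx.add_image("img2", [7, 42])
idx.add_image("img3", [99])
```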

If a local image region is a visual word, how can we summarize an image (the document)? 28

Analogy to documents Two example documents and the salient words highlighted on the original slide:

Document 1 (vision): "Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image." Highlighted words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical, nerve, image, Hubel, Wiesel.

Document 2 (trade): "China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value." Highlighted words: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, value. 29 ICCV 2005 short course, L. Fei-Fei

30

Bags of visual words Summarize entire image based on its distribution (histogram) of word occurrences. Analogous to bag of words representation commonly used for documents. 31

Comparing bags of words Rank frames by normalized scalar product between their occurrence counts: nearest neighbor search for similar images.
sim(d_j, q) = (d_j . q) / (||d_j|| ||q||) = sum_{i=1..V} d_j(i) q(i) / ( sqrt(sum_{i=1..V} d_j(i)^2) sqrt(sum_{i=1..V} q(i)^2) )
for a vocabulary of V words; e.g., d_j = [1 8 1 4], q = [5 1 1 0]. 32 Kristen Grauman
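This score is a cosine similarity between count vectors and can be computed in a line of NumPy; the sketch below uses the slide's example vectors:

```python
import numpy as np

def bow_similarity(d, q):
    """Normalized scalar product (cosine similarity) between two
    bag-of-words occurrence-count vectors."""
    d, q = np.asarray(d, float), np.asarray(q, float)
    return float(d @ q / (np.linalg.norm(d) * np.linalg.norm(q)))

d_j = np.array([1, 8, 1, 4])
q = np.array([5, 1, 1, 0])
score = bow_similarity(d_j, q)   # (5 + 8 + 1 + 0) / (sqrt(82) * sqrt(27))
```

Any image is maximally similar to itself (score 1), and images sharing no words score 0, which is what makes the measure usable for ranking.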

Bags of words for content-based image retrieval Slide from Andrew Zisserman Sivic & Zisserman, ICCV 2003 33

Slide from Andrew Zisserman Sivic & Zisserman, ICCV 2003 34

Scoring retrieval quality Query against a database of 10 images, of which 5 are relevant. For the ordered results: precision = #relevant returned / #returned, recall = #relevant returned / #total relevant. [Slide shows the resulting precision-recall curve, precision on the y-axis vs. recall on the x-axis.] 35 Slide credit: Ondrej Chum
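Tracing these two measures along the ranked list produces the precision-recall curve; a sketch with an illustrative ranking for the slide's 10-image, 5-relevant setting:

```python
def precision_recall(ranked_relevance, total_relevant):
    """Precision and recall after each position of a ranked result list.
    ranked_relevance: list of 0/1 flags, 1 = that result is relevant."""
    curve = []
    hits = 0
    for k, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        curve.append((hits / k, hits / total_relevant))
    return curve

# Hypothetical ordering of the 10 database images (1 = relevant).
curve = precision_recall([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], total_relevant=5)
```

By the end of the list recall necessarily reaches 1.0, while precision has fallen to 5/10: the usual trade-off the curve visualizes.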

Vocabulary Trees: hierarchical clustering Tree construction: for large vocabularies [Nister & Stewenius, CVPR 06] 36 Slide credit: David Nister

Vocabulary Tree Training: Filling the tree [Nister & Stewenius, CVPR 06] 37 Slide credit: David Nister

Vocabulary Tree Training: Filling the tree [Nister & Stewenius, CVPR 06] 38 Slide credit: David Nister

Vocabulary Tree Training: Filling the tree [Nister & Stewenius, CVPR 06] 39 Slide credit: David Nister

Vocabulary Tree Training: Filling the tree [Nister & Stewenius, CVPR 06] 40 Slide credit: David Nister

Vocabulary Tree Training: Filling the tree [Nister & Stewenius, CVPR 06] 41 Slide credit: David Nister

What is the computational advantage of the hierarchical representation bag of words, vs. a flat vocabulary? 42
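One way to see the advantage the slide asks about: with a flat vocabulary of V words, quantizing one descriptor costs V distance computations, while a tree with branching factor b and depth L (V = b^L leaf words) costs only b * L, i.e. logarithmic in V. A toy sketch (function names are illustrative):

```python
def flat_cost(vocab_size):
    """Distance computations to quantize one descriptor, flat vocabulary."""
    return vocab_size

def tree_cost(branching, depth):
    """Distance computations down a vocabulary tree: compare against the
    branching-factor children at each of the depth levels."""
    return branching * depth

# A vocabulary of b^L = 10^6 leaf words:
b, L = 10, 6
```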

Vocabulary Tree Recognition RANSAC verification [Nister & Stewenius, CVPR 06] 43 Slide credit: David Nister

Bags of words: pros and cons
+ flexible to geometry / deformations / viewpoint
+ compact summary of image content
+ provides vector representation for sets
+ good results in practice
- basic model ignores geometry: must verify afterwards, or encode via features
- background and foreground mixed when bag covers whole image
- optimal vocabulary formation remains unclear
44

Summary So Far Matching local invariant features: useful to provide matches to find objects and scenes. Bag of words representation: quantize feature space to make discrete set of visual words Inverted index: pre-compute index to enable faster search at query time 45

Instance recognition Motivation visual search Visual words quantization, index, bags of words Spatial verification affine; RANSAC, Hough Other text retrieval tools tf-idf, query expansion Example applications 46

Instance recognition: remaining issues How to summarize the content of an entire image? And gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? 47 Kristen Grauman

Instance recognition: remaining issues How to summarize the content of an entire image? And gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? 48 Kristen Grauman

Instance recognition: remaining issues How to summarize the content of an entire image? And gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? 49 Kristen Grauman

Vocabulary size Results for recognition task with 6347 images Branching factors 50 Nister & Stewenius, CVPR 2006 Kristen Grauman

Instance recognition: remaining issues How to summarize the content of an entire image? And gauge overall similarity? How large should the vocabulary be? How to perform quantization efficiently? Is having the same set of visual words enough to identify the object/scene? How to verify spatial agreement? How to score the retrieval results? 51 Kristen Grauman

Spatial Verification Query Query DB image with high BoW similarity DB image with high BoW similarity Both image pairs have many visual words in common. 52 Slide credit: Ondrej Chum

Spatial Verification Query Query DB image with high BoW similarity DB image with high BoW similarity Only some of the matches are mutually consistent 53 Slide credit: Ondrej Chum

Spatial Verification: two basic strategies RANSAC Typically sort by BoW similarity as initial filter Verify by checking support (inliers) for possible transformations e.g., success if find a transformation with > N inlier correspondences Generalized Hough Transform Let each matched feature cast a vote on location, scale, orientation of the model object Verify parameters with enough votes 54 Kristen Grauman
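The RANSAC strategy can be sketched with the simplest possible motion model, a pure 2-D translation, so that a single correspondence generates a hypothesis (an illustrative toy; the systems discussed here fit affine or projective models, which need 3 or 4 correspondences per hypothesis):

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iters=200, inlier_thresh=3.0, seed=0):
    """RANSAC sketch: hypothesize a translation from one random match,
    count how many other matches agree (inliers), keep the best."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, 0
    for _ in range(n_iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]                       # hypothesis from 1 match
        errs = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = int((errs < inlier_thresh).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# 80% of matches agree on a (10, -5) shift; 20% are outliers.
rng = np.random.default_rng(1)
pts1 = rng.uniform(0, 100, (50, 2))
pts2 = pts1 + np.array([10.0, -5.0])
pts2[:10] = rng.uniform(0, 100, (10, 2))            # corrupt 10 matches
t, n_in = ransac_translation(pts1, pts2)
```

The "success if > N inliers" criterion on the slide corresponds to thresholding `n_in` before accepting the retrieved image.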

RANSAC verification 55

Recall: Fitting an affine transformation
[x'_i]   [m1 m2] [x_i]   [t1]
[y'_i] = [m3 m4] [y_i] + [t2]
Stacking two rows per correspondence gives a linear system in the unknowns (m1, m2, m3, m4, t1, t2):
[ ...                 ] [m1]   [ ...  ]
[ x_i y_i 0   0   1 0 ] [m2]   [ x'_i ]
[ 0   0   x_i y_i 0 1 ] [m3] = [ y'_i ]
[ ...                 ] [m4]   [ ...  ]
                        [t1]
                        [t2]
Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras. 56
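A sketch of this least-squares fit, where each correspondence contributes two rows of a linear system in the six unknowns (m1, m2, m3, m4, t1, t2):

```python
import numpy as np

def fit_affine(pts, pts_prime):
    """Least-squares fit of x' = M x + t from point correspondences."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        rows.append([x, y, 0, 0, 1, 0]); rhs.append(xp)   # x' equation
        rows.append([0, 0, x, y, 0, 1]); rhs.append(yp)   # y' equation
    p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    M, t = p[:4].reshape(2, 2), p[4:]
    return M, t

# Recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
M_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([5.0, -3.0])
pts = rng.uniform(0, 10, (6, 2))
pts_prime = pts @ M_true.T + t_true
M_est, t_est = fit_affine(pts, pts_prime)
```

Three non-collinear correspondences already determine the six parameters; extra correspondences simply over-determine the system, and the least-squares solution averages out noise.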

RANSAC verification 57

Video Google System 1. Collect all words within query region 2. Inverted file index to find relevant frames 3. Compare word counts 4. Spatial verification Sivic & Zisserman, ICCV 2003 Demo online at: http://www.robots.ox.ac.uk/~vgg/research/vgoogle/index.html Query region Retrieved frames Kristen Grauman 58

Example Applications Mobile tourist guide Self-localization Object/building recognition Photo/video augmentation [Quack, Leibe, Van Gool, CIVR 08] 59

Application: Large-Scale Retrieval Query Results from 5k Flickr images (demo available for 100k set) [Philbin CVPR 07] 60

Web Demo: Movie Poster Recognition 50,000 movie posters indexed Query-by-image from mobile phone available in Switzerland http://www.kooaba.com/en/products_engine.html# 61

62

Spatial Verification: two basic strategies RANSAC Typically sort by BoW similarity as initial filter Verify by checking support (inliers) for possible transformations e.g., success if find a transformation with > N inlier correspondences Generalized Hough Transform Let each matched feature cast a vote on location, scale, orientation of the model object Verify parameters with enough votes 63 Kristen Grauman

Adapted from Lana Lazebnik Voting: Generalized Hough Transform If we use scale, rotation, and translation invariant local features, then each feature match gives an alignment hypothesis (for scale, translation, and orientation of model in image). Model Novel image 64

Voting: Generalized Hough Transform A hypothesis generated by a single match may be unreliable, So let each match vote for a hypothesis in Hough space Model Novel image 65

Gen Hough Transform details (Lowe's system) Training phase: For each model feature, record 2D location, scale, and orientation of model (relative to normalized feature frame) Test phase: Let each match btwn a test SIFT feature and a model feature vote in a 4D Hough space Use broad bin sizes of 30 degrees for orientation, a factor of 2 for scale, and 0.25 times image size for location Vote for two closest bins in each dimension Find all bins with at least three votes and perform geometric verification Estimate least squares affine transformation Search for additional features that agree with the alignment David G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60 (2), pp. 91-110, 2004. 66 Slide credit: Lana Lazebnik
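The broad-bin voting step can be sketched as follows (the bin widths, the log-scale parameterization, and the dict accumulator are illustrative choices; in a real system the location bin width depends on image size, as in Lowe's 0.25x rule):

```python
import numpy as np
from collections import Counter

def hough_votes(matches, loc_bin=64.0, ori_bin_deg=30.0):
    """Cast Generalized-Hough votes in a 4D (x, y, scale, orientation)
    space, voting for the two closest bins in each dimension, so each
    match contributes 2^4 = 16 votes. Scale is binned in log2 units
    (one bin per factor of 2)."""
    acc = Counter()
    for x, y, log2_scale, ori_deg in matches:
        dims = [(x, loc_bin), (y, loc_bin), (log2_scale, 1.0),
                (ori_deg, ori_bin_deg)]
        bins = []
        for val, width in dims:
            lo = int(np.floor(val / width - 0.5))
            bins.append([lo, lo + 1])            # the two closest bins
        for bx in bins[0]:
            for by in bins[1]:
                for bs in bins[2]:
                    for bo in bins[3]:
                        acc[(bx, by, bs, bo)] += 1
    return acc

# Three matches agreeing on roughly the same pose share a peak bin,
# satisfying the "at least three votes" criterion for verification.
matches = [(100, 50, 0.1, 31), (104, 52, 0.0, 29), (98, 49, -0.1, 33)]
acc = hough_votes(matches)
peak, peak_votes = acc.most_common(1)[0]
```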

Example result [Lowe] Background subtraction for model boundaries. Objects recognized; recognition in spite of occlusion. 67

Recall: difficulties of voting Noise/clutter can lead to as many votes as true target Bin size for the accumulator array must be chosen carefully In practice, good idea to make broad bins and spread votes to nearby bins, since verification stage can prune bad vote peaks. 68

Gen Hough vs RANSAC
GHT
- Single correspondence -> vote for all consistent parameters
- Represents uncertainty in the model parameter space
- Linear complexity in number of correspondences and number of voting cells; beyond a 4D vote space, impractical
- Can handle high outlier ratio
RANSAC
- Minimal subset of correspondences to estimate model -> count inliers
- Represents uncertainty in image space
- Must search all data points to check for inliers each iteration
- Scales better to high-dimensional parameter spaces
69 Kristen Grauman

Questions? See you Tuesday! 70