Exercise 00 - HomographyΒΆ

In this exercise you will program a simple script to use the getPerspective and warpPerspective methods of the opencv API. This is a simple building block, for example, to implement a sign detector and reader that could be used by a robot to locate itself. The solution should be use also the OpenCV's event handling features.

Note: Each time in a jupyter notebook you use imshow, at the end of the cell it is safer to use destroyAllWindows to avoid crashes.

In [1]:
import cv2
import numpy as np
In [2]:
# load image from file and show it in a opencv window
path = 'data/imgs4classes/campus_sign.jpg'

# Your code here 
# ....


cv2.waitKey(0) 
cv2.destroyAllWindows()
/home/thomas/Projects/computer vision
data/imgs4classes/campus_sign.jpg
In [7]:
# select four points on the image by mouse clicking
# Create a callback function for left mouse clicking

ref_points = list()
def on_click_event(event, x, y, flags, params):
    # Your code here 
    # ...
        
# register the callback
cv2.namedWindow('Image')
cv2.setMouseCallback('Image', on_click_event)

# loop until 'c' key is pressed, or four points have been collected
while True:
    # Your code here
    # ...
    
print(ref_points)
cv2.destroyAllWindows()
[(501, 89), (750, 17), (795, 633), (511, 608)]
In [5]:
# decide or select four points as the destination image
    # Your code here, e.g.:
    # trf_points = [(0,0), (400,0), (400,600), (0,600) ]
    # or you can find an interactive way to handle them...
    
# get the perspective transform
H = # your code here
print(H)
[[ 3.36899621e+00 -5.22324994e-02 -1.68648294e+03]
 [ 6.84124658e-01  2.27129386e+00 -5.50118320e+02]
 [ 1.46649686e-03  3.53411838e-04  1.00000000e+00]]
In [9]:
# Warp perspective and show final result
image = cv2.imread(path)
new_image = # your code here

cv2.waitKey(0)
cv2.destroyAllWindows()