Deep Learning by Andrew Ng, Course 4 Week 4 Programming Assignment 1: Face Recognition
Face Recognition for the Happy House.
Face recognition problems commonly fall into two categories: face verification (1:1) and face recognition (1:K):
- Face Verification - “is this the claimed person?”. For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
- Face Recognition - “who is this person?”. For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
Principle
Triplet loss
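The triplet loss referenced as formula (3) in the assignment takes an anchor image A, a positive image P of the same person, and a negative image N of a different person, and pushes the anchor-positive distance to be at least a margin α smaller than the anchor-negative distance:

$$\mathcal{J} = \sum_{i=1}^{m} \max\left( \left\| f(A^{(i)}) - f(P^{(i)}) \right\|_2^2 - \left\| f(A^{(i)}) - f(N^{(i)}) \right\|_2^2 + \alpha,\ 0 \right)$$

where f(·) is the 128-dimensional encoding produced by the FaceNet model and m is the number of triplets.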
Turning face recognition into a binary classification problem
"In such a network, one branch takes the reference photo and the other takes the test photo; the two outputs f(x), i.e. the encodings, are then fed into a logistic regression unit that makes the binary same-or-not decision. In practice, the encodings of the reference photos can be precomputed to save computation. For example, with 1000 employees, you only need to compute the encoding of the test employee's photo and then compare it against the 1000 precomputed encodings one by one." A minimal sketch of this variant is shown below.
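As a minimal sketch of this binary-classification idea (not part of the assignment code; the names enc_ref, enc_test and pair_classifier, and the absolute-difference feature, are illustrative choices), the two 128-dimensional encodings can be combined element-wise and passed to a single sigmoid unit:

# Sketch only: a logistic-regression head on top of two precomputed 128-d encodings.
from keras.layers import Input, Lambda, Dense
from keras.models import Model
from keras import backend as K

enc_ref = Input(shape=(128,))    # encoding of the reference (enrolled) photo, precomputed offline
enc_test = Input(shape=(128,))   # encoding of the test photo
# Element-wise absolute difference |f(x1) - f(x2)| as the feature for the classifier
diff = Lambda(lambda t: K.abs(t[0] - t[1]))([enc_ref, enc_test])
same_person = Dense(1, activation='sigmoid')(diff)   # binary decision: same person or not
pair_classifier = Model(inputs=[enc_ref, enc_test], outputs=same_person)
pair_classifier.compile(optimizer='adam', loss='binary_crossentropy')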
Python code
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks import *
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import datetime
np.set_printoptions(threshold=np.nan)
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha=0.2):
    """
    Implementation of the triplet loss as defined by formula (3)

    Arguments:
    y_true -- true labels, required when you define a loss in Keras; not needed in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)

    Returns:
    loss -- real number, value of the loss
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Compute the (encoding) distance between the anchor and the positive
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: Subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    ### END CODE HERE ###

    return loss
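# Optional sanity check for triplet_loss with random encodings (a sketch in the TF 1.x
# style the assignment uses; the helper name and the shapes/values are arbitrary and
# not the graded test case). Call it manually if you want to inspect the loss value.
def _check_triplet_loss():
    with tf.Session() as test:
        tf.set_random_seed(1)
        y_true = (None, None, None)
        y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed=1),
                  tf.random_normal([3, 128], mean=1, stddev=1, seed=1),
                  tf.random_normal([3, 128], mean=3, stddev=4, seed=1))
        print("loss = " + str(triplet_loss(y_true, y_pred).eval()))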
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
    """
    Verifies whether the person in the "image_path" image is "identity".

    Arguments:
    image_path -- path to an image
    identity -- string, name of the person whose identity you'd like to verify. Has to be a resident of the Happy House.
    database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras

    Returns:
    dist -- distance between the image_path encoding and the encoding of "identity" in the database.
    door_open -- True if the door should open, False otherwise.
    """
    ### START CODE HERE ###
    # Step 1: Compute the encoding for the image. Use img_to_encoding(); see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])
    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
    ### END CODE HERE ###

    return dist, door_open
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
    """
    Implements face recognition for the Happy House by finding who the person in the image_path image is.

    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras

    Returns:
    min_dist -- the minimum distance between the image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """
    ### START CODE HERE ###
    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding(); see example above. ## (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    ## Step 2: Find the closest encoding ##
    # Initialize "min_dist" to a large value, say 100 (≈ 1 line)
    min_dist = 100
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding - db_enc)
        # If this distance is less than min_dist, set min_dist to dist and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name
    ### END CODE HERE ###

    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))

    return min_dist, identity
if __name__ == '__main__':
    starttime = datetime.datetime.now()
    ################################################
    FRmodel = faceRecoModel(input_shape=(3, 96, 96))
    print("Total Params:", FRmodel.count_params())
    FRmodel.compile(optimizer='adam', loss=triplet_loss, metrics=['accuracy'])
    load_weights_from_FaceNet(FRmodel)

    database = {}
    database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
    database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
    database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
    database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
    database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
    database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
    database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
    database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
    database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
    database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
    database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
    database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)

    dist1, open1 = verify("images/camera_0.jpg", "younes", database, FRmodel)
    dist2, open2 = verify("images/camera_2.jpg", "kian", database, FRmodel)
    print(dist1, dist2)
    dist3, identity3 = who_is_it("images/camera_0.jpg", database, FRmodel)
    print(dist3, identity3)
    #################################################
    endtime = datetime.datetime.now()
    print("the running time :" + str((endtime - starttime).seconds))
    print("END!")
What you should remember:
- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
- The same encoding can be used for verification and recognition. Measuring distances between two images’ encodings allows you to determine whether they are pictures of the same person.
Model visualization
Running the model visualization utility built into the Keras framework on FRmodel produces an architecture diagram; the parallel branches in the middle are the GoogLeNet inception blocks (inception_block).
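A minimal sketch of that visualization call (assuming the pydot and graphviz dependencies are installed; the output filename is a hypothetical choice):

from keras.utils import plot_model
plot_model(FRmodel, to_file='FRmodel.png', show_shapes=True)   # writes the architecture diagram to disk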