Cozmo AI Robot SDK Usage Notes (3) - Vision
I once gave a public talk on robot perception (the vision part); the full transcript and video recording are available at the CSDN link below:
Robotic Perception - Vision Section:
https://blog.csdn.net/ZhangRelay/article/details/81352622
Cozmo's vision supports many capabilities, including recognizing and tracking pets, cubes, and faces. It is great fun.
These are the examples from the third part, vision, of the SDK tutorials.
1. light_when_face
Wait for a face to be detected in the camera image, then light up the LEDs on Cozmo's backpack.
#!/usr/bin/env python3
# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Wait for Cozmo to see a face, and then turn on his backpack light.
This is a script to show off faces, and how they are easy to use.
It waits for a face, and then will light up his backpack when that face is visible.
'''
import asyncio
import time
import cozmo
def light_when_face(robot: cozmo.robot.Robot):
    '''The core of the light_when_face program'''

    # Move lift down and tilt the head up
    robot.move_lift(-3)
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()

    face = None

    print("Press CTRL-C to quit")
    while True:
        if face and face.is_visible:
            robot.set_all_backpack_lights(cozmo.lights.blue_light)
        else:
            robot.set_backpack_lights_off()

            # Wait until we can see another face
            try:
                face = robot.world.wait_for_observed_face(timeout=30)
            except asyncio.TimeoutError:
                print("Didn't find a face.")
                return

        time.sleep(.1)

cozmo.run_program(light_when_face, use_viewer=True, force_viewer_on_top=True)
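Note that `wait_for_observed_face` raises `asyncio.TimeoutError` when no face appears within the timeout, which is why the loop above catches that exception. The same pattern can be tried standalone with `asyncio.wait_for`; the coroutine `never_observes` below is purely illustrative, standing in for an observation that never arrives:

```python
import asyncio

async def never_observes():
    # Stands in for waiting on an observation that never arrives
    await asyncio.sleep(10)

async def main():
    try:
        # Give up after 0.05 s, analogous to wait_for_observed_face(timeout=30)
        await asyncio.wait_for(never_observes(), timeout=0.05)
    except asyncio.TimeoutError:
        print("Didn't find a face.")

asyncio.run(main())  # prints "Didn't find a face."
```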
2. face_follower
Detect a face and follow it, adjusting the head angle and tread motion to keep the face centered in the captured image (along both the x and y axes).
#!/usr/bin/env python3
# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Make Cozmo turn toward a face.
This script shows off the turn_towards_face action. It will wait for a face
and then constantly turn towards it to keep it in frame.
'''
import asyncio
import time
import cozmo
def follow_faces(robot: cozmo.robot.Robot):
    '''The core of the follow_faces program'''

    # Move lift down and tilt the head up
    robot.move_lift(-3)
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()

    face_to_follow = None

    print("Press CTRL-C to quit")
    while True:
        turn_action = None
        if face_to_follow:
            # start turning towards the face
            turn_action = robot.turn_towards_face(face_to_follow)

        if not (face_to_follow and face_to_follow.is_visible):
            # find a visible face, timeout if nothing found after a short while
            try:
                face_to_follow = robot.world.wait_for_observed_face(timeout=30)
            except asyncio.TimeoutError:
                print("Didn't find a face - exiting!")
                return

        if turn_action:
            # Complete the turn action if one was in progress
            turn_action.wait_for_completed()

        time.sleep(.1)

cozmo.run_program(follow_faces, use_viewer=True, force_viewer_on_top=True)
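`turn_towards_face` computes the required turn internally. Conceptually, keeping a face centered means driving the pixel offset between the face and the image center toward zero. The sketch below illustrates that idea as a simple proportional controller; the gains are made-up tuning constants and `centering_commands` is a hypothetical helper, not part of the SDK (only the 320x240 camera resolution is a real Cozmo value):

```python
# Illustrative only: derive head/body corrections from a face's pixel position.
IMG_W, IMG_H = 320, 240   # Cozmo's camera resolution
KP_TURN = 0.1             # degrees of body turn per pixel of error (assumed)
KP_HEAD = 0.1             # degrees of head tilt per pixel of error (assumed)

def centering_commands(face_x, face_y):
    """Return (body_turn_deg, head_tilt_deg) that would move the face
    toward the image center."""
    err_x = IMG_W / 2 - face_x   # positive when the face is left of center
    err_y = IMG_H / 2 - face_y   # positive when the face is above center
    # Turn left (positive) when the face is left of center;
    # tilt the head up (positive) when the face is above center.
    return (KP_TURN * err_x, KP_HEAD * err_y)

# A face at the exact center needs no correction:
print(centering_commands(160, 120))  # (0.0, 0.0)
```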
3. annotate
This example uses the tkviewer to display the annotated camera feed on screen, and adds a couple of custom annotations of its own using two different methods.
#!/usr/bin/env python3
# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Display a GUI window showing an annotated camera view.
Note:
This example requires Python to have Tkinter installed to display the GUI.
It also requires the Pillow and numpy python packages to be pip installed.
The :class:`cozmo.world.World` object collects raw images from Cozmo's camera
and makes them available as a property (:attr:`~cozmo.world.World.latest_image`)
and by generating :class:`cozmo.world.EvtNewCameraImage` events as they come in.
Each image is an instance of :class:`cozmo.world.CameraImage` which provides
access both to the raw camera image, and to a scalable annotated image which
can show where Cozmo sees faces and objects, along with any other information
your program may wish to display.
This example uses the tkviewer to display the annotated camera on the screen
and adds a couple of custom annotations of its own using two different methods.
'''
import sys
import time
try:
    from PIL import ImageDraw, ImageFont
except ImportError:
    sys.exit('run `pip3 install --user Pillow numpy` to run this example')

import cozmo


# Define an annotator using the annotator decorator
@cozmo.annotate.annotator
def clock(image, scale, annotator=None, world=None, **kw):
    d = ImageDraw.Draw(image)
    bounds = (0, 0, image.width, image.height)
    text = cozmo.annotate.ImageText(time.strftime("%H:%M:%S"),
                                    position=cozmo.annotate.TOP_LEFT)
    text.render(d, bounds)

# Define another annotator as a subclass of Annotator
class Battery(cozmo.annotate.Annotator):
    def apply(self, image, scale):
        d = ImageDraw.Draw(image)
        bounds = (0, 0, image.width, image.height)
        batt = self.world.robot.battery_voltage
        text = cozmo.annotate.ImageText('BATT %.1fv' % batt, color='green')
        text.render(d, bounds)

def cozmo_program(robot: cozmo.robot.Robot):
    robot.world.image_annotator.add_static_text('text', 'Coz-Cam', position=cozmo.annotate.TOP_RIGHT)
    robot.world.image_annotator.add_annotator('clock', clock)
    robot.world.image_annotator.add_annotator('battery', Battery)

    time.sleep(2)
    print("Turning off all annotations for 2 seconds")
    robot.world.image_annotator.annotation_enabled = False
    time.sleep(2)

    print('Re-enabling all annotations')
    robot.world.image_annotator.annotation_enabled = True

    # Disable the face annotator after 10 seconds
    time.sleep(10)
    print("Disabling face annotations (light cubes still annotated)")
    robot.world.image_annotator.disable_annotator('faces')

    # Shutdown the program after 100 seconds
    time.sleep(100)

cozmo.run_program(cozmo_program, use_viewer=True, force_viewer_on_top=True)
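The clock annotator relies on `time.strftime` format directives, and `%M` (minutes) is easy to confuse with lowercase `%m` (month number). A quick standalone check against the Unix epoch shows the difference:

```python
import time

epoch = time.gmtime(0)  # 1970-01-01 00:00:00 UTC

print(time.strftime("%H:%M:%S", epoch))  # 00:00:00  (hours:minutes:seconds)
print(time.strftime("%H:%m:%S", epoch))  # 00:01:00  (%m is the month: January)
```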
4. exposure
This example demonstrates both auto exposure and manual exposure with Cozmo's camera. The current camera settings are overlaid on the viewer window on the PC.
#!/usr/bin/env python3
# Copyright (c) 2017 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Demonstrate the manual and auto exposure settings of Cozmo's camera.
This example demonstrates the use of auto exposure and manual exposure for
Cozmo's camera. The current camera settings are overlayed onto the camera
viewer window.
'''
import sys
import time
try:
    from PIL import ImageDraw, ImageFont
    import numpy as np
except ImportError:
    sys.exit('run `pip3 install --user Pillow numpy` to run this example')

import cozmo


# A global string value to display in the camera viewer window to make it more
# obvious what the example program is currently doing.
example_mode = ""


# An annotator for live-display of all of the camera info on top of the camera
# viewer window.
@cozmo.annotate.annotator
def camera_info(image, scale, annotator=None, world=None, **kw):
    d = ImageDraw.Draw(image)
    bounds = [3, 0, image.width, image.height]

    camera = world.robot.camera
    text_to_display = "Example Mode: " + example_mode + "\n\n"
    text_to_display += "Fixed Camera Settings (Calibrated for this Robot):\n\n"
    text_to_display += 'focal_length: %s\n' % camera.config.focal_length
    text_to_display += 'center: %s\n' % camera.config.center
    text_to_display += 'fov: <%.3f, %.3f> degrees\n' % (camera.config.fov_x.degrees,
                                                        camera.config.fov_y.degrees)
    text_to_display += "\n"
    text_to_display += "Valid exposure and gain ranges:\n\n"
    text_to_display += 'exposure: %s..%s\n' % (camera.config.min_exposure_time_ms,
                                               camera.config.max_exposure_time_ms)
    text_to_display += 'gain: %.3f..%.3f\n' % (camera.config.min_gain,
                                               camera.config.max_gain)
    text_to_display += "\n"
    text_to_display += "Current settings:\n\n"
    text_to_display += 'Auto Exposure Enabled: %s\n' % camera.is_auto_exposure_enabled
    text_to_display += 'Exposure: %s ms\n' % camera.exposure_ms
    text_to_display += 'Gain: %.3f\n' % camera.gain
    color_mode_str = "Color" if camera.color_image_enabled else "Grayscale"
    text_to_display += 'Color Mode: %s\n' % color_mode_str

    text = cozmo.annotate.ImageText(text_to_display,
                                    position=cozmo.annotate.TOP_LEFT,
                                    line_spacing=2,
                                    color="white",
                                    outline_color="black", full_outline=True)
    text.render(d, bounds)


def demo_camera_exposure(robot: cozmo.robot.Robot):
    global example_mode

    # Ensure camera is in auto exposure mode and demonstrate auto exposure for 5 seconds
    camera = robot.camera
    camera.enable_auto_exposure()
    example_mode = "Auto Exposure"
    time.sleep(5)

    # Demonstrate manual exposure, linearly increasing the exposure time, while
    # keeping the gain fixed at a medium value.
    example_mode = "Manual Exposure - Increasing Exposure, Fixed Gain"
    fixed_gain = (camera.config.min_gain + camera.config.max_gain) * 0.5
    for exposure in range(camera.config.min_exposure_time_ms, camera.config.max_exposure_time_ms+1, 1):
        camera.set_manual_exposure(exposure, fixed_gain)
        time.sleep(0.1)

    # Demonstrate manual exposure, linearly increasing the gain, while keeping
    # the exposure fixed at a relatively low value.
    example_mode = "Manual Exposure - Increasing Gain, Fixed Exposure"
    fixed_exposure_ms = 10
    for gain in np.arange(camera.config.min_gain, camera.config.max_gain, 0.05):
        camera.set_manual_exposure(fixed_exposure_ms, gain)
        time.sleep(0.1)

    # Switch back to auto exposure, demo for a final 5 seconds and then return
    camera.enable_auto_exposure()
    example_mode = "Mode: Auto Exposure"
    time.sleep(5)


def cozmo_program(robot: cozmo.robot.Robot):
    robot.world.image_annotator.add_annotator('camera_info', camera_info)

    # Demo with default grayscale camera images
    robot.camera.color_image_enabled = False
    demo_camera_exposure(robot)

    # Demo with color camera images
    robot.camera.color_image_enabled = True
    demo_camera_exposure(robot)


cozmo.robot.Robot.drive_off_charger_on_connect = False  # Cozmo can stay on his charger for this example
cozmo.run_program(cozmo_program, use_viewer=True, force_viewer_on_top=True)
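The two sweep loops in `demo_camera_exposure` are simple linear ramps: exposure climbs in 1 ms steps at a fixed midpoint gain, then gain climbs in 0.05 steps at a fixed exposure. With stand-in limits (the real values come from `camera.config` and are calibrated per robot, so the numbers below are only assumptions), the ramp arithmetic can be checked offline:

```python
import numpy as np

# Stand-in limits; on a real robot these come from camera.config.
min_exposure_ms, max_exposure_ms = 1, 66   # hypothetical values
min_gain, max_gain = 0.25, 3.98            # hypothetical values

# Exposure sweep: one 1 ms step per 0.1 s loop iteration
exposures = list(range(min_exposure_ms, max_exposure_ms + 1, 1))

# Gain held at the midpoint of its range during the exposure sweep
fixed_gain = (min_gain + max_gain) * 0.5

# Gain sweep in 0.05 increments (np.arange excludes the endpoint)
gains = np.arange(min_gain, max_gain, 0.05)

print(len(exposures), fixed_gain, len(gains))
```

With these stand-in limits the exposure ramp takes 66 steps (about 6.6 s at 0.1 s per step), so the demo's wall-clock time depends directly on the calibrated range.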
Fin
Related articles
- Cozmo AI Robot SDK Usage Notes (9) - Decision making: if_this_then_that
- Cozmo AI Robot SDK Usage Notes (1) - Basics: basics
- Cozmo AI Robot SDK Usage Notes (2) - Display: face
- Cozmo AI Robot SDK Usage Notes (8) - Applications: apps
- Cozmo AI Robot SDK Usage Notes (6) - Parallelism: Parallel_Action
- Cozmo AI Robot SDK Usage Notes (5) - Timing: async_sync
- Cozmo AI Robot SDK Usage Notes (4) - Tasks: cubes_and_objects
- Cozmo AI Robot SDK Usage Notes (7) - Supplementary notes
- Vector AI Robot SDK Usage Notes
- Anki Cozmo (Vector) AI Robot Toy: Selected Documentation
- Cozmo AI Robot SDK Usage Notes (X) - Summary | AI Basics (Primary and Secondary School Edition) Practice Platform