Streaming over network with OpenCV and ZeroMQ
Z = Zero
M = Messaging
Q = Queuing
No latency, low CPU usage, and very easy to implement in Python.
ZeroMQ
zeromq.org (also written ØMQ, 0MQ or ZMQ) is a high-performance asynchronous messaging library.
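imagezmq, used everywhere below, builds on ZeroMQ's REQ/REP pattern: the sender is a REQ client that blocks until the receiver's REP socket acknowledges each frame. A minimal pyzmq sketch of that underlying pattern (a two-terminal toy, not the imagezmq API):
<code python>
"""Minimal pyzmq REQ/REP sketch, the pattern imagezmq uses underneath.
Run with "rep" in one terminal, then with "req" in another."""
import sys
import zmq

context = zmq.Context()
if len(sys.argv) > 1 and sys.argv[1] == "rep":
    # Receiver side: bind, then reply to every request
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:5555")
    while True:
        print(socket.recv())
        socket.send(b"OK")
else:
    # Sender side: connect, send, block until the reply arrives
    socket = context.socket(zmq.REQ)
    socket.connect("tcp://127.0.0.1:5555")
    socket.send(b"hello")
    print(socket.recv())
</code>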
Resources
- Live video streaming over network with OpenCV and ImageZMQ (FFMPEG or GStreamer are options, but they are a pain to implement)
- How to send OpenCV video footage over ZeroMQ sockets? on stackoverflow.com
Installation
In a Python (3.9) virtual environment
- requirements.txt
<code>
opencv-python
imagezmq
</code>
<code bash>
sudo apt install python3-pip
python3 -m pip install --upgrade pip
sudo apt install python3-venv
cd /path/to/your/project
python3 -m venv mon_env
source mon_env/bin/activate
python3 -m pip install -r requirements.txt
</code>
or
<code bash>
python3 -m pip install opencv-python imagezmq
</code>
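Once installed, a quick import check can confirm the environment is set up (just a convenience, assuming the virtual environment is active):
<code bash>
python3 -c "import cv2, imagezmq; print(cv2.__version__)"
</code>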
Examples
- Examples inspired by: https://github.com/jeffbass/imagezmq/tree/master/examples
Camera
The principle is simple: the sender sends "image", an image as a np.array.
This array can be filled with whatever you want.
The examples often use imutils, a Python layer on top of OpenCV that has a few bugs. It is easy to do without it: just read the OpenCV docs, for example to resize images, convert them to JPEG, etc., as sketched below.
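A minimal sketch of both operations with plain OpenCV; the file name and quality value are placeholders:
<code python>
import cv2

# Load any image (test.jpg is a placeholder file name)
image = cv2.imread("test.jpg")
assert image is not None, "image file not found"

# Resize with plain OpenCV (this is what imutils.resize wraps)
small = cv2.resize(image, (640, 480), interpolation=cv2.INTER_AREA)

# Encode to JPEG in memory: returns a success flag and a byte buffer
ok, jpg_buffer = cv2.imencode(".jpg", small,
                              [int(cv2.IMWRITE_JPEG_QUALITY), 90])
</code>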
- sender_cam.py
<code python>
import time
import imagezmq
import cv2

sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')
my_name = "moi"
# Camera index: adjust to match your device (0 is usually the first camera)
cap = cv2.VideoCapture(2)
time.sleep(2.0)

while 1:
    # The image can come from anything!
    # Here it comes from a camera
    ret, image = cap.read()
    if ret:
        cv2.imshow("moi", image)
        sender.send_image(my_name, image)
        print("send:", image.shape)
    if cv2.waitKey(10) == 27:
        break
</code>
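The scripts here run on a single machine via 127.0.0.1. To stream between two machines only the sender changes: the ImageHub binds and listens on port 5555, while the ImageSender connects to it, so point connect_to at the receiver's address (the IP below is hypothetical):
<code python>
# Sender on another machine: connect to the receiver's IP (hypothetical address)
sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.20:5555')
# On the receiving machine, ImageHub() binds to tcp://*:5555 by default.
</code>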
- receiver_cam.py
<code python>
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()

while 1:
    your_name, image = image_hub.recv_image()
    print(your_name, image.shape)
    cv2.imshow(your_name, image)
    image_hub.send_reply(b'OK')
    if cv2.waitKey(10) == 27:
        break
</code>
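send_image() ships the raw array uncompressed, which can be heavy on a slow network. imagezmq also provides send_jpg()/recv_jpg() for JPEG transport; a sketch of the compressed receiver, with the matching sender change in comments:
<code python>
import cv2
import numpy as np
import imagezmq

# Sender side would replace send_image() with:
#   ok, jpg_buffer = cv2.imencode(".jpg", image,
#                                 [int(cv2.IMWRITE_JPEG_QUALITY), 90])
#   sender.send_jpg(my_name, jpg_buffer)

image_hub = imagezmq.ImageHub()
while True:
    name, jpg_buffer = image_hub.recv_jpg()
    image_hub.send_reply(b'OK')
    # Decode the JPEG bytes back to a np.array
    image = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'),
                         cv2.IMREAD_COLOR)
    cv2.imshow(name, image)
    if cv2.waitKey(10) == 27:
        break
</code>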
Depth from an OAK-D Lite
<code bash>
cd /path/to/your/project
source mon_env/bin/activate
python3 -m pip install depthai numpy
</code>
- sender_oak_depth.py
<code python>
import time
import imagezmq
import cv2
import depthai as dai
import numpy as np

sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')
time.sleep(2.0)

pipeline = dai.Pipeline()

# Define a source - two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create a node that will produce the depth map
# (using disparity output as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # For depth filtering
depth.setMedianFilter(median)
# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)

left.out.link(depth.left)
right.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    device.startPipeline()
    # Output queue will be used to get the disparity frames
    # from the outputs defined above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)
    while True:
        inDepth = q.get()  # blocking call, will wait until new data has arrived
        frame = inDepth.getFrame()
        frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
        # Convert the depth frame to a numpy array to render the image in OpenCV
        depth_gray_image = np.asanyarray(frame)
        # Resize the depth image to 640x480
        resized = cv2.resize(depth_gray_image, (640, 480),
                             interpolation=cv2.INTER_AREA)
        sender.send_image("moi", resized)
        cv2.imshow("disparity", resized)
        if cv2.waitKey(1) == 27:
            break
</code>
The receiver is the same as above.
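The disparity frames arrive as 8-bit grayscale. If you prefer the usual colored depth rendering on the receiving side, a colormap can be applied before display; a minimal sketch:
<code python>
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:
    name, frame = image_hub.recv_image()
    image_hub.send_reply(b'OK')
    # Map the 8-bit disparity to a color image for easier viewing
    colored = cv2.applyColorMap(frame, cv2.COLORMAP_JET)
    cv2.imshow(name, colored)
    if cv2.waitKey(10) == 27:
        break
</code>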
Depth from a RealSense D455
For the installation, see https://github.com/sergeLabo/grande_echelle#installation
<code bash>
cd /path/to/your/project
source mon_env/bin/activate
python3 -m pip install pyrealsense2 numpy
</code>
To be tested, I don't have the camera!
- sender_rs_depth.py
<code python>
import os
import time
import imagezmq
import cv2
import numpy as np
import pyrealsense2 as rs


class MyRealSense:
    """Initializes the camera and gives access to the images"""

    def __init__(self):
        self.width = 1280
        self.height = 720
        self.pose_loop = 1
        self.pipeline = rs.pipeline()
        config = rs.config()
        pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)
        try:
            pipeline_profile = config.resolve(pipeline_wrapper)
        except Exception:
            print('\n\nNo RealSense sensor connected\n\n')
            os._exit(0)
        device = pipeline_profile.get_device()
        config.enable_stream(rs.stream.color,
                             width=self.width,
                             height=self.height,
                             format=rs.format.bgr8,
                             framerate=30)
        config.enable_stream(rs.stream.depth,
                             width=self.width,
                             height=self.height,
                             format=rs.format.z16,
                             framerate=30)
        self.pipeline.start(config)
        self.align = rs.align(rs.stream.color)
        unaligned_frames = self.pipeline.wait_for_frames()
        frames = self.align.process(unaligned_frames)
        depth = frames.get_depth_frame()
        self.depth_intrinsic = depth.profile.as_video_stream_profile().intrinsics
        # Print the image size
        color_frame = frames.get_color_frame()
        img = np.asanyarray(color_frame.get_data())
        print(f"Image size: {img.shape[1]}x{img.shape[0]}")
        self.sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')
        time.sleep(2.0)

    def run(self):
        """Infinite loop, quit with Esc in the OpenCV window"""
        while self.pose_loop:
            frames = self.pipeline.wait_for_frames(timeout_ms=80)
            # Align the depth frame to the color frame
            aligned_frames = self.align.process(frames)
            depth_frame = aligned_frames.get_depth_frame()
            if not depth_frame:
                continue
            # The z16 depth stream is already 16-bit data,
            # so it can be sent directly as a uint16 array
            depth_gray_16bit = np.asanyarray(depth_frame.get_data()).astype(np.uint16)
            self.sender.send_image("moi", depth_gray_16bit)
            cv2.imshow("disparity", depth_gray_16bit)
            if cv2.waitKey(1) == 27:
                break


if __name__ == '__main__':
    mrs = MyRealSense()
    mrs.run()
</code>
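imagezmq preserves the numpy dtype, so the receiver gets the uint16 depth back unchanged. Raw z16 values look nearly black in an 8-bit window, so scale them for display; a sketch where the alpha value is a guess for millimeter-scale depth:
<code python>
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:
    name, depth16 = image_hub.recv_image()
    image_hub.send_reply(b'OK')
    print(name, depth16.dtype, depth16.shape)  # expects uint16
    # Scale 16-bit depth down to 8-bit for display (alpha is a guess)
    view = cv2.convertScaleAbs(depth16, alpha=0.03)
    cv2.imshow(name, view)
    if cv2.waitKey(10) == 27:
        break
</code>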