streaming_over_network_with_opencv_et_zeromq
**No latency, low CPU consumption, very easy to implement in Python,**\\
**but no reception in VLC or Pure Data.**\\
Use **[[streamer_des_images_opencv_avec_v4l2-loopback|Streamer des images OpenCV avec v4l2-loopback]]**
=====ZeroMQ=====
**[[https://
  * https://
  * https://
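imageZMQ's default transport is ZeroMQ's REQ/REP pattern: the sender (REQ) transmits a message and then blocks until the receiver (REP) answers. A minimal pyzmq sketch of that pattern; the inproc endpoint and message contents are arbitrary placeholders:

```python
import threading
import zmq

ctx = zmq.Context.instance()

# REP socket: must reply to each message before it can receive the next one
rep = ctx.socket(zmq.REP)
rep.bind("inproc://demo")

def hub():
    name, payload = rep.recv_multipart()
    rep.send(b"OK")

t = threading.Thread(target=hub)
t.start()

# REQ socket: sends, then blocks until the reply arrives
req = ctx.socket(zmq.REQ)
req.connect("inproc://demo")
req.send_multipart([b"cam", b"fake image bytes"])
reply = req.recv()
print(reply.decode())  # OK
t.join()
```

This lockstep is what gives imageZMQ its flow control: the sender cannot outrun the receiver, because each frame must be acknowledged before the next one goes out.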
=====Resources=====
  * **[[https://
  * **[[https://
  * https://
  * [[https://
=====Implementation in Pure Data=====
  * https://
====Installation====
  sudo apt install multimedia-puredata libzmq3-dev
====Compiling the zmq patch====
  git clone git@github.com:
  cd pd-zmq
  make
====Using the provided scripts====
It sends and receives ints/strings, but not images.\\
Extract from https://
<code>
LATER
* proper architecture workflows
** multiconnects/
* complex objects
** [zmf_router] -- [broker] as abstraction?
** [zmf_dealer] -/
** [zmf_pair]
* implement streams to send audio blocks
** binary modes
* send/
** binary (for audio/video frames)
** string (for communication w external programs)
</code>
The first image's packet is received, but then nothing happens ...
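A plausible cause, assuming the sender uses ZeroMQ's strict REQ/REP lockstep (imageZMQ's default): once a REQ socket has sent a message it must receive a reply before it may send again, so a peer that never replies stalls the stream after the first packet. A pyzmq sketch of this behaviour; the endpoint name is arbitrary:

```python
import zmq

ctx = zmq.Context.instance()

# A REP peer that never answers (stands in for a patch that sends no reply)
silent = ctx.socket(zmq.REP)
silent.bind("inproc://no-reply")

req = ctx.socket(zmq.REQ)
req.connect("inproc://no-reply")
req.setsockopt(zmq.RCVTIMEO, 500)  # stop waiting for a reply after 500 ms

req.send(b"first frame")           # accepted
try:
    req.recv()                     # blocks: the peer never replies
    got_reply = True
except zmq.Again:
    got_reply = False              # timed out without a reply

try:
    req.send(b"second frame")      # refused: REQ must get a reply first
    second_send_ok = True
except zmq.ZMQError:
    second_send_ok = False         # EFSM: wrong state for another send

print(got_reply, second_send_ok)   # False False
```

If this is the problem, the fix is on the receiving side: the patch has to send some reply (imageZMQ's hub replies b'OK') after every frame.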
=====Installing the Python module=====
In a Python (3.9) virtual environment:
<code bash>
python3 -m pip install imagezmq
</code>
====Running a script====
  cd /
  ./
====Sender with Python, receive in Pd====
Well, I'm hopeless at Pd!
=====Examples=====
  * Examples inspired by: https://
====Camera====
The principle is simple: the sender sends its name along with an image, an OpenCV image being a numpy array.\\
This array can contain whatever you want.\\
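As an illustration of the idea (this is not imageZMQ's actual wire format), a named numpy array can be flattened to plain bytes plus the metadata needed to rebuild it on the other side:

```python
import json
import numpy as np

def pack(name, img):
    # An image travels as raw bytes plus the metadata needed to rebuild it
    header = json.dumps({"name": name, "shape": img.shape, "dtype": str(img.dtype)})
    return header.encode(), img.tobytes()

def unpack(header_bytes, payload):
    meta = json.loads(header_bytes.decode())
    img = np.frombuffer(payload, dtype=meta["dtype"]).reshape(meta["shape"])
    return meta["name"], img

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
header, payload = pack("cam", img)
name, restored = unpack(header, payload)
print(name, restored.shape, np.array_equal(img, restored))  # cam (480, 640, 3) True
```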
The examples often use imutils, a Python layer on top of OpenCV that has a few bugs. It is easy to do without it: just read the OpenCV documentation, for example to resize images, convert them to JPEG, etc.
<file python sender_cam.py>
import time
import cv2
import imagezmq

# Address of the receiver (hub); adapt the IP and port to your network
sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')

my_name = "cam"
cap = cv2.VideoCapture(2)
time.sleep(2.0)

while 1:
    # the image can come from anywhere;
    # here it comes from a camera
    ret, image = cap.read()
    if ret:
        cv2.imshow("Sender", image)
        sender.send_image(my_name, image)
        print("image sent")
    if cv2.waitKey(10) == 27:
        break
</file>

<file python receiver.py>
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()

while 1:
    your_name, image = image_hub.recv_image()
    print(your_name, image.shape)
    cv2.imshow(your_name, image)
    image_hub.send_reply(b'OK')
    if cv2.waitKey(10) == 27:
        break
</file>
====Depth from an OAK-D Lite====
<code bash>
cd /
source mon_env/bin/activate
python3 -m pip install depthai numpy
</code>
<file python sender_oak_depth.py>
import time
import imagezmq
import cv2
import depthai as dai
import numpy as np

# Address of the receiver (hub); adapt the IP and port to your network
sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')
time.sleep(2.0)

pipeline = dai.Pipeline()

# Define a source: two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create a node that will produce the depth map
# (using the disparity output, as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # For depth filtering
depth.setMedianFilter(median)
# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)
left.out.link(depth.left)
right.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    device.startPipeline()
    # Output queue will be used to get the disparity frames
    # from the output defined above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

    while True:
        inDepth = q.get()
        frame = inDepth.getFrame()
        frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Convert the depth frame to a numpy array to render it in OpenCV
        depth_gray_image = np.asanyarray(frame)
        # Resize the depth image to 640x480
        resized = cv2.resize(depth_gray_image, (640, 480))
        sender.send_image("oak_depth", resized)
        cv2.imshow("depth", resized)
        if cv2.waitKey(1) == 27:
            break
</file>
The receiver is the same as above.
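For reference, a disparity value can be converted to metric depth with depth = focal_length_px × baseline / disparity. The focal length and baseline below are illustrative assumptions, not actual OAK-D Lite calibration values (read those from the device):

```python
import numpy as np

# Assumed values, for illustration only
focal_px = 450.0    # focal length in pixels
baseline_m = 0.075  # distance between the two mono cameras, in meters

# Disparity in pixels; 0 means "no match", i.e. unknown depth
disparity = np.array([[10.0, 45.0], [90.0, 0.0]])

with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)

print(depth_m[0, 0])  # 3.375  (450 * 0.075 / 10)
```

Larger disparity means a closer object, which is why the normalized disparity image above already reads as an inverse depth map.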
{{tag>zmq opencv pd pure-data pure_data python sb}}
streaming_over_network_with_opencv_et_zeromq.txt · Last modified: 2022/02/25 13:15 by serge