streaming_over_network_with_opencv_et_zeromq
Revisions compared: 2022/02/19 09:22 (serge) → 2022/02/25 13:15, current version (serge).
**No latency, low CPU usage, and super easy to implement in Python,**\\
**but no reception in VLC or Pure Data.**\\
Use **[[streamer_des_images_opencv_avec_v4l2-loopback|Stream OpenCV images with v4l2-loopback]]** instead.
=====ZeroMQ=====
**[[https://
  * [[https://
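imagezmq (used below) is built on ZeroMQ's request/reply pattern. As a quick illustration of that pattern, here is a minimal pyzmq sketch; the `inproc://demo` endpoint and the message contents are only for the demo (a real setup would use a `tcp://` address):

```python
import threading
import zmq

def echo_server(ctx, ready):
    # REP socket: wait for one request, answer it, then exit
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://demo")
    ready.set()                      # signal that bind happened (inproc needs bind before connect)
    msg = rep.recv()
    rep.send(b"ack:" + msg)
    rep.close()

def demo_req_rep():
    ctx = zmq.Context.instance()     # inproc endpoints must share one context
    ready = threading.Event()
    t = threading.Thread(target=echo_server, args=(ctx, ready))
    t.start()
    ready.wait()
    req = ctx.socket(zmq.REQ)        # REQ socket: strict send/recv alternation
    req.connect("inproc://demo")
    req.send(b"frame-001")
    reply = req.recv()
    req.close()
    t.join()
    return reply

if __name__ == "__main__":
    print(demo_req_rep())
```

This blocking send/recv round trip is what imagezmq performs for every frame, which is also why a slow receiver stalls the sender.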
=====Implementation in Pure Data=====
  * https://

====Installation====
  sudo apt install multimedia-puredata libzmq3-dev

====Compiling the zmq patch====
  git clone git@github.com:
  cd pd-zmq
  make

====Using the supplied scripts====
They send and receive ints/strings, but no ...
Excerpt from https://
<code>
LATER
* proper architecture workflows
** multiconnects/
* complex objects
** [zmf_router] -- [broker] as abstraction?
** [zmf_dealer] -/
** [zmf_pair]
* implement streams to send audio blocks
** binary modes
* send/
** binary (for audio/video frames)
** string (for communication w external programs)
</code>
The first image's packet is received, but then nothing more happens ...
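Since pd-zmq currently only passes ints and strings (binary modes are still on its TODO list above), one workaround to experiment with is to encode the binary frame data as a base64 string before sending. This is purely a sketch and untested with Pd; whether Pd can usefully decode it on the other side is another question:

```python
import base64

def frame_to_message(frame_bytes: bytes) -> str:
    # Binary -> ASCII string, so it can travel through a strings-only transport
    return base64.b64encode(frame_bytes).decode("ascii")

def message_to_frame(message: str) -> bytes:
    # Inverse transform on the receiving side
    return base64.b64decode(message)

if __name__ == "__main__":
    raw = bytes(range(16))
    assert message_to_frame(frame_to_message(raw)) == raw
```

Note that base64 inflates the payload by about 33%, so this costs bandwidth compared with a true binary mode.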
=====Installing the Python module=====
In a Python (3.9) virtual environment
</code>
====Running a script====

  cd /
  ./

====Sender in Python, receiver in Pd====

Well, I'm no good at Pd!
=====Examples=====
  * Examples inspired by: https://
</code>
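imagezmq hides the serialization details entirely. As a rough sketch of the kind of framing involved (my own format for illustration, not imagezmq's actual wire protocol), a NumPy frame can be shipped as a JSON header describing the array plus the raw bytes:

```python
import json
import numpy as np

def pack_frame(name, frame):
    # Two message parts: a JSON header describing the array, then the raw bytes
    header = json.dumps({"name": name,
                         "dtype": str(frame.dtype),
                         "shape": frame.shape}).encode("utf-8")
    return header, frame.tobytes()

def unpack_frame(header, payload):
    # Rebuild the array from the raw bytes using the header's metadata
    meta = json.loads(header.decode("utf-8"))
    frame = np.frombuffer(payload, dtype=meta["dtype"]).reshape(meta["shape"])
    return meta["name"], frame

if __name__ == "__main__":
    original = np.arange(12, dtype=np.uint8).reshape(3, 4)
    name, restored = unpack_frame(*pack_frame("demo", original))
    assert name == "demo" and np.array_equal(original, restored)
```

In a real transport, the two parts would typically travel as one ZeroMQ multipart message (`socket.send_multipart([...])` on one side, `recv_multipart()` on the other).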
====Depth from an OAK-D Lite====
<code bash>
cd /
source mon_env/
python3 -m pip install depthai numpy
</code>
<file python sender_oak_depth.py>
import time
import imagezmq
import cv2
import depthai as dai
import numpy as np

# Address of the receiver (adjust to your network)
sender = imagezmq.ImageSender(connect_to='tcp://localhost:5555')
time.sleep(2.0)

pipeline = dai.Pipeline()
# Define a source - two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
# Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)

# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # For depth filtering
depth.setMedianFilter(median)

# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)

left.out.link(depth.left)
right.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    device.startPipeline()
    # Output queue will be used to get the disparity frames from the outputs defined above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

    while True:
        inDepth = q.get()
        frame = inDepth.getFrame()
        # Normalize disparity to 0-255 for display
        frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
        # Convert depth_frame to numpy array to render image in opencv
        depth_gray_image = np.asanyarray(frame)
        # Resize Depth image to 640x480
        resized = cv2.resize(depth_gray_image, (640, 480))
        sender.send_image("oak_depth", resized)
        cv2.imshow("disparity", resized)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
</file>
The receiver is the same as above.

====Depth from a RealSense D455====
For the ...
<code bash>

</code>
pyrealsense2

<file python sender_rs_depth.py>

</file>
{{tag>zmq opencv pd pure-data pure_data python sb}}
streaming_over_network_with_opencv_et_zeromq.1645262538.txt.gz · Last modified: 2022/02/19 09:22 by serge