streaming_over_network_with_opencv_et_zeromq

====Depth from an OAK-D Lite====
<code bash>
cd /le/dossier/de/votre/projet
source mon_env/bin/activate
python3 -m pip install depthai numpy
</code>

<file python sender_oak_depth.py>
import time

import cv2
import depthai as dai
import imagezmq
import numpy as np

# Connect to the imagezmq receiver (same address as the senders above)
sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')
time.sleep(2.0)

pipeline = dai.Pipeline()

# Define the sources: the two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create the node that produces the depth map
# (the disparity output is used because it is easier to visualize)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)

# Median filter options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # for depth filtering
depth.setMedianFilter(median)

# Better handling of occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy at longer distances, fractional disparity with 32 levels:
depth.setSubpixel(False)

left.out.link(depth.left)
right.out.link(depth.right)

# Create the output stream
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    # The pipeline starts automatically when the device is created with it;
    # no explicit startPipeline() call is needed.
    # Output queue used to get the disparity frames produced above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

    while True:
        inDepth = q.get()  # blocking call, waits until new data has arrived
        frame = inDepth.getFrame()
        # Stretch the disparity values to the full 0-255 range for display
        frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
        # Ensure the frame is a numpy array for OpenCV
        depth_gray_image = np.asanyarray(frame)
        # Resize the depth image to 640x480 before sending
        resized = cv2.resize(depth_gray_image, (640, 480), interpolation=cv2.INTER_AREA)
        sender.send_image("moi", resized)
        cv2.imshow("disparity", resized)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
</file>
The receiver is the same as the one above.
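
That receiver does not appear in this excerpt, so here is a minimal sketch of what it can look like with imagezmq, assuming the sender above connects to port 5555 on the same machine; the file name receiver_depth.py is only a placeholder.

<file python receiver_depth.py>
import cv2
import imagezmq

# Listen on the port the sender connects to (5555 is an assumption taken
# from the sender above; adapt the address if your setup differs).
image_hub = imagezmq.ImageHub(open_port='tcp://*:5555')

while True:
    sender_name, image = image_hub.recv_image()
    image_hub.send_reply(b'OK')  # required by imagezmq's default REQ/REP mode
    cv2.imshow(sender_name, image)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cv2.destroyAllWindows()
</file>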
  
  
  
{{tag>zmc opencv pd pure-data pure_data python sb}}