streaming_over_network_with_opencv_et_zeromq
break
</code>
====Depth from an OAK-D Lite====
<code bash>
cd /
source mon_env/
python3 -m pip install depthai numpy
</code>
<file python sender_oak_depth.py>
import time
import imagezmq
import cv2
import depthai as dai
import numpy as np

# Placeholder address: replace with your receiver's IP and port
sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.10:5555')
time.sleep(2.0)

pipeline = dai.Pipeline()
# Define a source - two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create a node that will produce the depth map
# (using disparity output as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)

# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # For depth filtering
depth.setMedianFilter(median)

# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)

left.out.link(depth.left)
right.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    device.startPipeline()
    # Output queue will be used to get the disparity frames from the outputs defined above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

    while True:
        inDepth = q.get()
        frame = inDepth.getFrame()
        # Min-max rescale the raw disparity to the 0-255 display range
        frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
        # Convert depth_frame to numpy array to render image in opencv
        depth_gray_image = np.asanyarray(frame)
        # Resize Depth image to 640x480
        resized = cv2.resize(depth_gray_image, (640, 480))
        # "oak" is a placeholder sender name for the receiver side
        sender.send_image("oak", resized)
        cv2.imshow("disparity", resized)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
</file>
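The `cv2.normalize` call in the sender performs a min-max rescale of the raw disparity map into the 0-255 display range. A minimal NumPy sketch of that same mapping (the function name `minmax_to_uint8` is illustrative, not from the page):

<code python>
import numpy as np

def minmax_to_uint8(frame):
    """Min-max rescale an array to 0-255, like cv2.normalize with NORM_MINMAX."""
    f = frame.astype(np.float32)
    span = f.max() - f.min()
    if span == 0:
        # Flat input: avoid division by zero, return all black
        return np.zeros(f.shape, dtype=np.uint8)
    return ((f - f.min()) / span * 255.0).astype(np.uint8)

disparity = np.array([[0.0, 10.0], [20.0, 40.0]])
print(minmax_to_uint8(disparity))  # values now span 0 to 255
</code>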
The receiver is the same as above.
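The receiver referred to above is defined earlier in the page and not reproduced in this chunk. For reference, a minimal imagezmq receiver typically looks like the sketch below (the window handling and Esc key are assumptions, not copied from the page):

<code python>
import cv2
import imagezmq

# Listens on tcp://*:5555 by default; must match the sender's connect_to address
image_hub = imagezmq.ImageHub()

while True:
    sender_name, image = image_hub.recv_image()
    image_hub.send_reply(b'OK')  # REQ/REP pattern: acknowledge each frame
    cv2.imshow(sender_name, image)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()
</code>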
streaming_over_network_with_opencv_et_zeromq.txt · Last modified: 2022/02/25 13:15 by serge