streaming_over_network_with_opencv_et_zeromq
The receiver is the same as above.
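For reference, an imagezmq receiver follows the pattern sketched below. This is not the receiver defined earlier on this page, only a minimal sketch of the same idea; the port 5555 is an assumption.

```python
# Minimal sketch of an imagezmq receiver (assumed port 5555; the actual
# receiver used in this tutorial is defined earlier on the page).
import cv2
import imagezmq

if __name__ == '__main__':
    # Bind and wait for senders in REQ/REP mode (imagezmq default)
    hub = imagezmq.ImageHub(open_port='tcp://*:5555')
    while True:
        name, image = hub.recv_image()  # name is the string passed to send_image()
        hub.send_reply(b'OK')           # acknowledge so the sender can continue
        cv2.imshow(name, image)
        if cv2.waitKey(1) == 27:        # Esc to quit
            break
```

In REQ/REP mode the sender blocks until `send_reply()` is called, so a slow receiver throttles the sender instead of dropping frames.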
====Depth from a RealSense D455====
For the installation:
<code bash>
cd /
source mon_env/
python3 -m pip install
</code>
To be tested, I don't have the camera!
<file python sender_rs_depth.py>
import os
import time

import cv2
import imagezmq
import numpy as np
import pyrealsense2 as rs


class MyRealSense:
    """Stream the depth of a RealSense D455 as a 16-bit grayscale image."""

    def __init__(self):
        self.width = 1280
        self.height = 720
        self.pose_loop = 1
        self.pipeline = rs.pipeline()
        config = rs.config()
        pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)

        try:
            pipeline_profile = config.resolve(pipeline_wrapper)
        except RuntimeError:
            print("No RealSense camera connected")
            os._exit(0)

        device = pipeline_profile.get_device()
        config.enable_stream(rs.stream.color,
                            width=self.width,
                            height=self.height,
                            format=rs.format.bgr8,
                            framerate=30)
        config.enable_stream(rs.stream.depth,
                            width=self.width,
                            height=self.height,
                            format=rs.format.z16,
                            framerate=30)
        self.pipeline.start(config)
        self.align = rs.align(rs.stream.color)
        unaligned_frames = self.pipeline.wait_for_frames()
        frames = self.align.process(unaligned_frames)
        depth = frames.get_depth_frame()
        self.depth_intrinsic = depth.profile.as_video_stream_profile().intrinsics
        # Display the image size
        color_frame = frames.get_color_frame()
        img = np.asanyarray(color_frame.get_data())
        print(f"Image size: width = {img.shape[1]} height = {img.shape[0]}")
        # Address of the receiver: adapt to your network
        self.sender = imagezmq.ImageSender(connect_to='tcp://localhost:5555')
        time.sleep(2.0)

    def run(self):
        """Send the aligned depth frames until Esc is pressed."""
        while self.pose_loop:
            frames = self.pipeline.wait_for_frames(timeout_ms=80)
            # Align the depth frame to the color frame
            aligned_frames = self.align.process(frames)
            self.depth_color_frame = aligned_frames.get_depth_frame()
            if not self.depth_color_frame:
                continue
            # z16 depth data is already a 16-bit single-channel image
            depth_gray_16bit = np.asanyarray(self.depth_color_frame.get_data())

            self.sender.send_image("rs_depth", depth_gray_16bit)

            cv2.imshow("depth", depth_gray_16bit)
            if cv2.waitKey(1) == 27:
                break


if __name__ == '__main__':
    mrs = MyRealSense()
    mrs.run()
</file>
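The sender ships the raw z16 depth, i.e. distances in millimeters stored as 16-bit integers. On the receiving side you usually clip and rescale this to 8 bits before display. A minimal numpy sketch; the function name and the 5000 mm cutoff are assumptions, not part of the original:

```python
import numpy as np

def depth_to_display(depth_16bit, max_mm=5000):
    """Clip a z16 depth map (millimeters) and rescale it to 8 bits for display."""
    d = np.clip(depth_16bit.astype(np.float32), 0, max_mm)
    return (d * 255.0 / max_mm).astype(np.uint8)

# Samples at 0 mm, 2.5 m, 5 m, and an out-of-range 8 m
depth = np.array([[0, 2500, 5000, 8000]], dtype=np.uint16)
print(depth_to_display(depth))  # [[  0 127 255 255]]
```

Anything beyond the cutoff saturates to white; choose `max_mm` to match the depth range of your scene.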
{{tag>}}
streaming_over_network_with_opencv_et_zeromq.txt · Last modified: 2022/02/25 13:15 by serge