
Correct resize to prevent SIGSEGV on HDL64 S3. #328

Open · wants to merge 2 commits into dashing-devel

Conversation

Chase-san
Copy link

Testing on an HDL64 S3 showed, after much investigation, that the vector must be larger to accommodate the larger amount of data; otherwise a SIGSEGV occurs due to an invalid memory access. This is only needed for the organized point cloud.

I do not know whether this change is needed for the HDL64 S2 or earlier devices, as I do not have access to them.

@JWhitleyWork
Copy link
Contributor

@Chase-san - Is your HDL-64 S3 configured in dual-return mode?

@Chase-san
Copy link
Author

I do not think so, but when I have a chance to test it I will let you know.

@Chase-san
Copy link
Author

I sent the #HDLRETS$ command several times to be sure, interspersed with power cycles. I was able to change the RPM with the #HDLRPMxxxx$ command, so I assume I managed to turn dual-return off if it was on. However, there is no change in the SIGSEGV without this fix.

Additionally, wouldn't the velodyne driver module grabbing only the needed number of packets prevent dual return from flooding the converter? I would expect a choppy point cloud rather than a SIGSEGV.

@JWhitleyWork
Copy link
Contributor

Sorry for taking so long to get back to this, @Chase-san. I understand what you're doing here, but I don't really understand the rationale behind your implementation. It doesn't seem to make sense that the S3 would provide twice the data that the S2.1 does, nor that we would not have seen this before now with other users of the S3. Can you point to somewhere in the manual that indicates this is the case?

@Chase-san
Copy link
Author

This is the experimental result. Without this change it simply segfaults. I am unsure why it is producing so much data; it may well be a packet-generation or hardware issue.

I will get into contact with some of my colleagues and find out if they know more about this.

@JWhitleyWork
Copy link
Contributor

@Chase-san Any update?

@dawonn-haval
Copy link

It's in my queue to do a deeper dive into why this patch is required for the HDL64 S3, I just haven't found the time yet.

I will note that in addition to this patch, we needed another patch to fix the intensity values. I'll leave it here for now in case others are interested, but it will be a while before I can really dig in and give you a technical explanation. YMMV. :)

diff --git velodyne_pointcloud/include/velodyne_pointcloud/datacontainerbase.hpp velodyne_pointcloud/include/velodyne_pointcloud/datacontainerbase.hpp
index b0a82c2..ccea34a 100644
--- velodyne_pointcloud/include/velodyne_pointcloud/datacontainerbase.hpp
+++ velodyne_pointcloud/include/velodyne_pointcloud/datacontainerbase.hpp
@@ -124,7 +124,12 @@ public:
     cloud.height = config_.init_height;
     cloud.is_dense = static_cast<uint8_t>(config_.is_dense);
     cloud.row_step = cloud.width * cloud.point_step;
-    cloud.data.resize(scan_msg->packets.size() * config_.scans_per_packet * cloud.point_step);
+    int data_size = scan_msg->packets.size() * config_.scans_per_packet * cloud.point_step;
+    // additional space is required for HDL64 S3
+    if (!cloud.is_dense && cloud.width == 64) {
+      data_size = scan_msg->packets.size() * config_.scans_per_packet * cloud.point_step * 2;
+    }
+    cloud.data.resize(data_size);
     // Clear out the last data; this is important in the organized cloud case
     std::fill(cloud.data.begin(), cloud.data.end(), 0);
     if (config_.transform) {
diff --git velodyne_pointcloud/src/lib/rawdata.cpp velodyne_pointcloud/src/lib/rawdata.cpp
index a39ab55..573e45e 100644
--- velodyne_pointcloud/src/lib/rawdata.cpp
+++ velodyne_pointcloud/src/lib/rawdata.cpp
@@ -264,7 +264,7 @@ void RawData::unpack(const velodyne_msgs::msg::VelodynePacket & pkt, DataContain
         float focal_offset = 256.0f *
           (1.0f - corrections.focal_distance / 13100.0f) *
           (1.0f - corrections.focal_distance / 13100.0f);
-        float focal_slope = corrections.focal_slope;
+        float focal_slope = corrections.focal_slope / 10;
         intensity += focal_slope *
           (std::abs(focal_offset - 256.0f * square((1.0f - distance) / 65535.0f)));
         intensity = (intensity < min_intensity) ? min_intensity : intensity;

@JWhitleyWork
Copy link
Contributor

@Chase-san @dawonn-haval Any updates?

@dawonn
Copy link

dawonn commented Dec 5, 2022

I no longer use this hardware, but it was required back in the day when I was using it.

4 participants