I’m trying to get a better understanding of how the hardware and software work. I started experimenting with the scanner, and even scanning the silly little owl that comes with it sometimes works great, while other times it gives me a really hard time. On the other hand, scanning something like a person’s face works right out of the box.
I got into 3D printing 10 years ago, so I’m used to working with technology that isn’t idiot-proof (yet). I know that practice and a deeper understanding of the tech will help me get better results.
What happens when I change the size, features, or object in the setup? What settings work best for different objects? I’m surprised that, for example, the owl has very distinct topology features, yet setting the tracking mode to texture gives me much better results.
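I’m not sure this is what’s happening with the owl, but one general reason geometry-only tracking struggles is that symmetric or repetitive surface shapes give the tracker an ambiguous pose, and texture is what breaks the tie. A toy numpy/scipy sketch of that ambiguity (a sphere is the extreme case, not a claim about the owl):

```python
import numpy as np
from scipy.spatial import cKDTree

# 5000 random points on a unit sphere: geometry that looks identical from every angle.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Rotate the whole cloud 10 degrees about the z-axis.
a = np.radians(10.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
rotated = pts @ R.T

# How far each point actually moved vs. how far it is from the nearest original point.
moved = np.linalg.norm(rotated - pts, axis=1).mean()
residual = cKDTree(pts).query(rotated)[0].mean()
print(f"actual motion: {moved:.3f}   geometric residual: {residual:.3f}")
# The residual stays down at the sampling spacing even though every point moved,
# so a geometry-only tracker sees a near-perfect fit and cannot recover the pose;
# a texture/color channel is what resolves that ambiguity.
```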
I also can’t figure out how to optimize the exposure on both cameras, or how tracking works in the first place. Sometimes I get a perfect image in both cameras (at least to my eyes), but the tracking software can’t find the object. Other times it thinks it has found the object, but I end up with two offset/rotated overlays. How does it actually measure distance and track? It doesn’t seem to have a dedicated distance sensor like lidar, just cameras. Is the exposure I set for the RGB camera also used for distance measurement? Does it use the 4 scan lenses only to measure distance and the RGB lens to capture color, and then combine both plus the IR for tracking?
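From what I understand of structured-light scanners in general (I don’t know this model’s internals), depth comes from triangulation: the projector throws an IR pattern and the paired depth cameras measure how far each pattern dot shifts between their two views, while the RGB camera mostly just supplies color/texture with its own exposure. A minimal sketch of the disparity-to-depth relation, with made-up focal length and baseline values:

```python
import numpy as np

# Made-up stereo parameters; the real ones live in the scanner's factory calibration.
focal_px = 1400.0      # focal length of the depth cameras, in pixels
baseline_mm = 60.0     # distance between the two depth cameras, in mm

def depth_from_disparity(disparity_px):
    """Depth in mm from the pixel shift of the same projected dot between the two depth images."""
    return focal_px * baseline_mm / disparity_px

# Dots that shift 35, 20 and 10 px between the two views:
print(depth_from_disparity(np.array([35.0, 20.0, 10.0])))
# -> [2400. 4200. 8400.] mm: closer surfaces produce larger disparities,
#    and depth precision falls off quickly with distance.
```

If that’s how this unit works, the RGB exposure slider would mainly affect texture capture and texture tracking, and a separate IR/depth gain would drive the distance measurement, but that’s an assumption on my part, not something I’ve confirmed for this scanner.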
I’ve also been experimenting with multi-point-cloud merging, but I’m having a really hard time getting it to produce a good combined model, even when both point clouds seem to have plenty of distinct features. Even when I zoom all the way in and manually specify points that are really close together, the result still isn’t perfect, and even a 1% tilt or offset makes the model unusable.
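For what it’s worth, here’s how I’d expect the usual two-stage merge to work, sketched with the open-source Open3D library rather than the scanner’s bundled software (file names and picked indices below are placeholders): the manually picked pairs only give a coarse initial transform, and a point-to-plane ICP pass is what removes the last bit of tilt/offset.

```python
import numpy as np
import open3d as o3d  # open-source point cloud library, not the scanner's own software

# Placeholder file names -- export the two scans in a format like PLY first.
source = o3d.io.read_point_cloud("scan_a.ply")
target = o3d.io.read_point_cloud("scan_b.ply")

# Hypothetical manually picked correspondences: (index in source, index in target).
picked = o3d.utility.Vector2iVector(np.array([[120, 455], [2031, 3310], [9987, 8702]]))

# Coarse alignment from the picked pairs -- this is roughly all a manual merge can give you.
coarse = o3d.pipelines.registration.TransformationEstimationPointToPoint()
T_init = coarse.compute_transformation(source, target, picked)

# Fine alignment: point-to-plane ICP pulls out the residual tilt/offset.
source.estimate_normals()
target.estimate_normals()
icp = o3d.pipelines.registration.registration_icp(
    source, target,
    2.0,      # max correspondence distance, in the cloud's units (e.g. mm)
    T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

source.transform(icp.transformation)
o3d.io.write_point_cloud("merged.ply", source + target)
```

If even an ICP-style refinement can’t close a 1% tilt, the two clouds may simply not overlap enough, or one of them may carry accumulated tracking drift, in which case rescanning the overlap region tends to help more than adding manual points.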
submitted by /u/HorstHorstmann12