
Helix and Kinect

Anonymous 10 years ago
This discussion was imported from CodePlex

jmontoya wrote at 2012-03-23 13:45:

I am using Helix to display a point cloud from the Kinect, and it is working very well. I have two questions. First, can I change the color of each point? Second, do you have any registration tools? I want to combine multiple clouds to create a 3D solid. Thanks in advance.


objo wrote at 2012-04-03 01:34:

I have been thinking about creating a similar demo, showing the depth image as a mesh and using the RGB image for the texture map...

Are you using PointsVisual3D to display the point cloud? It only supports a single color, but it could be extended to use a material containing a 'palette' of colors; you could then set a different color for each point via its texture coordinates. Setting a different material for each point would be too slow.
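
The 'palette' could be as simple as a one-dimensional gradient brush. A rough sketch (the colors and stops are arbitrary, and I have not tested this):

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;

    // A horizontal gradient acts as a 1D color palette; a texture coordinate
    // u in [0,1] then selects a color along the gradient.
    var palette = new LinearGradientBrush(
        new GradientStopCollection
        {
            new GradientStop(Colors.Blue, 0.0),
            new GradientStop(Colors.Green, 0.5),
            new GradientStop(Colors.Red, 1.0)
        },
        new Point(0, 0),
        new Point(1, 0));

    var paletteMaterial = new DiffuseMaterial(palette);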


jmontoya wrote at 2012-04-03 02:26:

Yes, I am using PointsVisual3D to display the point cloud, and it is working well. I am very interested in the approach you are suggesting, but I don't know where to start. I am fairly new to 3D, but I am trying to learn from your very good samples. If you decide to create that sample I will keep checking, and if there is something I can help with, please let me know.

Displaying a single cloud works well, but when I try to display every frame coming from the Kinect it takes over the CPU.


objo wrote at 2012-04-03 21:54:

I think you could create a subclass of PointsVisual3D to get what you need.

Set Model.Material = new DiffuseMaterial(new ImageBrush(image)), where image is the image coming from the Kinect.

Then override the UpdateGeometry method and set Mesh.TextureCoordinates for all the points in your cloud (remember to repeat each texture coordinate 6 times per point, one for each vertex of the two triangles).
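
A rough sketch of such a subclass (not tested; the class name and the PointValues property are just my suggestions):

    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;
    using HelixToolkit.Wpf; // or HelixToolkit, depending on the version

    public class PalettePointsVisual3D : PointsVisual3D
    {
        // One value in [0,1] per point; it selects a color from the palette image.
        public IList<double> PointValues { get; set; }

        public PalettePointsVisual3D(BitmapSource paletteImage)
        {
            // The palette image becomes the material of the point quads.
            this.Model.Material = new DiffuseMaterial(new ImageBrush(paletteImage));
        }

        protected override void UpdateGeometry()
        {
            // Let the base class build the screen-space quads for all points.
            base.UpdateGeometry();

            if (this.Points == null || this.Points.Count == 0 || this.PointValues == null)
            {
                return;
            }

            // The base geometry repeats each point a fixed number of times
            // (6 vertices = 2 triangles per point, as noted above).
            int verticesPerPoint = this.Mesh.Positions.Count / this.Points.Count;

            var coordinates = new PointCollection();
            for (int i = 0; i < this.Points.Count; i++)
            {
                var uv = new Point(this.PointValues[i], 0.5);
                for (int j = 0; j < verticesPerPoint; j++)
                {
                    coordinates.Add(uv);
                }
            }

            this.Mesh.TextureCoordinates = coordinates;
        }
    }

The palette image only needs to be one pixel high; the u coordinate then picks the color.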

Are you adding 640 x 480 = 307,200 points to the PointsVisual3D? That will certainly be very CPU intensive!


objo wrote at 2012-04-08 13:22:

I added a small Kinect example (Examples/Kinect/DepthSensorDemo). All code is in the MainWindow code-behind (it should be easy to refactor to a view model). The demo reads the depth and color data and creates an image material and a triangular mesh (triangles containing too-near or too-far points are excluded, as are triangles where the depth range exceeds a given limit). The transform from depth data to 3D points can be improved; it would be interesting to get the correct scale (e.g. in meters or millimeters). Do you have a better solution for transforming the depth data to 3D points?
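
For reference, the mapping I have in mind is the usual pinhole back-projection, roughly like this sketch (the focal length and principal point below are placeholders; use calibrated values for the actual sensor):

    using System.Windows.Media.Media3D;

    public static class DepthProjection
    {
        // Approximate intrinsics for a 640x480 depth image; calibrate for real use.
        private const double FocalLengthX = 580.0; // pixels
        private const double FocalLengthY = 580.0; // pixels
        private const double CenterX = 320.0;      // principal point x
        private const double CenterY = 240.0;      // principal point y

        // Converts a depth pixel (u, v) with depth in millimeters to a 3D point in meters.
        public static Point3D ToPoint3D(int u, int v, int depthInMillimeters)
        {
            double z = depthInMillimeters / 1000.0;      // meters
            double x = (u - CenterX) * z / FocalLengthX;
            double y = (CenterY - v) * z / FocalLengthY; // flip so +y points up
            return new Point3D(x, y, z);
        }
    }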

I let the CompositionTarget.Rendering event control how often the depth and color data are updated; I am getting OK refresh rates even in the 640x480 depth mode.
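
The hookup is the standard WPF pattern, roughly like this (UpdateDepthAndColorData is just a placeholder for whatever reads the frames and rebuilds the geometry):

    using System;
    using System.Windows;
    using System.Windows.Media;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            this.InitializeComponent();

            // Called once per rendered WPF frame.
            CompositionTarget.Rendering += this.OnCompositionTargetRendering;
        }

        private void OnCompositionTargetRendering(object sender, EventArgs e)
        {
            this.UpdateDepthAndColorData();
        }

        private void UpdateDepthAndColorData()
        {
            // Read the latest depth and color frames here and update the mesh.
        }
    }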


pyrrhicpk wrote at 2012-11-06 04:52:

jmontoya wrote:

Yes, I am using PointsVisual3D to display the point cloud, and it is working well. I am very interested in the approach you are suggesting, but I don't know where to start. I am fairly new to 3D, but I am trying to learn from your very good samples. If you decide to create that sample I will keep checking, and if there is something I can help with, please let me know.

Displaying a single cloud works well, but when I try to display every frame coming from the Kinect it takes over the CPU.

 

Hi,

Can you drop in some sample code showing how to display the point cloud using PointsVisual3D, please? I am trying to display it using MeshBuilder.AddSphere, but it freezes the UI. I have the points as x, y, z values and just want to display them in a HelixViewport3D. How can I do it with PointsVisual3D? Please advise.
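
Roughly what I am after is something like this (just my sketch, assuming 'cloud' already holds the points and 'viewport' is the x:Name of the HelixViewport3D in the XAML; I am not sure the property names are right):

    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;
    using HelixToolkit.Wpf; // or HelixToolkit, depending on the version

    public partial class MainWindow : Window
    {
        private void ShowPointCloud(IEnumerable<Point3D> cloud)
        {
            var pointsVisual = new PointsVisual3D
            {
                Color = Colors.White,
                Size = 2,
                Points = new Point3DCollection(cloud)
            };

            this.viewport.Children.Add(pointsVisual);
        }
    }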

Thanks