Point Cloud using Helix Toolkit

Anonymous, 9 years ago
This discussion was imported from CodePlex

Luka1211 wrote at 2013-10-02 20:50:


I am new to C# and WPF, and I am trying to use Helix Toolkit to represent a Kinect depth image as a point cloud. I am trying to load a Point3DCollection into a PointsVisual3D as explained in the source for the PointsAndLines example, but the compiler keeps reporting that it cannot implicitly convert Point3DCollection to PointsVisual3D. Is there an example just for creating point clouds, or just points, using the mentioned classes?


abuseukk wrote at 2013-12-25 18:54:


I am also trying to create a point cloud of the human body using Kinect and Helix 3D. There is currently a sample in the examples directory which I think has some mistakes in converting depth values to 3D mesh coordinates. I found this link useful:

Kinect Stackoverflow
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
    if (depthFrame == null) return;

    // Copy the raw depth pixels out of the frame.
    DepthImagePixel[] depth = new DepthImagePixel[depthFrame.PixelDataLength];
    depthFrame.CopyDepthImagePixelDataTo(depth);

    // Map each depth pixel to a skeleton-space point (coordinates in meters).
    SkeletonPoint[] realPoints = new SkeletonPoint[depth.Length];
    CoordinateMapper mapper = new CoordinateMapper(sensor);
    mapper.MapDepthFrameToSkeletonFrame(DEPTH_FORMAT, depth, realPoints);
}
Could someone help me convert these coordinates into correct Point3D coordinates? :)

AnotherBor wrote at 2014-01-07 05:33:

Luka1211, PointsVisual3D has a Points property to which you should assign your collection of points.
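
A minimal sketch of what that looks like (the viewport name `viewport` and the sample coordinates are my own assumptions, not from the original example):

```csharp
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Build the point collection. Note the WPF type is Point3DCollection,
// which is why the implicit conversion to PointsVisual3D fails.
var points = new Point3DCollection();
points.Add(new Point3D(0, 0, 0));
points.Add(new Point3D(1, 0, 0));
points.Add(new Point3D(0, 1, 0));

// Assign the collection to the Points property -- do not try to
// assign it to the PointsVisual3D variable itself.
var cloud = new PointsVisual3D { Color = Colors.Red, Size = 2 };
cloud.Points = points;

// Add the visual to the viewport (a HelixViewport3D declared in XAML,
// assumed here to be named "viewport").
viewport.Children.Add(cloud);
```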

Abuseukk, you could use the depth data directly as one of the coordinates of the points: new Point3D(xcoord, ycoord, depth[i]) would spread the points along the Z axis according to their depth.
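
Alternatively, since MapDepthFrameToSkeletonFrame already produces skeleton-space coordinates in meters, the mapped points can be used as-is. A sketch of that conversion, assuming `realPoints` is the SkeletonPoint[] filled in the snippet above (the zero-depth filter is my assumption about how unmapped pixels come back):

```csharp
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

var points = new Point3DCollection();
foreach (SkeletonPoint p in realPoints)
{
    // Skip pixels the mapper could not resolve (assumed to come back as zeros).
    if (p.Z <= 0) continue;

    // SkeletonPoint is a meters-based 3D coordinate system, so its
    // X/Y/Z components can be used directly as Point3D coordinates.
    points.Add(new Point3D(p.X, p.Y, p.Z));
}

var cloud = new PointsVisual3D { Color = Colors.White, Size = 1 };
cloud.Points = points;
```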