For bugs and new features, use the issue tracker on GitHub.
Also try the chat room!
Help Calculating Normals - Sample Code?
GilbertF wrote at 2012-10-03 22:10:
First, thanks for this amazing tool. I am using MeshBuilder to create an object made of quads; I am passing four points in counter-clockwise order to the function. This is a laser scanner and I am reading contours to recreate an object. Now I have the data and an object, but it looks bumpy or faceted, not smooth. I need to calculate normals to make it look better; any help will be appreciated.
thanks
Gilbert figueroa
orlando, fl
objo wrote at 2012-10-03 22:41:
You don't have to calculate normals, just share vertices for adjacent panels where you want a smooth surface. You will not share vertices when you use the AddQuad method, try to use AddRectangularMesh (if your surface is a 'rectangular mesh') or set the positions and triangle indices yourself!
GilbertF wrote at 2012-10-13 07:49:
Hi Objo
Thanks for your last reply. I am still stuck trying to get a smooth surface. I'm trying to use AddRectangularMesh but with no results. I was wondering where I can find sample code for the type of data I need to pass to the method; it says it is IList<Point3D>? Sample code or a sample would be appreciated.
thanks
gilbert
objo wrote at 2012-10-14 15:54:
Use "Find usages" function (Shift-F12?) in Visual Studio to find two examples of AddRectangularMesh. The points should be specified 'row by row' and you have to specify the number of 'columns'. Note that the front side of the geometry is defined by the order of the points (reversing the order should reverse the normals).
GilbertF wrote at 2012-10-14 22:54:
I'm building an application that needs to detect the contour or shape of a hand or foot. This application uses a laser to draw a line on a surface, a hand in this case. We move the laser to a position, then take a snapshot with a camera and analyze the image to detect the X,Y coordinates of the detected red laser line. Then we command the laser to advance a certain distance to a new 'row' (Y coordinate).
What we end up with is a list of rows, technically a List<List<Point3D>>. Our image might be 380 columns by 480 rows, but this list does not contain all 380x480 points; it just contains the X,Y,Z coordinates of the 'red' laser pixels it detected. As we are advancing the laser 5 or 10 rows at a time, most rows are missing in this collection. Also, there can be undetected pixels in each of the rows. So what we have is not a complete collection of X,Y coordinates, and each row might be a different size, etc.
In order to display a more realistic image of the scanned object I'm trying to take this collection of points and plot a smooth surface that interpolates all the missing points from the pixel points we collected from the images.
Below is an excerpt of the different approaches I have taken so far.
First I create the meshBuilder:
MeshBuilder meshBuilder = new MeshBuilder(true, false);
Then we create our points collection and fill it with the data values:
List<List<Point3D>> pointsList = new List<List<Point3D>>();
[some code here to fill in the collection]
A - This approach plots a series of small cubes that resemble our scanned hand:
foreach (List<Point3D> pointsRow in pointsList)
    foreach (Point3D point in pointsRow)
        meshBuilder.AddBox(point, 1, 1, 1);
So that indicates that our points collection is not that far off.
So I'm trying to create a mesh with this points collection by doing this:
Approach #1: (where matrix.maxCols is the width of our image)
meshBuilder.CreateNormals = true;
foreach (List<Point3D> pointsRow in pointsList) {
    meshBuilder.AddRectangularMesh(pointsRow, matrix.maxCols);
}
Approach #2: (this fills a maxCols x maxRows points array, with the undetermined values left as (0,0,0))
Point3D[,] pointsArray = new Point3D[matrix.maxCols, matrix.maxRows];
foreach (List<Point3D> pointsRow in pointsList)
    foreach (Point3D point in pointsRow)
        pointsArray[(int)point.X, (int)point.Y] = point;
meshBuilder.AddRectangularMesh(pointsArray);
Neither of these two approaches seems to work for me, and I can't figure out what I'm doing wrong here.
I think it complains about not having the correct number of normals.
Thanks in advance!
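For reference, a hedged sketch of the direction objo suggested earlier: build one complete maxCols x maxRows grid (filling in the rows and pixels the scanner skipped, e.g. by interpolating between the measured rows) and then make a single AddRectangularMesh call with the points ordered row by row. pointsList, matrix.maxCols and matrix.maxRows are the names from the post above; FillMissingValues is a hypothetical interpolation helper.
var meshBuilder = new MeshBuilder(true, false);

int rows = matrix.maxRows, cols = matrix.maxCols;

// Hypothetical helper: interpolate the sparse scan data into a complete height grid.
double[,] gridZ = FillMissingValues(pointsList, rows, cols);

var gridPoints = new List<Point3D>(rows * cols);
for (int row = 0; row < rows; row++)
{
    for (int col = 0; col < cols; col++)
    {
        gridPoints.Add(new Point3D(col, row, gridZ[row, col]));
    }
}

// One call for the whole surface, so adjacent quads share vertices and shade smoothly.
meshBuilder.AddRectangularMesh(gridPoints, cols);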
how to read triangle indices from imported model
jasna100 wrote at 2014-05-20 14:34:
Any ideas and suggestions?
thanks
Beginners Question
Patpop01 wrote at 2014-03-22 08:41:
I just downloaded and was trying my first program.
But I have some issues...
I have written following code
<Window x:Class="MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:helix="http://helixtoolkit.codeplex.com"
Title="Prova" Height="378" Width="605"
WindowStartupLocation="CenterScreen">
<Grid>
<helix:HelixView3D IsViewportHitTestVisible="True">
<helix:HelixView3D.Camera>
<PerspectiveCamera LookDirection="-10,-10,-10" Position="10,10,10" UpDirection="0,0,1"/>
</helix:HelixView3D.Camera>
</helix:HelixView3D>
</Grid>
</Window>
But I am always getting errors stating : __The name "HelixView3D" does not exist in the namespace "http://helixtoolkit.codeplex.com".__
__The tag 'HelixView3D' does not exist in XML namespace 'http://helixtoolkit.codeplex.com'.__
The type 'helix:HelixView3D' was not found. Verify that you are not missing an assembly reference and that all referenced assemblies have been built.
The attachable property 'Camera' was not found in type 'HelixView3D'.
In the project properties I added a reference to HelixToolkit and all the HelixToolkit.Wpf items.
I am using VS Express 2012 and have the latest Helix 3D Toolkit.
When I use the code from (Wiki Link: [discussion:459092] ) it builds, but I still get the
__The name "HelixViewport3D" does not exist in the namespace "http://helixtoolkit.codeplex.com".__ error.
Any ideas?
Thx
Mrme wrote at 2014-03-25 10:27:
xmlns:helix="clr-namespace:HelixToolkit.Wpf;assembly=HelixToolkit.Wpf"
Patpop01 wrote at 2014-03-26 06:43:
However I finally got it working...
First I tried to install the NuGet package on VS2012 Express... but that didn't work.
I downloaded the source files using the SVN link on the documentation/install page and tried to build, but I always got an error stating the helixtoolkit.csproj was not compatible with my current version.
So I uninstalled VS2012 and installed VS2013 Express.
Still the same problem when trying to build it myself...
But under VS2013 the NuGet package worked. Since then I have been enjoying this toolkit.
Now my code does not give me any warning or error.
Thx
objo wrote at 2014-04-29 10:45:
tdiethe wrote at 2014-05-01 13:06:
objo wrote at 2014-05-05 14:57:
Just need a quick overview
Rogad wrote at 2013-11-30 16:32:
I'm just starting with WPF and want to work on animating a 3D avatar, probably with morph targets. At this stage I am not even sure if WPF is the way to go, but it looks interesting. Previously I have been working on a 2D system that switches frames around to get an animation, but it's a bit limited, so I am looking at 3D.
Basically my avatar will be a human head or similar, and I would want to animate mouth movements, eyes and things like blinks/smiles. I have the head already built and all the morph targets too.
So my question is, do you think WPF + Helix 3D is going to be worth attempting for my project idea?
I see some of your documentation is still in progress. How busy is the project? I only ask as I fear I may have a lot of questions!
Also in my mind I have an idea forming on a simple morph target system (read that as I found some tutorials and am now ahead of myself!) so I would need to manipulate individual points on a model. This sounds interesting to me to at least try and learn something as I do not know much about coding for 3D manipulation.
Does Helix turn, say, an .OBJ model into XAML? How to control the points/vertices and so on are just a couple of the questions in my mind.
Anyway I have rambled enough, I'd appreciate any of your experienced thoughts on this. Many thanks for your time :)
P.S. on Stackoverflow is where I heard of you...
http://stackoverflow.com/questions/3127753/displaying-3d-models-in-wpf
One of the people there mentioned a tool called 'Deep Exploration' - but I have not been able to find it anywhere on Google. Do you happen to know anything about it?
Thanks !
objo wrote at 2013-12-02 19:52:
- Yes, I think you should try WPF3D first, then consider DirectX if performance is not good enough or you need more control over the gpu (shaders, physics etc).
I like WPF3D because it is so simple and well integrated with WPF, and performance is good enough for most of my needs.
- Documentation is high priority, but I don't have any available time to write it myself at the moment...
- Yes, this library can read most .obj models and create standard WPF GeometryModel3D objects. You can modify the vertices of the MeshGeometry3D after the model has been loaded (see the sketch below).
- I recommend stackoverflow for general WPF 3D questions, and this forum only for questions about the toolkit. stackoverflow also contains a lot of questions related to this toolkit too, see http://stackoverflow.com/questions/tagged/helix-3d-toolkit?sort=newest
- Sorry I don't know anything about Deep Exploration (but google shows: SAP Visual Enterprise Author (formerly Deep Exploration from Right Hemisphere))
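A minimal sketch of the .obj point above (ModelImporter and MeshGeometry3D are the Helix Toolkit / WPF types mentioned; the file name and the vertex tweak are invented for illustration):
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Load an .obj file into a standard WPF Model3DGroup.
var importer = new ModelImporter();
Model3DGroup model = importer.Load("head.obj"); // hypothetical file name

// Walk the loaded models and nudge the vertices of each mesh (e.g. toward a morph target).
foreach (Model3D child in model.Children)
{
    var gm = child as GeometryModel3D;
    var mesh = gm != null ? gm.Geometry as MeshGeometry3D : null;
    if (mesh == null || mesh.IsFrozen)
        continue;

    for (int i = 0; i < mesh.Positions.Count; i++)
    {
        Point3D p = mesh.Positions[i];
        mesh.Positions[i] = new Point3D(p.X, p.Y, p.Z + 0.01); // example displacement
    }
}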
Rogad wrote at 2013-12-03 20:07:
I've had a play with a couple of the demos in Visual Studio 2013. I noticed some did not work, but that could well be me at fault. The two I was really interested in work fine.
is there any way to serialize geometries to hard disk?
behnam263 wrote at 2014-07-21 08:19:
behnam263 wrote at 2014-07-27 10:06:
Viewport2DVisual3D Transparency problem
milosgregor wrote at 2013-08-25 01:51:
I have a problem with the transparency of Viewport2DVisual3D objects. As you can see in the picture, the Viewport2DVisual3D between wells PK-1 and PK-2 has the same background color as the background color of the HelixViewport3D (white).
The Visual of the Viewport2DVisual3D consists of a canvas on which only polygons are plotted. The background of the canvas and the material of the Viewport2DVisual3D are set to transparent.
Why is the part of the section between wells PK-3 and PK-2 not visible? How do I set it? Here is the code for creating the section between wells:
Dim we As New Viewport2DVisual3D
Dim mesh = New MeshGeometry3D()
Dim pos As New Point3DCollection
pos.Add(New Point3D(Well1.X, Well1.Y, Zmax))
pos.Add(New Point3D(Well1.X, Well1.Y, Zmin))
pos.Add(New Point3D(Well2.X, Well2.Y, Zmin))
pos.Add(New Point3D(Well2.X, Well2.Y, Zmax))
mesh.TriangleIndices = New Int32Collection(New Integer() {0, 1, 2, 0, 2, 3})
mesh.TextureCoordinates = New PointCollection(New Point() {New Point(0, 0), New Point(0, 1), New Point(1, 1), New Point(1, 0)})
mesh.Positions = pos
we.Geometry = mesh
Dim material = New DiffuseMaterial(New SolidColorBrush(Colors.Transparent))
Viewport2DVisual3D.SetIsVisualHostMaterial(material, True)
we.Material = material
Dim SectionCanvas As New Canvas
SectionCanvas.Width = (Math.Sqrt(((Well2.X - Well1.X) * (Well2.X - Well1.X)) + ((Well2.Y - Well1.Y) * (Well2.Y - Well1.Y)))) * ScalingFactor
SectionCanvas.Height = (Zmax - Zmin) * ScalingFactor
SectionCanvas.Background = New SolidColorBrush(Colors.Transparent)
For geo As Integer = 0 To Sections3DDefinition(i).Section.SectionSegments.Count - 1
Dim pol As New System.Windows.Shapes.Polygon
pol.Stroke = New SolidColorBrush(Colors.Black)
pol.ToolTip = Sections3DDefinition(i).Section.SectionSegments(geo).GeoID & " - " & Sections3DDefinition(i).Section.SectionSegments(geo).GeoName
pol.StrokeThickness = 1
pol.Fill = imgBrush
For k As Integer = 0 To Sections3DDefinition(i).Section.SectionSegments(geo).TopPoints.Count - 1
pol.Points.Add(New Point(Sections3DDefinition(i).Section.SectionSegments(geo).TopPoints(k).X * SectionCanvas.Width, Sections3DDefinition(i).Section.SectionSegments(geo).TopPoints(k).Y * SectionCanvas.Height))
Next
For k As Integer = 0 To Sections3DDefinition(i).Section.SectionSegments(geo).BottomPoints.Count - 1
pol.Points.Add(New Point(Sections3DDefinition(i).Section.SectionSegments(geo).BottomPoints(k).X * SectionCanvas.Width, Sections3DDefinition(i).Section.SectionSegments(geo).BottomPoints(k).Y * SectionCanvas.Height))
Next
Next
we.Visual = SectionCanvas
my3D.Children.Add(we)
objo wrote at 2013-09-05 08:14:
Also note that the sections must be depth-sorted to get transparency (almost) right when rotating the model.
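A rough sketch (not from the original thread) of one way to do that depth sorting: reorder the transparent Viewport2DVisual3D sections back-to-front relative to the camera position whenever the camera moves. It assumes the sections are direct children of the viewport and ignores any transforms on the visuals; the class and method names are invented.
using System.Linq;
using System.Windows.Controls;
using System.Windows.Media.Media3D;

public static class SectionSorting
{
    // Re-add the transparent Viewport2DVisual3D sections farthest-first, so that WPF
    // draws the nearer transparent surfaces last. Call this whenever the camera moves.
    public static void SortSectionsBackToFront(Viewport3D viewport, Point3D cameraPosition)
    {
        var sections = viewport.Children.OfType<Viewport2DVisual3D>().ToList();
        foreach (var section in sections)
        {
            viewport.Children.Remove(section);
        }

        foreach (var section in sections.OrderByDescending(s => DistanceTo(s, cameraPosition)))
        {
            viewport.Children.Add(section);
        }
    }

    // Distance from the camera to the center of the section's mesh bounds
    // (any Transform on the visual is ignored in this sketch).
    private static double DistanceTo(Viewport2DVisual3D section, Point3D cameraPosition)
    {
        Rect3D b = section.Geometry.Bounds;
        var center = new Point3D(b.X + b.SizeX / 2, b.Y + b.SizeY / 2, b.Z + b.SizeZ / 2);
        return (center - cameraPosition).Length;
    }
}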
Performance
stfx wrote at 2012-03-03 07:44:
Hey there, long time no see ;)
You have been doing some great work here, though I haven't checked the code in detail yet.
One thing that has always been an issue though, not just with this awesome toolkit but rather with WPF 3D in general, is performance (at least compared to DirectX). That's why we really need to optimize the code.
For example, in HelixViewport3D.cs you should remove the this.Viewport.Children.Contains check in the Add(), Clear(), Remove() functions, as this is something that should be checked by the calling code. In fact I would even remove them completely, since we already have access to Viewport.Children and can add/remove/clear them just as easily (in a WinForms ListView you also add the items like listview1.Items.Add(...) and not listview1.Add(...)).
Also, it might be worth considering in general which code should be in the library and which could be moved to the examples. For example, the headlight seems like a seldom-used feature which slows the viewer down a bit. Probably not noticeable, but still, why not add a handler for CameraChanged and move it to the example code? Although I guess it's easy enough for anyone to remove it from the code if they so wish.
There are probably more places where performance could be optimized - maybe it's even possible that models that use the same mesh could avoid storing it redundantly.
Thanks and keep up the good work
EDIT: Just out of curiosity, how does the performance of the first revisions compare to the current one? Also, is your screen line implementation quite slow compared to the usual DirectX line drawing? This has always bothered me with WPF, actually. Btw, were there any improvements in .NET 4.0 for WPF 3D?
EDIT2: I need a scene with many models and simple emissive meshes (sphere, box, plane), as can be seen in a screenshot here: http://code.google.com/p/freelancermodstudio/, which need to be selected (display a bounding box) and then translated, scaled and rotated. I know for a fact that it's possible with this toolkit, but would you say a DirectX approach might be more suitable performance-wise?
objo wrote at 2012-03-10 16:02:
Thanks for the feedback! I think you are right, the Add, Clear and Remove methods are not needed. See changelist a79030ea7c0e.
I have not run a profiler on this library lately, but I don't think the 'headlight' feature has any noticeable performance impact.
Using the HelixViewport3D to calculate the total number of triangles will have a performance impact, so this should be used with care.
The line implementation (LinesVisual3D) in this library is of course slow compared to using the DirectX LineList primitive, but it should be faster than the other two WPF3D implementations shown in the PointsAndLinesDemo.
If you need to draw a lot of lines (more than 1000 segments) or more control of the shaders, I think you should go for DirectX. The SlimDX library includes a WPF example!
stfx wrote at 2012-03-11 00:23:
Thanks for your answer.
Btw before I forget:
- HelixViewport3D:OnApplyTemplate() ... last assert is incorrectly the same as the previous one
- Viewport3DHelper:FindNearest() ... the 5 lines after
// first transform the Model3D hierarchy
- Also Resharper notices some small coding issues which might be worth looking into
Right now I am struggling to correctly use the LinesVisual3D class (I can't seem to work out the correct points). I was delighted when I first noticed the BoundingBoxVisual3D.cs file, thinking that it would create those screen-space lines around an object, until I found out that it's just some tubes arranged like a bounding box :D
objo wrote at 2012-03-11 20:46:
OnApplyTemplate fixed, thanks!
Let me know if you find the bug in Viewport3DHelper:FindNearest!
Yes, I try to keep the Resharper/Stylecop status on 'green'. Viewport3DHelper and HelixViewport3D seem to be ok, but I am sure there are warnings in other classes. Please report if you see some bugs.
Yes, BoundingBoxVisual3D is using cylinders - I made that class before making the LinesVisual3D. Should add a property where you can select between lines or cylinders.
stfx wrote at 2012-03-11 22:23:
Well commenting out the 5 lines after that comment fixed the FindNearest function for me ;)
Hm I would rather replace the functionality in BoundingBoxVisual3D as I think nobody would use cylinders for something like that :P
Other than that, I only found some code styling issues which don't affect anything, like: why are the comments in the functions in MeshBuilder.cs commented out twice? Also, in MeshBuilder.AddTriangleStrip() the Wikipedia link appears twice.
objo wrote at 2012-04-05 02:08:
Could the error be in GetTransform(Visual3D, Model3D)? Can you create a small example model where the current implementation fails?
I have tested with the stick figure in ExportDemo (this contains lots of Model3D transforms), and it seems to work ok here (double click on the 'fingers', and you see the target point will be set correctly).
Bounding box with lines - I added this issue http://helixtoolkit.codeplex.com/workitem/9950
Thanks, the MeshBuilder wiki-link comments will be fixed in the next code push. I use "////" when I don't want resharper to reformat comment 'figures'.
peterthesaint wrote at 2014-07-25 12:44:
It is my first post here. Thanks for your great work !!!
Previously in my model I was using ScreenLinesVisual3d.
Now I noticed the new Helix version provides LinesVisual3D, and I was wondering whether you have implemented the changes suggested in this link:
http://www.ericsink.com/wpf3d/1_ScreenSpaceLines3D_Performance.html
Regards,
Peter
Kinect Panorama Demo
rakelmasuta wrote at 2014-03-13 00:35:
But I can't find the source for the integration between the Kinect and the panorama demo.
Can you help me, please?
objo wrote at 2014-04-29 10:56:
rakelmasuta wrote at 2014-05-19 18:02:
Here's the code:
~ mainwindow.xaml
<Window x:Class="PanoramaDemo.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ht="clr-namespace:HelixToolkit.Wpf;assembly=HelixToolkit.Wpf" Title="TA Rakel"
xmlns:k="http://schemas.microsoft.com/kinect/2013"
Height="480" Width="640">
<Grid>
<ht:HelixViewport3D x:Name="view1" ShowViewCube="False" ShowCameraTarget="False" CameraMode="FixedPosition" RotationSensitivity="0.6">
<ModelVisual3D>
<ModelVisual3D.Content>
<AmbientLight Color="White"/>
</ModelVisual3D.Content>
</ModelVisual3D>
<ht:PanoramaCube3D Source="Models\Opera\"/>
</ht:HelixViewport3D>
<k:KinectSensorChooserUI HorizontalAlignment="Center" VerticalAlignment="Top" Name="sensorChooserUi" />
<k:KinectUserViewer VerticalAlignment="Top" HorizontalAlignment="Center" k:KinectRegion.KinectRegion="{Binding ElementName=kinectRegion}" Height="100" UserColoringMode="Manual" />
<k:KinectRegion Name="kinectRegion">
<Grid>
<k:KinectTileButton Label="Press me!" Click="ButtonOnClick" VerticalAlignment="Top" Margin="28,0,0,0" HorizontalAlignment="Left"></k:KinectTileButton>
<k:KinectCircleButton Label="Circle" HorizontalAlignment="Right" Height="200" VerticalAlignment="Top" Click="ButtonOnClick" >Hi</k:KinectCircleButton>
<k:KinectScrollViewer VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" VerticalAlignment="Bottom">
<StackPanel Orientation="Horizontal" Name="scrollContent" />
</k:KinectScrollViewer>
</Grid>
</k:KinectRegion>
</Grid>
</Window>
~ mainwindow.xaml.cs
// --------------------------------------------------------------------------------------------------------------------
// <copyright file="MainWindow.xaml.cs" company="Helix 3D Toolkit">
// http://helixtoolkit.codeplex.com, license: MIT
// </copyright>
// --------------------------------------------------------------------------------------------------------------------
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit;
using Microsoft.Kinect.Toolkit.Controls;
using Microsoft.Kinect.Toolkit.Interaction;
namespace PanoramaDemo
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
private KinectSensorChooser sensorChooser;
public MainWindow()
{
InitializeComponent();
Loaded += OnLoaded; // Kinect loaded
var camera = view1.Camera as PerspectiveCamera;
camera.Position = new Point3D(0, 0, 0);
camera.LookDirection = new Vector3D(0, 1, 0);
camera.UpDirection = new Vector3D(0, 0, 1);
camera.FieldOfView = 120;
}
/// <summary>
/// Detect Kinect status
/// </summary>
private void OnLoaded(object sender, RoutedEventArgs routedEventArgs)
{
this.sensorChooser = new KinectSensorChooser();
this.sensorChooser.KinectChanged += SensorChooserOnKinectChanged;
this.sensorChooserUi.KinectSensorChooser = this.sensorChooser;
this.sensorChooser.Start();
}
private void SensorChooserOnKinectChanged(object sender, KinectChangedEventArgs args)
{
bool error = false;
if (args.OldSensor != null)
{
try
{
args.OldSensor.DepthStream.Range = DepthRange.Default;
args.OldSensor.SkeletonStream.EnableTrackingInNearRange = false;
args.OldSensor.DepthStream.Disable();
args.OldSensor.SkeletonStream.Disable();
}
catch (InvalidOperationException)
{
// KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
// E.g.: sensor might be abruptly unplugged.
error = true;
}
}
if (args.NewSensor != null)
{
try
{
args.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
args.NewSensor.SkeletonStream.Enable();
}
catch (InvalidOperationException)
{
error = true;
// KinectSensor might enter an invalid state while enabling/disabling streams or stream features.
// E.g.: sensor might be abruptly unplugged.
}
}
if (!error)
kinectRegion.KinectSensor = args.NewSensor;
}
/// <summary>
/// For the button click event
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void ButtonOnClick(object sender, RoutedEventArgs e)
{
MessageBox.Show("Well done!");
}
}
}
How can I control the rotation of the panorama image and zoom it using the Kinect region?
objo wrote at 2014-05-19 21:45:
Please fork and create a pull request to get your code merged into the Kinect examples.
rakelmasuta wrote at 2014-05-20 00:47:
Problems with textures in .obj files
soheilvb wrote at 2012-03-28 05:05:
Hey guys,
Thanks for this toolkit, it's really great.
I'm having problems with the textures of some .obj files that I extracted from Unity using a script.
Here are two samples of the .obj files (direct link):
http://parsaspace.com/files/9680754884/objects.zip.html
3ds Max opens these files perfectly, but MeshLab can't load the texture.
I also loaded these files in 3ds Max, exported them to .3ds format and loaded them with the toolkit; there was the same problem!
I'm using the latest version of your component (compiled from the latest source code).
I have another question:
I want to load, for example, 2000 .obj files into one scene. I loaded them into one HelixViewport3D; that takes about 20 seconds and 820 MB of memory, and it's slow and not smooth... (on my 3.4 GHz quad-core iMac).
Any way to optimize this?
Thanks for your time.
Sorry for my poor English.
soheilvb wrote at 2012-03-28 12:25:
For the texture problem: I was using the previous version. Then I used the new version and saw that everything is white!!!
Then I commented out these lines:
if (this.AmbientMap == null)
{
// var ambientBrush = new SolidColorBrush(this.Ambient) { Opacity = this.Dissolved };
// mg.Children.Add(new EmissiveMaterial(ambientBrush));
}
else
and now everything is good.
Also, for the .3ds format, I think you forgot to change
var textureBrush = new ImageBrush(img);
with
var textureBrush = new ImageBrush(img) { ViewportUnits = BrushMappingMode.Absolute, TileMode = TileMode.Tile };
This solved the problem.
I still have the second problem though. Thanks.
objo wrote at 2012-04-05 01:28:
thanks for the files and the bug report, I will look into this soon!
objo wrote at 2012-04-05 23:14:
I applied the two bug fixes! Thanks! Your example .obj files look ok now.
objo wrote at 2012-04-05 23:17:
are you loading 2000 different models? If not, you should freeze and share the geometry for all models that should look the same.
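A small sketch of that advice (invented names, with a sphere as a stand-in for the repeated model): build the mesh once, freeze it, and reuse it in many GeometryModel3Ds that differ only by their transform.
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Build one mesh and freeze it so it can be shared by many models.
var builder = new MeshBuilder();
builder.AddSphere(new Point3D(0, 0, 0), 0.5);
MeshGeometry3D sharedMesh = builder.ToMesh();
sharedMesh.Freeze();

var sharedMaterial = new DiffuseMaterial(Brushes.Orange);
sharedMaterial.Freeze();

// 2000 models that look the same: each one reuses the frozen mesh and material.
var group = new Model3DGroup();
for (int i = 0; i < 2000; i++)
{
    group.Children.Add(new GeometryModel3D(sharedMesh, sharedMaterial)
    {
        Transform = new TranslateTransform3D(i % 50, i / 50, 0)
    });
}
group.Freeze();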
soheilvb wrote at 2012-04-06 13:17:
Yes, 2000 different models. I guess WPF 3D can't manage this number of objects; I should probably go to DirectX and SlimDX.
Thanks for your response.
objo wrote at 2012-04-08 13:15:
Try the latest version of the obj reader, it should have better performance when using smoothing groups. I also profiled the code and found that the most expensive operation was splitting the input string by a regular expression. I changed it into a compiled expression, but maybe there are faster ways to solve this..
See also http://msdn.microsoft.com/en-us/library/bb613553.aspx
Did you try adding all models into a Model3DGroup? That should be more efficient than adding 2000 Visual3Ds.
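A fragment for the Model3DGroup suggestion (loadedModels and viewport are assumed names for the imported Model3D objects and the HelixViewport3D from the application):
// One Model3DGroup under a single ModelVisual3D instead of 2000 separate Visual3Ds.
var allModels = new Model3DGroup();
foreach (Model3D m in loadedModels)   // e.g. the results of ModelImporter.Load(...)
{
    allModels.Children.Add(m);
}
allModels.Freeze();                   // freeze if the models will not be modified later

viewport.Children.Add(new ModelVisual3D { Content = allModels });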
soheilvb wrote at 2012-04-08 14:42:
Thanks.
I have read that MSDN article multiple times.
I used the single Model3DGroup method before; there was no improvement in performance at all.
I have about 40 textures and many meshes use the same material. I'm thinking about merging the meshes that have the same material into larger meshes, but there are two problems:
1. I don't know how to merge them, and does it really affect the performance?
2. I need every model to be clickable.
I used the Simplify method in MeshGeometryHelper too, but there were some errors...
There are ultimately about 900,000 vertices in all the models.
Thanks a lot.
soheilvb wrote at 2012-04-08 15:02:
Ohhhhhh myyyyy gooooood !!
I just tried your latest version...
The RAM usage decreased from 800 MB to 300 MB.
The time to import the objects was cut in half...
I'm really grateful.
I think the problem was with the MeshBuilder, which you fixed. Am I right?
Now, if I share materials, will that affect the performance?
The RhinoDemo, 4 windows but the same HelixViewport3D
HDNguyen wrote at 2012-03-13 20:33:
Hi Objo
I am studying Helix and find it very good. I tried to extend the RhinoDemo example but without success. Could you help me?
As you know, the RhinoDemo actually has 4 windows with 4 unrelated HelixViewport3Ds. What I am trying to achieve is that the left window contains the main HelixViewport3D and shows the front view, the right window shows the back view, the bottom left shows the left view and the bottom right shows the right view - but all of the same HelixViewport3D.
Is this possible with WPF 3D in general and with Helix specifically? What should we do so that we can have 2 or more windows showing the same Viewport3D but with different camera look directions and camera positions?
govert wrote at 2012-03-13 22:26:
You're seeing a place where the WPF design and the Helix Toolkit design do not fit together so well. I'd also love to hear Objo's response, but I can give some background (filled with a painful amount of jargon).
In WPF there is a Visual tree, and every Visual (say a BoxVisual3D) can appear only once in the Visual tree. A Viewport3D (which has the Camera) is itself a Visual, and will display its Visual3D children, so those Visual3D children can appear in at most one Viewport3D. So if we want to see the same 'scene' from different Camera positions (in different Viewport3Ds), the different Cameras have to be looking at _different_ ModelVisual3Ds.
Now, different ModelVisual3Ds can display the same Model3D. This makes me prefer an approach which puts the geometry construction on the Model3D side, rather than to have the Visual3Ds build the mesh, as happens in the various Visual3D classes in the current Helix Toolkit.
For example, say you want to draw a Box that will have some fixed dimensions (Length x Width x Height) and you want to see it in different Viewport3Ds (like in the different RhinoDemo views). Then you certainly need to have at least one Visual3D of your box for each Viewport3D (say we have four Viewport3Ds) - this is just how WPF works. But the current design in the Helix Toolkit might lead you to make a BoxVisual3D with the right dimensions, which can then only be displayed once (in one Viewport3D). So you'll end up making four BoxVisual3Ds, each with the same dimensions etc., but representing a single box in the 'real' world.
One might prefer to have the four Visual3Ds share a single Model3D (probably a GeometryModel3D with a MeshGeometry, which could be textured etc.) That way there is one shared Box model, displayed through the four Visual3Ds. This would put the Mesh generation code (currently in the Tesselate methods of the XXXVisual3Ds) on something like a BoxModel3D (though I don't think you can derive from GeometryModel3D that way - so maybe BoxModelBuilder3D). This is to generate the meshes once for the 'real' Box object and then have them 'display' through different Visual3Ds - each Visual3D might now be a simple Visual3D, and no longer a model-specific Visual. Adding and using the interactivity of the UIElement3D can be tricky, because you now have to push interactions into the Model3D if they are to display in the different views. (Imagine you want to select one side of your Box.)
So - this is a tricky issue, and the BoxVisual3D etc. design in Helix Toolkit at the moment doesn't really make this kind of multi-view interface easy. By moving the tesselation code out of the Visual3Ds, you'll be able to make a single Model3D (or rather a few Model3DGroups, each with many Models and various transforms) that are 'displayed' by a few very simple Visual3Ds into the different Viewport3Ds. The granularity of your Model / Visual breakdown might be determined by what kind of interactivity you need to support.
I think the current Helix Toolkit Model / Visual design with the mesh being built in the Visual3D came from an early sample by one of the WPF3D developers, and it might have some other advantages like being able to instantiate those shapes in XAML. Here is one discussion from around the time WPF3D was implemented, http://blogs.msdn.com/b/danlehen/archive/2005/10/09/478923.aspx, with the follow-up example here http://blogs.msdn.com/b/danlehen/archive/2005/10/16/481597.aspx. But, with due respect, I think the ModelVisual3D-derived primitives approach is problematic, and in particular it makes something like the RhinoDemo very clumsy to implement.
-Govert
objo wrote at 2012-03-14 00:32:
That's a great explanation from Govert. Thanks!
There should be no difference between using a Viewport3D and a HelixViewport3D, the same limitations apply. Use ModelVisual3D elements (not the Visual3D types in Helix toolkit) if you want to share the Model3D between different visual elements. And remember to Freeze() the Model3Ds.
The RhinoDemo was mostly an experiment to create a user interface that looked close to the original, and define as much as possible of the user interface in XAML.
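A small sketch of the pattern described above (viewportFront and viewportBack are assumed names for two HelixViewport3D instances, each with its own camera): one frozen Model3D, displayed through a separate ModelVisual3D in each viewport.
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Build one Model3D and freeze it...
var builder = new MeshBuilder();
builder.AddBox(new Point3D(0, 0, 0), 4, 2, 1);
var boxModel = new GeometryModel3D(builder.ToMesh(), new DiffuseMaterial(Brushes.SteelBlue));
boxModel.Freeze();

// ...then display it through a separate ModelVisual3D in each viewport.
// A Visual3D can live in only one visual tree, but a frozen Model3D can be shared.
viewportFront.Children.Add(new ModelVisual3D { Content = boxModel });
viewportBack.Children.Add(new ModelVisual3D { Content = boxModel });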
HDNguyen wrote at 2012-03-14 13:03:
Thanks a lot, govert and objo.
Govert's explanation is great. As I understand it, in short: since a Viewport3D can have only one camera, when we want several views from different camera positions we must create the same number of Viewport3Ds, so the problem is how to synchronize them.
What do you think about this strategy: in the RhinoDemo sample I would grab and clone the children of the main Viewport3D, say the top right window, and add them as children to the Viewport3Ds of the other windows. Another problem is to override the method OnVisualChildrenChanged(DependencyObject visualAdded, DependencyObject visualRemoved) to keep them synchronized.
Waiting to hear your opinions.
Thanks