Kubla Central

Performance Issues
Good morning all, 

I'm having some performance issues with Kubla Cubed, but not the usual sort. It's not that the program is using too many resources on a limited machine; rather, it doesn't seem to be using enough. I've got a pretty impressive laptop, and it's handled everything I've ever thrown at it except Kubla.

Laptop specs: 
Dell Vostro 15 7510
Processor: 11th Gen Intel® Core™ i7-11800H @ 2.30 GHz (8 cores, 16 logical processors)
RAM: 16 GB
Graphics: integrated graphics + NVIDIA GeForce RTX 3050 Ti Laptop (Kubla is forced to the standalone GPU)

I get that the RAM could be a little better, but here's the thing: when I run Kubla, nothing gets maxed out. If I close everything else, processor usage drops to around 15% and rarely spikes above 30%. RAM usage stays around 12 GB, and the GPU doesn't go nuts either. Yet it can take 30 minutes to render the existing contours on a single sheet, and it's always run this slow. What gives? Any help would be greatly appreciated.

The attached screenshot shows my Task Manager Performance tab after Kubla has been working away for ~20 minutes since I clicked the "contours" tab to edit the existing contours on one sheet. Nothing else is running beyond normal background processes.
(02-24-2023, 03:41 PM)MadScientist67 Wrote: [ -> ]

Hi MadScientist67

Thanks for posting your stats; they're very useful for getting an idea of what is going on. We are aware of the performance issues with large datasets, especially in the contour and break-line viewer. To cut a long story short, triangulation-based volume calculations are arguably the most accurate, but they aren't the most performant because they use all of the data. As a result, the program was not originally designed for large datasets, and the importer/editor struggles with more than ~40,000 points even on the fastest machines. However, as the program has grown in popularity we recognize the need to handle larger datasets, and there are a number of performance enhancements on the roadmap.
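
If you want a rough way to check whether a particular drawing is likely to hit that limit, you can count the polyline vertices before importing. Here's a small sketch (not a Kubla Cubed feature) that assumes the contours are in a DXF file and uses the third-party ezdxf library; the filename is just a placeholder:

```python
# Rough vertex count for contour polylines in a DXF, to gauge dataset size
# before importing into Kubla Cubed. Requires: pip install ezdxf
# "site_contours.dxf" is a placeholder filename.
import ezdxf

doc = ezdxf.readfile("site_contours.dxf")
msp = doc.modelspace()

total_points = 0
for entity in msp.query("LWPOLYLINE POLYLINE"):
    if entity.dxftype() == "LWPOLYLINE":
        total_points += len(entity)            # vertex count of a lightweight polyline
    else:
        total_points += len(entity.vertices)   # vertex count of a classic polyline

print(f"Contour vertices: {total_points}")
if total_points > 40_000:
    print("More than ~40,000 points; expect the contour editor to be slow.")
```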

In this scenario, I think the program is doing a single-threaded task, so it mainly uses just one core of your CPU (8 physical cores, 16 logical processors). Most of the laptop's resources simply can't be brought to bear on that task, which is why overall usage looks so low. We looked at this quite recently to see whether there were any "quick wins". We found one possibility, but the real solution is a ground-up rewrite of this editor, which we plan to do. It is currently built on Microsoft WPF, which has many great UI features but also some weak areas, graphics performance among them. More performant technologies have emerged since Kubla Cubed was launched, and Kubla Cubed will be upgraded to them.
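
To put numbers on that: with 16 logical processors, one fully busy thread shows up as only about 100/16 ≈ 6% of total CPU, and with normal background activity on top that lands close to the ~15% you are seeing. A trivial illustration (ordinary Python, nothing to do with Kubla's internals) that you can run while watching Task Manager:

```python
# Demonstrates why a single-threaded task looks "idle" in Task Manager:
# one busy thread can only saturate one logical processor.
import multiprocessing
import time

def busy_loop(seconds: float) -> None:
    """Spin on one core for the given number of seconds."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

if __name__ == "__main__":
    logical = multiprocessing.cpu_count()
    print(f"Logical processors: {logical}")
    print(f"Max total CPU one thread can use: ~{100 / logical:.1f}%")
    busy_loop(10.0)  # overall CPU stays low while one core is pegged at 100%
```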

For now, the best way forward is to reduce the size of the dataset if possible. A large amount of data will also result in long calculation times; the calculation engine is far better optimized than the contour importer, but it will still be taxed by large datasets.

I assume you imported these contour lines. In Kubla Cubed 2023 you will be able to crop the contours at the import stage, which can help considerably. Often the contours extend far beyond the project's extents and slow down the editor unnecessarily.
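
If you want to trim the data yourself before the 2023 release, one option is to clip the contour polylines to the project extents in a small script before importing. This is only a sketch of the idea, not the Kubla Cubed crop feature: it assumes each contour is available as a list of (x, y, z) vertices, uses the shapely library, and the extents values are made up.

```python
# Clip contour polylines to a rectangular project extent before import.
# Requires: pip install shapely
# The extent coordinates below are made-up examples.
from shapely.geometry import LineString, box

project_extents = box(1000.0, 2000.0, 1500.0, 2600.0)  # min_x, min_y, max_x, max_y

def crop_contour(points, extents):
    """Clip one (x, y, z) polyline to the extents; may return several pieces.

    Each contour sits at a single elevation, so the z value can simply be
    re-attached to the clipped pieces afterwards.
    """
    line = LineString([(x, y) for x, y, _z in points])
    clipped = line.intersection(extents)
    if clipped.is_empty:
        return []
    parts = getattr(clipped, "geoms", [clipped])  # LineString or MultiLineString
    return [list(part.coords) for part in parts]
```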

Another thing you can do in the program is to simplify the contours by reducing the number of points used to define them, via the "Line Reduction" wizard. As a last resort, you can export the existing surface as gridded data at a reasonable resolution and import it back. One tactic is to work with this lower-resolution surface and then, when it's time for the final calculations, import the high-density data again.
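
If you'd rather do the reduction outside the program, Douglas-Peucker style simplification gives a similar point-thinning effect. Here is a sketch using the shapely library; the tolerance value is purely illustrative and should be kept small relative to your contour interval (drawing units):

```python
# Thin out nearly-redundant vertices on a contour polyline (Douglas-Peucker).
# Requires: pip install shapely
from shapely.geometry import LineString

def simplify_contour(points, tolerance=0.05):
    """Reduce vertex count while keeping the line within `tolerance` of the original."""
    line = LineString([(x, y) for x, y, _z in points])
    reduced = line.simplify(tolerance, preserve_topology=False)
    return list(reduced.coords)

# Example: a contour with many nearly-collinear points collapses to its endpoints.
contour = [(i * 0.1, 10.0 + 0.001 * (i % 2), 25.0) for i in range(500)]
slim = simplify_contour(contour)
print(f"{len(contour)} points reduced to {len(slim)}")
```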

So, in conclusion, there is nothing wrong with your settings or your laptop; the issue is a poorly performing graphics routine that we plan to replace as part of the roadmap. Your NVIDIA GeForce RTX 3050 Ti will come in really handy for Kubla Cubed 2023, which has more advanced 3D rendering. Feel free to send us the project so we can take a look and perhaps give some tips on how to simplify the existing data.
(02-28-2023, 07:28 PM)Ted Woods Wrote: [ -> ]
Thanks, Ted. Glad to know that it's not something that I've caused. I did manage to figure out how to use the "lines are dashed" feature when importing the contours to reduce the number of lines in the dataset. That did a lot to help with performance. I'm excited for Kubla Cubed 2023.