Multithreading

Hi, I am looking for a way to use multithreading with the SOLIDWORKS API. The common answer is that it can’t be done, but I did find a 15-year-old forum post with a comment from a SW partner saying there is a way, linking to a document that describes how to do it. However, the link inside it (presumably to a PDF named DOC-1604, “How to make calls crossing thread boundaries in SolidWorks C# add-in”) is dead. Does anyone have this document saved somewhere, perhaps?

My use case is running body operations via the SOLIDWORKS API (move, cut, make some measurements, save them, and repeat) hundreds or thousands of times. None of them modify the document or need anything saved to it. So I had another idea: if my app is an external application, it could start up multiple instances of SW in the background, each managed from its own thread, have each of them open the same document (read-only), and run these body operations independently of one another, in parallel. This would be fairly difficult to implement, and I’m not even sure it is technically possible with SW. Perhaps someone here has already tried it and could share any insights?
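A minimal sketch of that orchestration layer, with a pure-Python stub in place of the real per-instance COM work (the function body, field names, and numbers are all hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_scenario(scenario):
    """Stand-in for the work one background SW instance would do:
    open the document read-only, cut at the trial waterline, and
    measure the submerged volume. Here it is a pure-Python stub."""
    # A real worker would own its COM session, created with something
    # like win32com.client.DispatchEx("SldWorks.Application").
    return scenario["load_t"] * 0.975  # fake "result" for the demo

# Hypothetical user-defined scenarios, each fully independent.
scenarios = [{"id": i, "load_t": 100.0 + 10.0 * i} for i in range(8)]

# One worker thread per background SW instance; since the scenarios
# share no state, they can run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate_scenario, scenarios))
```

With a real SW session per worker, the parallelism comes from the separate SW processes doing the heavy kernel work, not from the Python threads themselves, which only dispatch calls and collect results.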

Running multiple SW instances could work.

It won’t be an “add-in”. It will be a program that runs and manages each SW instance.

It needs to make sure there is enough memory to start a new SW instance.


Yeah, I know. Although, now that I think about it, I don’t see why an add-in running inside SW couldn’t launch additional instances of SW via System.Activator.CreateInstance and include some logic to detect whether it was launched as the main (driving) or a secondary (driven) instance. In any case, it still feels dirty, and I wonder if there is a better way.

I suspect that firing up multiple instances of SW has little chance of speeding things up much. Can you explain what you are trying to accomplish? Perhaps body operations aren’t the only way to do what you want.

It’s an old project of mine: ship buoyancy simulation software. It works by cutting the model in half to measure the volume of the “underwater” part, comparing it with the target (the ship’s weight), and adjusting the cutting surface (the waterline) up or down until the two numbers match. It is a lot more complicated than I described here (there are additional forces, constraints, etc.), but in essence that’s how it works. Think of the classic SW Design Study that optimizes the height of the water in a bottle to reach a target volume. Although the algorithm I have is very optimized, it still needs 10-15 guesses on a fresh (untrained) model, each of which takes at least a few seconds, and if the user needs to calculate hundreds of scenarios, that takes a very long time, especially with complex bodies. To my knowledge, there is no other way with the SW API to measure how much volume of a solid body lies below and above a certain plane except by cutting it. So my idea is to “outsource” these cut-and-measure operations (or at least the different user-defined scenarios) to different threads running in parallel, to speed things up.
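The inner loop can be illustrated with a self-contained sketch: an idealized V-shaped hull with a closed-form submerged volume stands in for the SOLIDWORKS cut-and-measure call, and bisection moves the waterline until the displaced mass matches the target (all dimensions and names here are illustrative, not from the actual program):

```python
def submerged_volume(draft, L=30.0, B=6.0, H=4.0):
    """Closed-form volume below the waterline of an idealized V-shaped
    hull (length L, beam B, depth H, all in metres). This stands in
    for the expensive cut-and-measure call into SOLIDWORKS."""
    draft = min(max(draft, 0.0), H)
    return L * B * draft * draft / (2.0 * H)

def solve_draft(target_mass_kg, rho=1025.0, tol=1e-6, H=4.0):
    """Bisect on the waterline height until the displaced mass
    (density * submerged volume) matches the target weight."""
    lo, hi = 0.0, H
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        displaced = rho * submerged_volume(mid)
        if abs(displaced - target_mass_kg) < tol:
            break
        if displaced < target_mass_kg:
            lo = mid  # floats too high: push the waterline down
        else:
            hi = mid  # sits too deep: pull the waterline up
    return mid

draft = solve_draft(92_250.0)  # equilibrium draft for a 92.25 t load
```

Each call to `submerged_volume` corresponds to one of the expensive guesses; in the real program that is a body cut plus a mass-properties query taking seconds, which is why the number of guesses dominates the run time.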

Since the hull shape doesn’t change, you can build a table and reuse the numbers.

Start the guess at a much closer level.

I’m guessing you use a “binary search”.

If you build a table for each inch of hull, the guess will start within an inch of the solution.
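That table-plus-interpolation idea could look something like this (a toy monotone volume function stands in for the real measurement; the inch step, names, and numbers are assumptions for illustration):

```python
import bisect

def build_table(volume_fn, max_draft, step=0.0254):
    """Precompute submerged volume at every 'inch' (0.0254 m) of draft,
    e.g. in the background while the program is idle."""
    drafts, vols = [], []
    d = 0.0
    while d <= max_draft:
        drafts.append(d)
        vols.append(volume_fn(d))  # one expensive cut-and-measure each
        d += step
    return drafts, vols

def initial_guess(target_vol, drafts, vols):
    """Linear interpolation in the table gives a starting draft that is
    already within one table step of the solution."""
    i = bisect.bisect_left(vols, target_vol)  # vols is monotone
    if i == 0:
        return drafts[0]
    if i >= len(vols):
        return drafts[-1]
    d0, d1 = drafts[i - 1], drafts[i]
    v0, v1 = vols[i - 1], vols[i]
    return d0 + (d1 - d0) * (target_vol - v0) / (v1 - v0)

# Toy quadratic volume function standing in for the real measurement.
drafts, vols = build_table(lambda d: 22.5 * d * d, max_draft=4.0)
guess = initial_guess(90.0, drafts, vols)  # true answer is 2.0 m
```

Seeding the search this way trades cheap table lookups for expensive guesses: instead of 10-15 cuts, the solver starts close enough that only a few refinement cuts are needed.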

How much CPU does your program use now?

Single body model or multibody?

If you haven’t looked into using IModeler to work with temporary bodies, I highly recommend doing so, especially before looking at multiple instances of SW. The Boolean operations there are noticeably quicker than the built-in features and should allow you to iterate much more quickly.

Yes, I am already doing both of these things. Whenever a cut is performed, it is saved in a data table for other cuts to use, either by interpolating or extrapolating from existing samples. This does reduce solve time a lot the longer the user keeps testing the same model, but it can be tricky when the model geometry changes substantially. And yes, whenever the program is idle, it can be set to perform “probing” in the background, building a table so that real simulations go faster when the user needs them. Nevertheless, it still takes a lot of time with complex models or after big changes to the geometry.

Single core, close to 100%.

Both 🙂

Have been doing this since I first created the program. Yes, it is a lot faster than features, but still slow when dealing with complex bodies containing hundreds of faces and non-analytical surfaces.


Updates after a model change won’t get any faster, unless you can split the work across multiple threads.

With 100% on a single core, you could get some gains.

The way model changes are currently handled: when the model is changed, the program still tries to use the old data points to interpolate/extrapolate for new cuts. If new data points are close enough in position to old ones but different in value, the old ones are overwritten. If they are completely different (meaning the model has changed too much for the old data to be useful), all the old data is discarded and rebuilt from scratch to avoid confusing the algorithm.
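That update policy might be sketched like this (the thresholds and function names below are invented for illustration, not taken from the actual program):

```python
# Assumed thresholds, not the real program's values:
POSITION_TOL = 0.01  # drafts this close (m) count as "same position"
VALUE_TOL = 0.05     # relative deviation that marks the cache stale

def update_cache(cache, draft, volume):
    """cache: list of (draft, volume) samples from previous cuts.
    Overwrite a stale neighbour, or flush everything if the new
    measurement disagrees too much with the old data."""
    for i, (d, v) in enumerate(cache):
        if abs(d - draft) < POSITION_TOL:
            if abs(v - volume) / max(abs(volume), 1e-12) > VALUE_TOL:
                # Same position, very different value: the geometry has
                # changed too much, so rebuild the table from scratch.
                cache.clear()
                cache.append((draft, volume))
                return "rebuilt"
            cache[i] = (draft, volume)  # overwrite the stale neighbour
            return "overwritten"
    cache.append((draft, volume))
    return "added"

cache = [(1.0, 22.5), (2.0, 90.0)]
status = update_cache(cache, 2.0001, 95.0)  # ~5.6% off: flushes cache
```

A single relative threshold is the simplest version of the "close enough in position but different in value" test; the real program may well use a smarter comparison across several samples.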

But yeah, I feel like I’ve done all the optimizations possible on a single core. Maybe it can be improved a little, but not significantly. Multithreading would help a lot, though… if it can be done. Which brings me back to the original question in this topic 🙂

Have you looked at FeatureManager::PreIntersect2? It will give you an array of bodies resulting from the intersection, and you can use a plane as the intersecting tool to effectively split the model. The bodies in the array are temporary bodies, but you can select them and get mass properties, all without creating any features.

A quick and dirty test using a single body (an inverted cone, 1m in dia, 2m long) doing 20 pre-intersect operations (0.1 m apart) and calculating the mass and volume of each resulting body took about 18 secs.

I think I have tried that one and found it to be a bit slower than what I’m currently using, IBody2::Operations2, which works on temporary bodies. I think both are based on the same underlying code (IModeler), but FeatureManager adds extra overhead on top of the same operations. Or maybe it’s because PreIntersect2 requires selecting the cutting plane in model space rather than taking a pointer to the plane in the call… and we all know how slow selecting things in SW is. Unless I’m misremembering something.

You mentioned your program is a stand-alone. I recall reading a thread on the old SW forum, and possibly a blog article, about the performance differences between in-process and stand-alone. I had to do a little searching to check myself and the wording I used. I think I found the blog, or one like it, here: https://blog.codestack.net/solidworks-stand-alone-performance And another similar article: https://www.codestack.net/solidworks-api/getting-started/inter-process-communication/invoke-add-in-functions/in-process-invoking/
Both are from Xarial author @artem Taturevych.

Also a cadoverflow thread where @AmenJlili categorically answered this age-old multithreading question: https://www.cadoverflow.com/t/can-i-use-multi-threading-with-solidworks-api/287/2 but also suggested your idea of running multiple instances of SW.

Also, @peterbrinkhuis has a helpful list of ways to boost API speed, which has been useful to me: https://cadbooster.com/improve-solidworks-macro-speed-10x/

I hope it’s ok to drop links and mention these helpful folks.

It’s over my head, but my layman’s grasp is that calling SW APIs from an add-in will usually be faster than calling from a stand-alone process. While increased efficiency is not the same as asynchronous execution, it can make a big difference.


Thank you, these are great links. I’ve actually never seen the first two, although I knew the general point (add-ins are faster than stand-alones).

However, there is a catch. There are two components to how much time an API call takes: the time to make the call, and the time for SW to complete the operation behind the call. The first depends on what is making the call (add-in, stand-alone, or macro); the second is done by SW internally. Making thousands of “lightweight” calls will show a big difference in performance between an add-in and a stand-alone. Making a few “heavy” calls (such as cutting bodies) will show minimal difference. I did test this: the add-in was faster, but only by a small margin, not really appreciable by the user. I guess my case is just a bit different in this regard. The most consequential bottleneck is still how fast SW can perform these heavy body operations, and that comes down to the Parasolid kernel and its single-threaded nature.
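The two-component point can be made concrete with a toy cost model (the overhead and operation times below are made-up illustrative numbers, not measurements):

```python
def total_time(n_calls, overhead_ms, op_ms):
    """Total = per-call marshalling overhead + time SW spends internally."""
    return n_calls * (overhead_ms + op_ms)

# Assumed per-call overheads: in-process (add-in) vs out-of-process.
ADDIN_MS, STANDALONE_MS = 0.05, 1.0

# 100,000 lightweight calls (e.g. reading properties at ~0.01 ms each):
light_addin = total_time(100_000, ADDIN_MS, 0.01)
light_standalone = total_time(100_000, STANDALONE_MS, 0.01)

# 15 heavy calls (e.g. body cuts at ~3 s each):
heavy_addin = total_time(15, ADDIN_MS, 3000.0)
heavy_standalone = total_time(15, STANDALONE_MS, 3000.0)
```

With these numbers the lightweight workload runs over an order of magnitude faster in-process, while the heavy workload differs by a fraction of a percent, which matches the observation that moving to an add-in barely helps when the Parasolid operations dominate.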

I still have a feeling that something can be done about multithreading. In the link I posted in my first message, one of the SW partners said 15 years ago that multithreading can be achieved, and gave a link to a document showing how to do it, but the link is dead and the document seems to be lost. I have requested it from SW Support… maybe they can dig it up.

Yep, that makes sense. 50% reduction in 10% of the problem isn’t a big gain. I was a bit afraid of that.

I’m curious: are you running this, then editing the hull, then running it again? Pretty much a nested iterative process, with the inner iteration being what you’re automating to calculate the draft, and manually editing the model to optimize as the outer iteration? You said the process is much more complicated than you described, so I might be oversimplifying.

Edit: you mentioned that with your optimized model it takes about 10-15 seconds to get convergence of the displacement mass, but the user may then need to do that hundreds of times. That’s what I was curious about: is there user input between each of those hundreds of runs, or are they what can be run in parallel?