Solidworks Mouse Gestures - Implemented Correctly?

So I thought I’d pick up this topic from another thread, mainly because the way Mouse Gestures have been implemented in Solidworks has, IMO, been done incorrectly. This mainly stems from using a wide range of 3D softwares that do have mouse gestures, which may require some explanation as to why they’ve been implemented in a “bad” way and what needs to be done to improve them. They also need to become much more tightly integrated into the overall environment, not just reserved for initial function only (I’ll explain further below).

As a whole, mouse gestures are meant to make it so that you can stay on screen more and not have to go over to a menu, toolbar, or feature/property manager. The more that can be customized to your process/liking the better, and this is an area where I find most CAD softwares lacking in comparison to their DCC counterparts.

First: the activation of the function you want should NOT happen until you release the mouse button over it. There have been so many times that slightly brushing a function has activated the wrong thing, in very large part because the inner circle/area you’re given to move in is quite small.

Second: Property manager functionality - Let’s say you want to do a Revolve feature, which can be added to the mouse gesture wheel, but then you’re forced to input everything into the property manager. Yes, the property manager can be undocked and placed anywhere on the screen, but this really defeats the purpose of using the whole screen rather than blocking it.

Third: Context tiers… we can have custom mouse gestures based on whether we’re in a sketch, part, assembly, or drawing. But this NEEDS to go so much further. Let’s say you’re in a sketch and want to extrude cut; imagine the options from the property manager placed as a second tier of the mouse gesture… this is a VERY crude video showing an example of how the UI/UX would look… https://youtu.be/dLw8GXQ3Ok4.

The short of it is that the UI/UX of the software needs to be much more customizable. Sure, I have a ton of hotkeys and the like, but this falls far short of how far I’d really want to customize the way I operate within the software… thoughts?

After very limited time using SE, I agree 100%.

To me, gestures are a very inefficient UI method. The software has to collect a lot of data in the first place, and then a motion is far more complex than other types of input. Sometimes I just watch people using different UIs, for example a smart phone. Gestures are the one UI type that people tend to repeat frequently because the device/software misunderstood the input. Gestures with a mouse are even more difficult to get right than with your finger. I tend to use hotkeys or RMB menus instead, but these still require both customization and memorization.

One of my big complaints about the SW UI has been where they leave the focus in the PropertyManager. Sometimes the keyboard focus is not in a place where you can key in a primary dimension, or the cursor focus is in a place that costs you an extra click or motion, or causes you to click something you never want to click. Sometimes they get it right, and sometimes they don’t. I’m not sure if the inconsistency is due to different project managers/quality people on projects who aren’t really paying attention to that, or the fact that they just get lucky now and then.

To me, there are 3 different goals for setting up an expert-level customizable interface: optimize graphics area, optimize mouse travel/moving hands between devices, or optimize mouse clicks/keyboard strokes. You can’t optimize all 3 simultaneously. For people who don’t really know the interface, or aren’t all-day-every-day kind of users, you also have to figure in “discoverability” - the ability to find what you’re looking for (assuming you know roughly what you’re looking for).

The danger of course is academic pedantry, and measuring things that are or become trivial over time. For example, when typing words, most people with a good typing proficiency don’t have to look at the keyboard to type, but when you’re typing just individual letters or numbers, even on 10-key, you have to look at least to initially position your fingers, and maybe for some less commonly used keys. Keystrokes as a metric can range from highly distracting to nearly irrelevant. Moving the focus of your eyes from monitor to keyboard might be a better metric. Moving your hand from the spaceball to the keyboard can become a highly muscle memory sort of action. You can also have a lot of keyboard functions customized to the spaceball device.

The Solid Edge interface is very efficient for a couple of reasons. First, they have removed all the text from it, but this assumes that everyone has the UI memorized. Second, their selection list is done visually with color rather than as a text list of selected items. The problem with all of this is usually for newcomers. If you don’t understand the color scheme or if you need tooltips instead of the command labels, the interface can waste a lot of your time and feel frustrating. Plus, the SE interface tends to lead you down a particular path, while SW users are accustomed to a less structured workflow for commands. Also, the SE interface assumes that you want to keep doing whatever you’re doing now. So if you’re making an extrude, SE will assume that when you’re done with that extrude you’ll want to make another one, and it actually takes an additional step to do something different.

Which is to say that one man’s efficiency is another man’s clumsy trip point. A single interface is easy to learn, but cannot be optimized. A customizable interface is a different experience for each user.

@Matt I’m going to put you in the Super OG category of Solidworks user… :slight_smile: You’ve got it set up in a way that works for you. But I think that’s part of what the post is about: the overall UX/UI experience is still being pushed at us, and there are many areas of inconsistency throughout the software. For example, WHY can we still double-click to get out of a 2D sketch but not a 3D one… this is a simple one but drives me CRAZY!!! I get that the solver is different, but this seems like an easy thing to code. So since they refuse to do this very simple thing after numerous times of making the suggestion, I’ve had to add the green check/red X to my shortcut bar, or there’s also a hotkey to exit the 3D sketch… again, if they’re not going to do it, then at least give us users the ability to.

So, great example… Modo, which I use a lot. Not that you would ever want or need to use this software, but it’s a great example of at least allowing users to be in control of the UI/UX as they see fit.

Here’s a great vid showing a quick example of a very simple way to add property manager adjustments to mouse and/or keyboard combinations - https://www.youtube.com/watch?v=3exuFW_3kus.

Now if you really want to talk about staying on screen while modeling - sure, this is more of a direct modeling system than a parametric one, but I can say that I’ve got Modo leveraged in such a way that my modeling is just as exact as in Solidworks. https://www.youtube.com/watch?v=3exuFW_3kus

I will say that a good bit of my inefficiency comes from me. I haven’t taken the time to customize as much of the UI as I should, and even a good bit of what I have customized I forget to use. As someone said in a post last week, muscle memory can be a hard thing to break.

I gave up on the spaceball early on because my left hand was on the keyboard as much as it was on the spaceball. In hindsight I could have shelled out for the spaceball with more buttons and been more efficient in the long run.

So I don’t really have much room to complain about the SW shortcomings, because I could get most of the way there by changing me.

SW gestures are elementary. The UI is still keyboard heavy. Commands are spread over a few different menus - hope you remember and pick the right one.
A 15-button mouse helps keep the keyboard away.

IV is context sensitive. Right click brings up the gesture wheel with another drop-down menu. If you click and drag, only the wheel shows up.
And you can cancel the wheel without selecting any command. Take that, SolidSpinningWheel :stuck_out_tongue:

SW should work great if you can find and setup an old ACAD tablet.

I’ll state up front that I don’t use many mouse gestures. Mostly I just use them for orienting a model or assembly to the top view. I have arrow keys set up to rotate my model 90 degrees, so I just get it to one view and then rotate manually. Why? Because I’ve done this for 20 years. Before mouse gestures I just clicked a default view and manually rotated it. This way I never had to know which orientation my model was in; sometimes people didn’t model the front on the front plane.

I will say I somewhat disagree that it shouldn’t activate until you release over the command you want. I know I can just move my mouse up with a gesture and it will go to the top view. I don’t need to look at the mouse gesture wheel that pops up. Granted, I only ever use the same gesture, but I think when you’ve got them memorized you don’t need, and shouldn’t be required, to move your eyes to double-check what you want to do. You just know where to move the mouse and it works.

Your suggestion would in theory slow you down, because you’d have to make sure you were in the target area to select the gesture. This would require the user to look at that area and make sure the mouse is in the right spot, which can slow things down. I think the way it is now is the fastest. Is it the best? Maybe not for everybody.

This really depends on how the user is using the mouse gesture.
If the user only uses the mouse gesture to change views (e.g. mouse down = iso view), then requiring the button release for activation slows down the entire process…

If the user is using 16 mouse gestures to execute features (e.g. extrude), then requiring the button release for activation makes sense…



This should really be a user preference option, IMO…
But at this point, I am too afraid to ask SW to add any new feature and break other old features /s
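The release-versus-immediate tradeoff being debated here can be sketched in a few lines. This is a purely hypothetical mock-up - the command names, dead-zone radius, and event format are all invented for illustration, not SW’s actual behavior - showing an eight-slice wheel where an `activate_on_release` preference toggles between firing the instant the pointer crosses a slice and waiting until the button is let go:

```python
import math

# Hypothetical gesture wheel (names and geometry invented for illustration).
# Slices are indexed counter-clockwise from East: E, NE, N, NW, W, SW, S, SE.
WHEEL = ["Right", "Iso", "Top", "Sketch", "Left", "Revolve", "Front", "Extrude"]
DEAD_ZONE = 30.0  # pixels; inside this radius no slice is selected

def slice_for(dx, dy):
    """Map a mouse offset from the press point to a wheel slice, or None."""
    if math.hypot(dx, dy) < DEAD_ZONE:
        return None
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    return WHEEL[int(((angle + 22.5) % 360) // 45)]

def gesture(events, activate_on_release=True):
    """Run a list of ('move'|'release', dx, dy) events through the wheel.

    activate_on_release=False fires the first slice the pointer touches
    (the mis-fire-prone behavior complained about above); True fires
    only at the point where the button is actually let go.
    """
    fired = None
    for kind, dx, dy in events:
        hit = slice_for(dx, dy)
        if not activate_on_release and hit is not None:
            return hit   # fires as soon as any slice is touched
        if kind == "release":
            fired = hit  # fires only where the button is released
    return fired
```

In immediate mode, brushing a neighboring slice on the way up fires the wrong command - exactly the complaint from the original post - while release mode lets the user keep moving and recover. Either way, it is a one-flag preference, which is the point.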

I prefer the S key.

I think mouse gestures are more for a chill moment than for saving time. At a past job of mine where going fast was needed, I had every main command on a hotkey and never bothered with the gestures.

Although it would be nice to actually be able to set up every hotkey like the S key “box” if you want to. Then improve that to allow it to open around your mouse. I hope to see that before the machines come.

I’m one of those weirdos who didn’t start with AutoCAD, and who doesn’t find hot keys intuitive, especially when it means having to take my hand off my SpaceMousePro. I use mouse gestures extensively.

image.png
Yes, I occasionally hit the wrong one. Notice how “Escape” is in the same location in every environment?

A gesture is holding the right click and moving the mouse.
In IV the same menu pops up on right click, so you can use it like a popup menu.
It’s context sensitive, so a different menu with relevant commands pops up - similar to the “S” menu in SW.
It takes some time to get used to and customize it.
First, put the same commands in the same locations.
Track your own command usage and put the most used commands on the gesture wheel.

The other thing with SW: some commands still don’t have icons and are only accessible in the menus.
Like one of your car windows not being powered.

Isn’t the whole point of AI (aka machine learning) for the software to learn what functions you use, and in what order, and then present what it has learned as the next command? Why are we still customizing our environments manually?
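For what it’s worth, the simplest version of that idea doesn’t need much “AI” at all: a bigram model of your own command history already guesses the likely next command. A hypothetical sketch (class and command names invented for illustration, not any real SW API):

```python
from collections import Counter, defaultdict

class CommandPredictor:
    """Toy bigram model: counts which command tends to follow which."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)  # prev command -> Counter of successors
        self.last = None

    def record(self, command):
        """Log a command as it is used, updating the bigram counts."""
        if self.last is not None:
            self.bigrams[self.last][command] += 1
        self.last = command

    def suggest(self, command, n=3):
        """Return up to n commands most often used right after `command`."""
        return [cmd for cmd, _ in self.bigrams[command].most_common(n)]

# Feed it an invented session history...
history = ["Sketch", "Extrude", "Fillet", "Sketch", "Extrude", "Shell",
           "Sketch", "Extrude", "Fillet"]
p = CommandPredictor()
for cmd in history:
    p.record(cmd)

print(p.suggest("Sketch"))  # → ['Extrude']
```

Of course, whether you’d *want* the UI acting on those guesses is exactly what the rest of this thread argues about.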

mpaul

your suggestion would in theory slow you down because you’d have to make sure you were in the target area to select the gesture. this would require the user to have to look to that area and make sure the mouse is in the right spot. this can slow things down. I think the way it is is the fastest. is it the best? maybe not for everybody.

If you take a look at the video in my initial post, where I showed how mouse gestures can be enhanced to be much more than just the initial ring of commands, then letting go of the button to confirm, IMO, makes sense. With what you’ve described - just shooting for the top view - I think how it is now is perfect for you. But you can see in the video that I’m adding functionality from the property manager into the second ring; the idea is that staying on screen closer to your model means less travel away, which, in the end, is faster.

If nothing else, this proves my point even further: the whole UI/UX aspect of Solidworks needs to be adjustable on a per-user basis.

I think there’s a battle here between UI consistency and evolution. The longer I use something, the more sensitive I get about things moving around unexpectedly. Habit and muscle memory are still #1 in UI, IMO. Think of it this way: how many people would like an “AI” keyboard that remapped the letters based on how often you use them? This would be a simple Arduino project - start with QWERTY, then “improve” the layout by moving the most frequently pressed keys to the home row. Or imagine if someone had studied Buddy Rich playing and decided to reposition the drum set for him.
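The thought experiment really is that trivial to build, which is part of what makes it alarming. A hypothetical sketch of the frequency-based remap (the home-row target and sample input are made up):

```python
from collections import Counter

# Invented target positions: the nine home-row keys on a QWERTY board.
HOME_ROW = list("asdfghjkl")

def remap(keystrokes):
    """Map the most frequently typed letters onto home-row positions.

    Counts letter frequency in the observed keystrokes, then assigns
    the top letters to home-row keys in descending order of use.
    """
    freq = Counter(k for k in keystrokes if k.isalpha())
    return {letter: pos
            for pos, (letter, _) in zip(HOME_ROW, freq.most_common(len(HOME_ROW)))}
```

And that is exactly why it would be maddening in practice: the mapping shifts every time your usage does, so muscle memory never gets to form.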

Nah, last thing I want is the software moving things around on the UI for me.

Drop-downs would be a nice place to start. SE has the option (off by default for some reason) to stick with the last selected drop-down option across the UI. SW has this half implemented, depending on which toolbar you’re using. What would be kinda neat is if it would just keep the most used one on top, so in the rare cases I go outside my normal workflow the most common is still on top. But don’t get smart and reorder the whole drop-down by rank of popularity - that would suck.

My last reason is even if the software has control of the UI they’ll still probably reset it on version update and then take a couple of months for it to relearn and stop moving things around.

With all respect, I think you’ve misunderstood what a mouse gesture is. They are not on-screen buttons; they are gestures.
Gestures started with browsers like Chrome, Firefox & Opera. You set a function to a gesture, press and hold a mouse button, and by moving it in a direction you fire the function. So they are gestures (motions), not buttons. You have to remember all the functions you’ve set and use them when you need them. What Solidworks offers is, as its name suggests, a gesture.

What you see when you right click is just a reminder (in case you have forgotten) of what has been set for each gesture. So expecting a pause after a gesture for the release of the button just slows me down. If you remember the direction of the functions you’ve set, why is a pause necessary? If you don’t remember, and need to click and look for what you want and select it, gestures are not for you. You may want to go for the S key or other shortcuts instead.

Yes, the whole point of AI is for the computer to design everything. We won’t have jobs soon UU
Better start learning how to program AI before AI learns how to program itself.

No, I don’t want AI to tell me I should revolve when I want to extrude.
“But you revolved 51% of the time in the last hour…” o[

The NX UI has the same behavior (remembering the last function call in the menu). I cannot confirm it, but I would assume that NX and SE, both being owned by Siemens, share the same UI engines and design to keep costs down and make working between the two softwares easier.

The icon color change in 2016 drove me nuts.

I’m a one-hand-on-the-keyboard, one-on-the-mouse guy. My assembly mate and sketch constraint shortcuts have been the same since IV5. I also use the S key, but heavily modified - I hate flyouts.

At my last job I had to take a SW proficiency test. Fortunately I was able to load my SW settings, otherwise I’d have been really fumbling around.

This whole “Command Prediction Toolbar” goes counter to muscle memory…which is where real productivity comes from.

I get the idea behind it, but its constant moving of commands forces me to mentally hunt for the recognizable icon/text. It would be like a police officer reaching for the gun on his right hip only to find it’s been replaced by the flashlight that was on his left, because he uses the flashlight more.

I just need the software to give me options so I can set it up the way I work most efficiently. I still haven’t forgiven Microsoft for the ribbon, and now everyone forces you to use it. Kudos to SolidWorks for giving us the choice to use the command manager or not… and for leaving the menus intact even with it on.
image.png