I have had a quick play with this patch and some bits seem to be working much better.
I definitely can't see any difference between pressing OK, EXIT, or letting the confirmation popup time out, so we can scratch that one off the list as "inexplicably solved".
Haven't tried playing with this feature yet.
The mount status did appear to be consistent between the two screens in all tests.
Will get to these ones too.
I didn't spot any inconsistencies here, but didn't specifically test for it.
Didn't test these. They required a bit more under-the-hood checking, but I assume prl is able to verify them adequately already.
Working as expected. Un-expanding gave no prompt, but re-expanding prompted again, which is very handy for misclicks.
Yep, all good there, no matter how the popup was dismissed.
This one also seems to work nicely now.
Note that having same-named NFS and CIFS shares may or may not be good practice, but it is the default arrangement on QNap NAS devices, and I suspect the same is true of other consumer NAS devices.
I did find a new bug though, or at least one that has not been reported to date. It has to do with deleting failed mounts.
Say I have a number of mounts to a server that are all configured and working. Some are NFS and some are CIFS.
I then create a mount to a share, and save it, but I do not have permissions to access that share, so the mount fails.
It shows up correctly as failed, and all other mounts to that same server continue to work fine.
If I then delete that failed mount, one of the other working mounts, specifically the one listed immediately before the deleted failed mount, will itself fail.
If I open up the newly failed mount, and re-save it, it starts working again.
I didn't dive into the config files to work out what was going on, but it did seem that deleting a failed mount would somehow poison its preceding neighbour. It didn't seem to matter whether that preceding mount was CIFS or NFS; either type would fail.
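For what it's worth, here is a tiny, purely speculative Python sketch of the kind of off-by-one that could produce exactly that symptom (deleting entry N knocking out entry N-1). The mount names, the data structure, and the stray "index - 1" are all made up for illustration; this is not the add-on's actual code, just a guess at the shape of the bug.

```python
# Toy list of mount entries, as a plain ordered list (assumption).
mounts = [
    {"name": "nfs_media",  "type": "nfs",  "ok": True},
    {"name": "cifs_docs",  "type": "cifs", "ok": True},
    {"name": "cifs_bad",   "type": "cifs", "ok": False},  # the failed mount
    {"name": "nfs_backup", "type": "nfs",  "ok": True},
]

def delete_mount(mounts, index):
    """Remove the entry at `index`, then tear down what the code *thinks*
    is the deleted mount. A simple off-by-one (index - 1 where index was
    meant) makes the teardown hit the mount listed immediately before
    the deleted one instead."""
    removed = mounts.pop(index)
    victim = index - 1                 # wrong slot: the preceding neighbour
    if 0 <= victim < len(mounts):
        mounts[victim]["ok"] = False   # neighbour gets "poisoned"
    return removed

delete_mount(mounts, 2)                # delete the failed cifs_bad entry
for m in mounts:
    print(m["name"], "OK" if m["ok"] else "FAILED")
# cifs_docs, the mount listed just before the deleted one, now shows FAILED,
# which matches the behaviour I saw; re-saving it would effectively rewrite
# that slot and bring it back.
```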