Hey all. I am Foster Brereton, Principal Scientist for this UI effort. Suffice it to say, the article and this thread have had their impact on the people behind the software. We are aware we got a lot of things wrong. As the primary technical lead on the UI migration, a lot of the implementation details ultimately fall to me.
Two things I can tell you: the engineering team does care about Photoshop (I’ve been on the team more than 15 years for a reason) and this migration is far from over for us.
These sharp edges are acknowledged, and we are working on them. Some of them are already addressed.
I know this will be of little comfort to some. But to the rest, we are still here. If you have any questions I’ll do my best to answer them.
Why were these sharp edges not discovered in UAT? It seems hard to believe that the people who make Photoshop do not use Photoshop to the degree necessary to notice these regressions. How will you avoid these kinds of problems in future updates as your transition to a modern design continues?
In a word: they were. We do use Photoshop (though not to the level or extent of most users) and noticed the regressions. Shipping software is in perennial tension between getting it perfect and getting it out the door.
Going forward, we would like them fixed, too. Personally, my hope is that the message from user feedback like this is heard loud and clear, and that we respond appropriately.
I'm far from an Adobe fan, but I feel the need to defend them just a little here.
Everyone with non-trivial software has to do this to some extent. Perfection just isn't possible. The real measure is in where the company finds the balance. I think Adobe needs to tilt toward perfection a bit more, but this is not something that people can do unilaterally without buy-in from the very top of the chain, which I'm guessing GP is not.
Keep in mind that Photoshop isn't the near-monopoly that many people think it is, especially in light of generative AI. If they take too long to ship features, they will similarly be criticized by paying customers who feel Photoshop is hobbled.
They said they were the principal scientist for this change and voluntarily took partial responsibility. But if you look at this in the greater context (things were perfectly fine for decades and then they broke), I'm not sure how it's at all defensible. Just looking at the old vs. new modals, there aren't even really new features. It's just breaking things.
Yeah, but there’s a world of difference between “not perfect” and “we rounded a few sliders, and now the modal is a nightmare to use”.
Rolling out a new UI for such a staple piece of software is smart. But doing it the way they are doing it is absurd. Why even release it when you have basically nothing to show besides a broken modal, rounded sliders and a couple of things made thicker? That’s not being mindful, that’s just someone’s unfinished staging build that got pushed to production by mistake. It’s insane that they’re doing this to Photoshop (of all apps). And honestly, even more insane that anyone would defend anything from Adobe after all the crap they’ve pulled (and continue to pull) over the years.
They are wrong. They are going about it the wrong way. And paying customers deserve a hell of a lot more. Adobe OWES us a better treatment. Big time.
And unfortunately Photoshop very much is the monopoly many think it is. Those complaining about Photoshop being hobbled because it can’t hallucinate AI slop are not Adobe’s target audience and main source of income. Adobe’s only as crap as it is now exactly because it knows it holds basically the entire graphic design industry in a stranglehold. No other apps are currently even close to Adobe’s in terms of compatibility, functionality and support—unfortunately. I wish someone would come and claim Adobe’s crown, but that is simply not happening.
> Shipping software is in perennial tension between getting it perfect and getting it out the door.
First, do no harm. Changing functionality that works is not in tension with getting regressions out the door. Ensure it is working before shipping by hiring testers that use the product to the level or extent of most users.
> We do use Photoshop (though not to the level or extent of most users) and noticed the regressions.
Is there something you want to tell us about management? This is crazy if what you mean is that you knew you broke this for power users but shipped it anyway, or that you don't have power users on payroll constantly testing your product whom you can call "part of the team".
Disclosing my bias up front: I think Adobe is an evil company and I actively avoid them. This is not personal against Adobe employees however. I know there are a lot of people who want things to be better and work their asses off toward that goal.
Indeed, I don't think most people can appreciate how hard the tension is between shipping and perfection. As a fellow perfectionist, it kills me to ship things that I know aren't perfect, but I've had to work on becoming more of a pragmatist, because if I had my perfectionist way, shipping would take years and feedback loops would be so long that it would be somewhat self-defeating (though that's a personal problem). I appreciate you taking the time to respond here, even knowing you'll catch some heat.
We are not talking about perfection. We are talking about breaking a stable piece of software and affecting people's muscle memory with minimal upside to users. People provide for their families with Photoshop. It is unacceptable to push a change that impacts millions of people and then throw your hands up in the air and claim that this is all inevitable because perfection is impossible.
If this was a startup or new software finding a market fit it would be different. This is industry standard, professional software that impacts livelihoods. More thought should go into each release because of this fact.
> Shipping software is in perennial tension between getting it perfect and getting it out the door.
Photoshop is the premiere image editor that has been in existence for decades. The issues you are responding to are fundamental changes to how the application behaves. It defies belief that your team and its processes have this little respect for dedicated users who have spent thousands of dollars on your product over the course of years. I understand shipping software. Do you understand your users?
> Why were these sharp edges not discovered in UAT?
These kinds of sharp edges should *never* have made it as far as UAT. All of these should have been caught in the first prototype and never made it beyond that point.
The fact that they made it all the way to the shipping product shows that too many responsible parties were asleep at the switch.
Obviously they should have a few power users on payroll who find these obvious regressions quickly, and we can call them part of the team that makes Photoshop. I'm not sure why this, or what the lead scientist said, counts as valid justification. Just hire "people that use Photoshop". If they already do this, then the people that make Photoshop do use Photoshop to a sufficient degree.
But moreover, if one has developed Photoshop for 15 years, I'm pretty sure they are aware of power user table-stakes features.
And then one more point:
> Why?
Because that's what it takes to develop high quality software tools. This shouldn't even be up for debate when charging money for software.
They actually have a very slick and very active beta program. I use the betas 99% of the time, and they are updated practically weekly. I'm surprised something like this wasn't reported en masse very quickly. Maybe it's just not annoying enough -- it doesn't reach the threshold for someone to file an issue. I know it's the sort of regression where I would huff and puff and get on with my day.
Kudos for engaging! It’s been a while since I used Photoshop on a daily basis, but my impression was always that the UI felt a bit stuck in time. Like no one had thought about how to make little things in the UI better in a way that improves daily work. You’d see new ideas crammed in on top, but very little refinement of what was already there. (Is the “Wind” filter still only possible to apply left or right, not up or down?)
I think a nice outcome of this would be if Adobe recognized how much these things matter to power users, and that it’s possible to improve existing workflows without disrupting them, and without just adding something new that sits awkwardly side by side with the existing features. Maybe rather than fixing the issues that were introduced, you could aim for something that is thoroughly better, as you need to work through everything anyway.
Thank you for the constructive feedback. I agree power users are a "keystone species" and improving their workflows will benefit everyone.
Improving existing workflows without disrupting them is extremely hard to do, and often "improvement" is in the eye of the beholder. To be clear, I am not excusing issues within the application that we must fix. The team is working hard across multiple departments to gain consensus on how best to move Photoshop forward, including gathering feedback from users.
Clearly broken and unfinished modals such as those in the blog post don’t require much more than a couple of devs to fix, and yet this behaviour is still present in the latest version shipped to customers.
I find it hard to believe that the team is “working hard” to gain consensus on how best to move forward when such simple things make it to production.
Does anyone at Adobe ACTUALLY use Photoshop? Didn’t anyone stop for a moment to think that shipping in such a state was a terrible idea?
Yeah, I understand how difficult working at a company like Adobe can be. But it's still hard for me to sympathize when these dialogs that don't contain much more than a bunch of text fields and buttons are just halfway done and then shipped.
It's not like focusing on the first field when a dialog opens requires months of work. I'm genuinely confused by how this stuff happens; I feel like a regression like this should have been caught in the first PR review and fixed.
Are there OKRs for converting as many dialogs as possible to the new UI library? Or how does that happen?
With all due respect, the customer doesn’t care. You served a raw turkey on Thanksgiving and are acting like there is nothing that could have been done to remedy it. For some reason, leaving it in the oven longer was apparently not an option. You knew it was raw, so why did you serve it?
I keep seeing the same issue over and over again with other companies as well. “Sorry you are disappointed but our internal processes, or we had to do this because of deadlines, yadda yadda, blah blah.”
Does anyone stop and think about why they are developing or shipping a product? It’s not for you to have an overly complicated development, build, or review process. It’s not for you to hit your quota of installed upgrades or versions shipped per quarter. It’s for people to use your product. Your product has utility, and the customer is your client, not the other way around.
Why does Camera Raw in Photoshop always open as some weird full-screen dialog box? I want it maximized, but as it's a dialog it covers even the taskbar, and lacks the usual window controls. If I'm doing denoise, which can take minutes, then I have to try and alt-tab the whole app into the background just to see the taskbar.
Also, for as long as I've been using Camera Raw, on every PC, the mouse lags like absolute crazy on the crop tab, to the point where I have given up using it.
This is only a tangential question, but anyway: I read several years ago that since around Photoshop version 4, 99% of the work has been about keeping the application UI usable with all these new features, and not about "hard" technical challenges within the features themselves. Is that true?
That sounds plausible. Most of the features are kind of gimmicky bolt-ons, added piecemeal and not really integrated with each other. They make for cool 10-second demos, but then most users ignore them because they aren't part of a coherent system. The result is menu after menu of gimmicks, like a cabinet of hyper-specialized kitchen tools bought from infomercials. There has been limited product vision about the core abstractions and their basic composability. If you give a skilled user a Photoshop version from the early 2000s they'll largely be able to do what they need, because there hasn't really been much fundamental, innovative improvement in the past ~25 years.
Microsoft actually does a fairly good job with this. Here's a part of a talk that goes over a single feature in PowerPoint (a slide animation that morphs the contents of one slide into a different slide) and demonstrates how this feature interacts with the enormous existing PowerPoint feature set in interesting ways. https://youtu.be/_3loq22TxSc?t=1409 It's obviously a stupid gimmicky feature but whatever team Microsoft put on it were clearly overachievers.
Thanks for sharing that fascinating video! It seems like a fair bit of work went into it. One criticism I have is that it is undiscoverable and opaque; it is not obvious how it is going to behave. I wonder how many users are aware of it.
There is a lot of effort put in to making the application usable, no question. At the same time, we have added a litany of new features and tools since Photoshop 4, many of which I would describe as extremely technically challenging.
Well it depends on how you define it. We have done several re-skinnings. This migration is a transition from one framework to another, which we’ve never done before. The reasons to leave the old framework lie on several axes (eng, design, product, etc.)
What was the point of this upgrade? Do you not actually use Photoshop yourself, or have people on your team that use Photoshop? Aside from the mea culpa and assurance it will be fixed, users deserve an explanation for why this basic, obviously buggy functionality wasn't discovered immediately during development. Like, seriously, you should explain yourselves.
I’m an artist who spends most of her time in Illustrator. Should I be expecting a UI rework to this “Spectrum Design Language” too, or is that team’s obsession with AI garbage going to take precedence?
I’m sure not looking forward to it; there’s stuff that was “redesigned” the last time this happened a decade or so back that’s still the absolute shittiest thing that works and hasn’t changed at all since then.
Visually the results are very compelling! It also gives an at-a-glance intuition about the image that the bar-style options fail to convey. I am a fan.
Yes, which is why it's easy to then convince people to evacuate. People do die on Everest, including on EBC treks from altitude sickness alone, so severe symptoms usually lead to trekking back down the mountain.
Putting code with side effects into an assert is asking for trouble. Compile with NDEBUG set and the effects mysteriously disappear! Anything beyond an equality expression or straight boolean should be avoided.
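A minimal sketch of the failure mode, with a hypothetical bump() helper standing in for any expression that mutates state:

    #include <cassert>
    #include <cstdio>

    static int counter = 0;

    // Hypothetical helper with a side effect: bumps a global and returns it.
    static int bump() { return ++counter; }

    int main() {
        assert(bump() == 1);  // the whole expression vanishes under NDEBUG
        std::printf("counter = %d\n", counter);
        // A debug build prints "counter = 1"; an NDEBUG build prints
        // "counter = 0", because bump() was never called.
        return 0;
    }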
I once spent several days debugging that same mistake. Stuff worked perfectly in tests but broke mysteriously in production builds. I couldn't stop laughing for a few minutes when I finally figured it out.
Related: our logging system has a debug level that is not logged by default but can be turned on if a problem is found in an area (in addition to the normal error/info levels, which are logged). I had the idea that if a test fails we should print all of these debug messages. That was easy enough to turn on, but a number of tests then failed because of side effects that didn't show up when the debug logging was off.
I'm trying to think of how (or if) we can run tests with all logging off to find the error and info logs with side effects.
This is just a symptom of a bad assert() implementation, which, funnily enough, is the standard one. If you properly (void) it out, side effects are maintained.
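A sketch of what that could look like; MY_ASSERT is a made-up name, not the standard macro:

    #include <cstdio>
    #include <cstdlib>

    // Always evaluate the expression so side effects survive release builds;
    // only check (and abort) when NDEBUG is not defined.
    #ifdef NDEBUG
      #define MY_ASSERT(expr) ((void)(expr))
    #else
      #define MY_ASSERT(expr)                                               \
          ((expr) ? (void)0                                                  \
                  : (std::fprintf(stderr, "assertion failed: %s\n", #expr),  \
                     std::abort()))
    #endif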
assert() is meant to be compiled away if NDEBUG is defined, otherwise it shouldn't be called assert(). Given that assert() may be compiled away, it makes sense not to give it anything that has side effects.
Abseil has the convention where instead of assert(), users call "CHECK" for checks that are guaranteed to happen at run time, or "DCHECK" for checks that will be compiled away when NDEBUG is defined.
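Usage looks roughly like this (header path per current Abseil; Config, Process and index are placeholder names for the sketch):

    #include "absl/log/check.h"

    struct Config;  // placeholder type for this sketch

    void Process(const Config* config, int index) {
        CHECK(config != nullptr) << "config must be provided";  // checked in all builds
        DCHECK_GE(index, 0);  // compiled away when NDEBUG is defined
        // ...
    }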
Genuine question: does Rust know if `expensive_to_compute()` has side effects? There are no params, so could it be compiled out if the return value is ignored? E.g. `expensive_to_compute()`. What about `(void) expensive_to_compute()`?
No, in general Rust doesn't (and can't) know whether an arbitrary function has side effects. The compiler does arguably have a leg up since Rust code is typically all built from source, but there are still things like FFI that act as visibility barriers for the compiler.
No, Rust is the same as C++ in terms of tracking side effects. It doesn't matter that there are no parameters. It could manipulate globals or call other functions that have side effects (e.g. printing).
An assertion can be arbitrarily expensive to evaluate. This may be worth the cost in a debug build but not in a release build. If all of your assertions are cheap, they are likely not checking nearly as much as they could or should.
Possibly, but I've never seen a case in practice where some assert evaluation was the first thing to optimize. Anyway, should that happen, consider removing just that assert.
That being said, being slow or fast is kind of a moot point if the program is not correct. So my advice is to always leave all asserts in. Offensive programming.
Rust has assert and debug_assert, which are self-explanatory. But it also has assert_unchecked, which is what other languages, including C++, call an "assume" (meaning "this condition not holding is undefined behaviour"), with the added bonus that debug builds assert that the condition is true.
Notably, like most things with "unchecked" in their name, `core::hint::assert_unchecked` is unsafe. However, it's also const, that is, we can do this at compile time. It's just a promise that this condition will turn out to be true, so you should use it only as an optimisation.
Necessarily, in any language, you should not optimise until you have measured a performance problem. Do not write this because "I think it's faster". Either you measured, and you know it's crucial to your desired performance, or you didn't measure and you are wasting everybody's time. If you just scatter such hints in your code because "I think it's faster" and you're wrong about it being true the program has UB, if you're wrong about it being faster the program may be slower or just harder to maintain.
I actually feel like asserts ended up in the worst situation here. They let you do quick one-line checks which get compiled out, which makes them very tempting for those, but also incredibly frustrating for more complex, real checks you’d want to run in debug builds but not in release.
The problem is the code unconditionally dereferences the pointer, which would be UB if it was a null pointer. This means it is legal to optimize out any code paths that rely on this, even if they occur earlier in program order.
When NDEBUG is set, there is no test, no assertion, at all. So yes, this code has UB if you set NDEBUG and then pass it a null pointer — but that's obvious. The code does exactly what it looks like it does; there's no tricks or time travel hiding here.
Right, so strictly speaking C++ could do anything here when passed a null pointer, because even though assert terminates the program, the C++ compiler cannot see that, and there is then undefined behaviour in that case.
> because even though assert terminates the program, the C++ compiler cannot see that
I think it should be able to. I'm pretty sure assert is defined to call abort when triggered and abort is tagged with [[noreturn]], so the compiler knows control flow isn't coming back.
Shouldn't control flow diverge if the assert is triggered when NDEBUG is not defined? Pretty sure assert is defined to call abort when triggered and that is tagged [[noreturn]].
I'm sorry, but what exactly is the problem with the code? I've been staring at it for quite a while now and still don't see what is counterintuitive about it.
A lot of compilers will optimize out a NULL pointer check because dereferencing a NULL pointer is UB.
Because assert will not run the following code in the case of a NULL pointer, AFAIK this exact code is still defined behavior. But if for some reason some code dereferenced the NULL pointer before the check, the check would be optimized out - there are some corner cases that aren't obvious on the surface.
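The original snippet isn't reproduced in this thread, but the pattern under discussion is roughly this (hypothetical reconstruction, not the article's exact code):

    #include <cassert>

    int read_value(int *p) {
        assert(p != nullptr);  // with NDEBUG defined, this line compiles to nothing
        return *p;             // so a null p goes straight to the dereference: UB,
                               // and any later "if (p == nullptr)" check in this
                               // function may legally be deleted by the optimizer
    }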
This kind of thing was always theoretically allowed, but really started to become insidious within the past 5-10 years. It's probably one of the more surprising UB things that bites people in the field.
GCC has a flag "-fno-delete-null-pointer-checks" to specifically turn off this behavior.
There is an actual Linux kernel exploit caused by this behavior, where the compiler optimized out code that checked for a NULL pointer and returned an error.
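The general shape of that class of kernel bug, sketched with hypothetical types rather than the actual kernel code:

    #include <cstdio>

    struct Sock { int refcount; };
    struct Tun  { Sock* sk; };

    int poll_device(Tun* tun) {
        Sock* sk = tun->sk;   // dereference first: the compiler may now assume
        if (!tun)             // tun != nullptr and delete this check entirely,
            return -1;        // so a null tun never reaches the error path
        std::printf("refs: %d\n", sk->refcount);
        return 0;
    }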
Sure, but none of that is relevant to just the code snippet that was posted. The compiler can exploit UB in other code to do weird things, but that's just C being C. There's nothing unexpected in the snippet posted.
The issue is caused by C declaring that dereferencing a null pointer is UB. It's not really an issue with assertions.
You can get the same optimisation-removes-code for any UB.
> There's nothing unexpected in the snippet posted.
> The issue is cause by C declaring that dereferencing a null pointer is UB. It's not really an issue with assertions.
> You can get the same optimisation-removes-code for any UB.
I disagree. It’s a 4-line toy example, but in a 30-40 line function these things are not always clear. The actual problem is that if you compile with NDEBUG=1, the nullptr check is removed and the optimiser can (and will, currently) do unexpected things.
The printf sample above is a good example of the side effects.
> The actual problem is if you compile with NDEBUG=1
That is entirely expected by any C programmer. Sure they named things wrong - it should have been something like `assert` (always enabled) and `debug_assert` (controlled by NDEBUG), as Rust did. And I have actually done that in my C++ code before.
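A sketch of that split, using hypothetical macro names:

    #include <cstdio>
    #include <cstdlib>

    // Always-on check, regardless of NDEBUG.
    #define ASSERT_ALWAYS(expr)                                           \
        ((expr) ? (void)0                                                 \
                : (std::fprintf(stderr, "%s:%d: assertion failed: %s\n",  \
                                __FILE__, __LINE__, #expr),               \
                   std::abort()))

    // Debug-only check, compiled away like the standard assert.
    #ifdef NDEBUG
      #define DEBUG_ASSERT(expr) ((void)0)
    #else
      #define DEBUG_ASSERT(expr) ASSERT_ALWAYS(expr)
    #endif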
But I don't think the mere fact that assertions can be disabled was the issue that was being alluded to.
I wrote the comment; assertions being disabled was exactly what was being alluded to.
> that is entirely expected by any C programmer
That’s great. Every C programmer also knows to avoid all the footguns and nasties - yet we still have issues like this come up all the time. I’ve worked as a C++ programmer for 12 years and I’d say it’s probably 50/50 in practice how many people would spot that in a code review.
It's definitely a footgun, but the compiler isn't doing weird stuff because the assertions can be disabled. It's doing weird stuff because there's UB all over the place and it expects programmers to magically not make any mistakes. Completely orthogonal to this particular (fairly minor IMO) footgun.
> I’ve worked as a C++ programmer for 12 years and I’d say it’s probably 50/50 in practice how many people would spot that in a code review.
Spot what? There's absolutely nothing wrong with the code you posted.
Depends on where you're coming from, but some people would expect it to enforce that the pointer is non-null, then proceed. Which would actually give you a guaranteed crash in case it is null. But that's not what it does in C++, and I could see it not being entirely obvious.
If you don't even know what that would mean then it's premature to declare that nothing works that way. Understanding the meaning is a prerequisite for that.
In this case, it may help to understand that e.g. border control enforces a traveler's permission to cross the border, then lets them proceed.
Not just for functional programmers. Prints and other I/O operations absolutely are side effects. That's not running counter to the point being made. Put a print in an assert, and NDEBUG takes away that behavior.
You're right of course. I was thinking specifically of printing log/debug statements in the assert(..), but that usually only happens if the assert(..) fails and exits, and in that case the "no side effects" rule no longer matters.
Before preaching it to others, the writing of a homily or sermon first needs to affect the heart of the one delivering it. Such heart-work is exceedingly difficult if not impossible with AI.
Any word on how much more memory safe the implementation is? If passing a previous test suite is the criterion for success, what has changed, really? Are there previous memory safety tests that went from failing to passing?
I am very interested to know if this time and energy spent actually improved memory safety.
Other engineers facing the same challenges want to know!
If the previous impl had known memory safety issues I'd imagine they'd fix them as a matter of priority. It's hard to test for memory safety issues you don't know about.
On the rust side, the question is how much `unsafe` they used (I would hope none at all, although they don't specify).
It is entirely possible a Rust port could have caught previously unknown memory safety issues. Furthermore, a Rust port that looks and feels like C++ may be peppered with unsafe calls to the point where the ROI on the port is greatly reduced.
I am not trying to dunk on the effort; quite the contrary. I am eager to hear more about the goals it originally set out to achieve.
> We do [cubic curve fitting] all the time in image processing, and it works very well. It would probably work well for audio as well, although it's not used -- not in the same form, anyway -- in these applications.
Is there a reason the solution that "works very well" for images isn't/can't be applied to audio?
The short answer is that our eyes and ears use very different processing mechanisms. Our eyes sense using rods and cones, where the distribution of them reflects a spatial distribution of the image. Our ears instead work by performing an analogue Fourier transform and hearing the frequencies. If you take an image and add lots of very high frequency noise, the result will be almost indistinguishable, but if you do the same for audio it will sound like a complete mess.
The only one that says it is a cubic interpolation is the "Renoise 2.8.0 (cubic)" one, and the spectrogram isn't very promising, with all sorts of noise, intermodulation and aliasing issues. And, by switching to the 1 kHz tone spectrum view, you can see some harmonics creeping up.
When I used to mess with trackers I would sometimes choose different interpolations, and bicubic definitely still colored the sound, with sometimes enjoyable results. Obviously you don't want that as a general resampler...
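For reference, the cheap cubic that trackers typically offer is a 4-point Catmull-Rom (cubic Hermite) interpolation, something like this sketch (boundary handling omitted; not Renoise's actual code):

    #include <cstddef>
    #include <vector>

    // Read a sample at a fractional position using 4-point cubic interpolation.
    float cubic_sample(const std::vector<float>& src, double pos) {
        std::size_t i = static_cast<std::size_t>(pos);  // assumes 1 <= i <= size-3
        float t  = static_cast<float>(pos - i);
        float ym = src[i - 1], y0 = src[i], y1 = src[i + 1], y2 = src[i + 2];

        float c1 = 0.5f * (y1 - ym);
        float c2 = ym - 2.5f * y0 + 2.0f * y1 - 0.5f * y2;
        float c3 = 0.5f * (y2 - ym) + 1.5f * (y0 - y1);
        return ((c3 * t + c2) * t + c1) * t + y0;
    }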
Just to note that this site hasn't been updated for a while.
A much better, more modern site with automated upload analysis would be [1], although it is designed for finding the highest fidelity resampler rather than for A/B comparisons.
If the Standard has anything to say about compatibility between different language versions, I doubt many developers know those details. This is a breeding ground for ODR violations, as you’re likely using compilers that produce different output (having been built in different eras of the language’s lifetime), especially at higher optimization settings.
This flies in the face of modern principles like building all your C++, from source, at the same time, with the same settings.
Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design. Unless your whole team is made up of moderate-level language lawyers, you must enforce this by some other means or risk some really gnarly issues.
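One concrete way mixed settings bite, as a hypothetical sketch (here NDEBUG stands in for any flag that changes layout):

    // widget.h -- shared header
    struct Widget {
    #ifndef NDEBUG
        int debug_serial = 0;   // member only exists in debug builds
    #endif
        double value = 0.0;
    };

    // If a.cpp is compiled with -DNDEBUG and b.cpp without it, the two
    // translation units disagree on sizeof(Widget) and member offsets.
    // Each is well-formed on its own, the linker accepts the mix without
    // complaint, and the ODR violation only shows up as corruption at run time.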
> Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design.
Historically, C++ compilers' name mangling scheme for symbols did precisely the same thing. The 2000-2008 period for gcc was particularly painful, since the compiler developers changed it very frequently to "prevent these kinds of issues by design". The only reason most C++ developers don't think about this much any more is that most C++ compilers haven't needed to change their mangling scheme for a decade or more.
C++’s name mangling scheme handles some things like namespaces and overloading, but it does not account for other settings that can affect the ABI layer of the routine, like compile time switches or optimization level.
The name mangling scheme was changed to reflect things other than namespaces and overloading, it was modified to reflect fundamental compiler version incompatibilities (i.e. the ABI)
Optimization level should never cause link time or run time issues; if it does I'd consider that a compiler/linker bug, not an issue with the language.