flohofwoe's comments (Hacker News)

> "redraw everything the whole frame" and "don't do any diffing" sound insane in this regard.

You need to consider that a web browser with its millions of lines of code in the DOM and rendering engine is pretty much the worst case for "redrawing a complex UI each frame", especially since the DOM had been designed for mostly static 'documents' and not highly dynamic graphical UIs.

Add React on top and the whole contraption might still be busy with figuring out what has changed and needs to be redrawn at the time an immediate mode UI sitting directly on top of a 3D API is already done rendering the entire UI from scratch.

A native immediate mode UI will easily be several hundred times less code (for instance Dear ImGui is currently just under 50kloc 'orthodox C++').


When the UI is highly dynamic/animated it needs to be redrawn each frame even in a 'retained mode' UI framework.

When the UI is static and only needs to change on user input, an immediate mode UI can 'stop' too until there's new input to process.

For further low-power optimizations, immediate mode UI frameworks could skip describing parts of the UI when the application knows that those parts don't need to change. (Contrary to popular belief, immediate mode UI frameworks do track and retain state between frames, just usually less than retained mode UIs - how much state is retained is an internal implementation detail.)


The problem is that widgets still need to store state somewhere, and that storage space needs to be reclaimed at some point. How does the system know when that can be done? I suppose the popular approach is to just reclaim space that wasn't referenced during a draw.

However ...

When you have a listbox of 10,000 rows and you only draw the visible rows, then the others will lose their state because of this.

Of course there are ways around that but it becomes messy. Maybe so messy that retained mode becomes attractive.


> How does the system know when that can be done?

At the earliest in the first frame in which the application's UI description code doesn't mention a UI item. (That means UI items need a persistent id; in Dear ImGui this is a string hash, usually created from the item's label, which can contain a hidden `##` suffix to make it unique, plus a push/pop id stack for hierarchical namespacing.)

> then the others will lose their state because of this

Once an item is visible, the state must have been provided by the application's UI description code; when the item is invisible, that state becomes irrelevant.


> when the item is invisible, that state becomes irrelevant.

What happens when the item moves out of view, e.g. because the user scrolls down?

State should be preserved, because the user might scroll back up.


Once the item becomes visible, the application's UI code provides the item's state again.

E.g. pseudocode:

    for (firstVisibleItemIndex .. lastVisibleItemIndex) |itemIndex| {
        ui_list_item(itemIndex, listItemValues[itemIndex]);
    }
For instance Dear ImGui has the concept of a 'list clipper' which tells the application the currently visible range of a list or table column, and the application only provides the state of the currently visible items to the UI system.

Ok, but now items 1,000 through 10,000 are deleted from the data container.

How does the immediate mode system know that the corresponding state can be deleted too?

Does the system provide tools for that or does the burden lie on my application code?


Same way as for regular UI items: if the application's UI code no longer "mentions" those items, their state can be deleted (assuming the immediate mode UI tracks hidden items for some reason).

The job of the immediate UI is to just draw the things. Where and how you manage your state is completely up to you.

It seems you assume some sort of OO model.

> When you have a listbox of 10,000 rows and you only draw the visible rows, then the others will lose their state because of this.

Well keep the state then.

Immediate mode really just means you have your data as an array of things or whatever and the UI library creates the draw calls for you. Drawing and data are separate.


> The job of the immediate UI is to just draw the things. Where and how you manage your state is completely up to you.

This is a bit oversimplified. For instance Dear ImGui needs to store at least the window positions between frames since the application code doesn't need to track window positions.


Well, I can keep the state, but a retained mode UI model does it for me :)

But then you have state in two places, user code and the retained-mode GUI framework, which need to be synced - that's where complexity creeps in. Immediate mode removes that redundancy and makes things simpler in many situations. It depends on your preference and what you're doing too, which approach suits better.

But why do you think retained mode was invented if "just drawing" is so simple?

Here's an informative explanation from the Dear ImGui library, which chose this approach:

https://github.com/ocornut/imgui/wiki/About-the-IMGUI-paradi...


Isn't it the other way around?

The more dynamic/animated a UI is, the less difference there is between a retained- and immediate-mode API, since the UI needs to be redrawn each frame anyway. Immediate mode UIs might even be more efficient for highly dynamic UIs because they skip a lot of internal state update code (like creating/destroying/showing/hiding/moving widget objects).

Immediate-mode UIs can also be implemented to track changes and retain the unchanged parts of the UI in baked textures, it's just usually not worth the hassle.

The key feature of immediate mode UIs is that the application describes the entire currently visible state of the UI each frame, which allows the UI code to be 'interleaved' with application state changes (e.g. no callbacks required). How this per-frame UI description is translated into pixels on screen is more or less an implementation detail.


> The more dynamic/animated an UI is, the less there's a difference between a retained- and immediate-mode API, since the UI needs to be redrawn each frame anyway. Immediate mode UIs might even be more efficient for highly dynamic UIs because they skip a lot of internal state update code - like creating/destroying/showing/hiding/moving widget objects).

That depends on the kind of animations - typically for user interfaces it's just moving, scaling, playing with opacity etc., and that's just updating the matrices once.

So you describe the scene graph once (this rectangle here, upload that texture there, this border there) using DOM, QML etc., and then just update the item properties on it.

As far as the end user/application developer is concerned, this is retained mode. As far as the GPU is concerned, it can be redrawing the whole UI every frame.


> it's just moving, scaling, playing with opacity etc.. that's just updating the matrices once.

...any tiny change like this will trigger a redraw (e.g. the GPU doing work) that's not much different from a redraw in an immediate mode system.

At most the redraw can be restricted to a part of the visible UI, but here the question is whether such a 'local' redraw is actually any cheaper than just redrawing everything (since figuring out what needs to be redrawn might be more expensive than just rendering everything from scratch - YMMV of course).


It's not only about what gets redrawn but also about how much of the UI state is still retained (by the GPU). Imagine having to reupload all the textures and meshes to the GPU every frame.

Something like a lot of text? Probably easier to redraw everything in immediate mode.

Something like a lot of images just moving and scaling around? Easier to retain that state on the GPU and just update a few values here and there...


> Easier to retain that state in GPU and just update a few values here and there

It's really not that trivial to estimate, especially on high-dpi displays.

Rendering a texture with a 'baked UI' to the framebuffer might be "just about as expensive" as rendering the detailed UI elements directly to the framebuffer.

Processing a pixel isn't inherently cheaper than processing a vertex, but there are a lot more pixels than vertices in typical UIs (a baked texture might still win when there's a ton of alpha-blended layers though).

Also, you'd of course need to aggressively batch draw calls (e.g. Dear ImGui only issues a new render command when the texture or clipping rectangle changes, so a whole window will typically be rendered in one or two draw calls).


> who has to regularly turn my VPN on and off to have full internet access,

Is this because the EU or your country has blocked access, or some news site from the US blocking access from the EU because they don't want to deal with GDPR?


> but absolutely no one is going to switch from C to C++ just for dtors

The decision would be easier if the C subset in C++ were compatible with modern C standards instead of being a non-standard dialect of C stuck around 1995.


No, but also skip malloc/free until late in the year, and when it comes to heap allocation, don't use example code which allocates and frees single structs; instead introduce concepts like arena allocators (to bundle many items with the same max lifetime), pool allocators with generation-counted slots, and other memory management strategies.

Are there any C tutorials you know of that do that, so I can try to learn how to do it?

Shameless plug ;)

https://floooh.github.io/2018/06/17/handles-vs-pointers.html

This only covers one aspect though (pools indexed by 'generation-counted-index-handles' to solve temporal memory safety - e.g. a runtime solution for use-after-free).


> still stay with C89

You're missing out on one of the best-integrated and most useful features that have been added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't get close when it comes to data initialization.


You mean what Ada and Modula-3, among others, already had before it came to C99?

Who cares who had it first, what matters is who has it, and who doesn't...

Apparently some do, hence my reply.

Just straight up huffing paint are we.

Explain why? Have you used C99 designated init vs other languages?

E.g. neither Rust, Zig nor C++20 can do this:

https://github.com/floooh/sokol-samples/blob/51f5a694f614253...

Odin gets really close but can't chain initializers (which is ok though):

https://github.com/floooh/sokol-odin/blob/d0c98fff9631946c11...


In general it would help if you spent some text describing what features of C99 are missing in other languages. Giving some code and assuming that the reader will figure it out is not very effective.

As far as I can tell, Rust can do what is in your example (with different syntax of course) except for this particular way of initializing an array.

To me, that seems like a pretty minor syntax issue that could be fixed in Rust if there were a lot of demand for initializing arrays this way.


I can show more code examples instead:

E.g. notice how here in Rust each nested struct needs a type annotation, even though the compiler could trivially infer the type. Rust also cannot initialize arrays with random access directly, it needs to go through an expression block. Finally Rust requires `..Default::default()`:

https://github.com/floooh/sokol-rust/blob/f824cd740d2ac96691...

Zig has most of the same issues as Rust, but at least the compiler is able to infer the nested struct types via `.{ }`:

https://github.com/floooh/sokol-zig/blob/17beeab59a64b12c307...

I don't have C++ code around, but compared to C99 it has the following restrictions:

- designators must appear in order (a no-go for any non-trivial struct)

- cannot chain designators (e.g. `.a.b.c = 123`)

- doesn't have the random array access syntax via `[index]`

> ...like a pretty minor syntax issue...

...sure, each language only has a handful of minor syntax issues, but these papercuts add up to a lot of syntax noise to sift through when compared to the equivalent C99 code.


In Rust you can do "fn new(field: Field) -> Self { Self { field } }". This is in my experience the most common case of initializers in Rust. You don't mention one of the features of the Rust syntax: that you only have to specify the field name when you have a variable with the same name. In my experience, that reduces clutter a lot.

I have to admit, the ..Default::default() syntax is pretty ugly.

In theory Rust could do "let x: Foo = _ { field }" and "Foo { field: _ { bar: 1 } }". That doesn't even change the syntax. It's just whether enough people care.


Designated initializers are really great and it's really annoying that C++ has such a crappy version of them. I wish there were a way to tell the compiler that the default value of some fields should not necessarily be 0, though it's ergonomic enough to do that anyway with a macro, since repeated designators override earlier values.

i.e.

  struct foo { int a; struct { float b; const char *c; } d; };
  #define DEFAULT_FOO  .a = 1, .d = { .b = 2.0f, .c = "hello" }

  ...
  struct foo bar = { DEFAULT_FOO, .a = 2 };

Destructors are only comparable when you build an OnScopeExit class which calls a user-provided lambda in its destructor, which then does the cleanup work - so it's more like a workaround to build a defer feature out of C++ features.

The classical case of 'one destructor per class' would require designing the entire code base around classes, which comes with plenty of downsides.

> Anyone who writes C should consider using C++ instead

Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)


> in the real world we need at least polymorphism and operator overloading

Maybe in your real-world ;)

Building your game code around classes with virtual methods has been a bad idea since at least the early 2000s (though both static and dynamic polymorphism are something Zig can do just fine when needed), and the only important use case for operator overloading in game-dev is vector/matrix math, where Zig is going down a different road (using builtin vector types, which may one day be extended with a builtin matrix type - there is some interest in using Zig for GPU code, and at least that use case will require proper vector/matrix primitives - but not operator overloading).


The optimization work happens in the LLVM backend, so in most cases (and using the same optimization and target settings - which is an important detail, because by default Zig uses more aggressive optimization options than Clang), similar Zig and C code translates to the exact same machine code (when using Clang to build the C code).

The same should be true for any compiled language sitting on top of LLVM btw, not just C vs Zig.

