Hacker News | yvdriess's comments

Ignoring the snark for a second: being unfamiliar with the Go toolchain does not make it inherently bad, nor does it put you in a good position to give accurate criticism.

- "tea" is an explicit alias that was added to the import statement in the tutorial examples, which you did not reflect in your snippet:

    import tea "charm.land/bubbletea/v2"

- The following also just works, contrary to what you assumed it wouldn't:

    import "github.com/charmbracelet/bubbletea"

The only surprise here is that the repository authors decided to change the name of the module between v1 and v2 of the package. The git branch tagged v2 contains a module named 'charm.land/bubbletea', while the earlier v1 branches use 'github.com/charmbracelet/bubbletea'. That's on them for breaking convention and surprising users; the go toolchain does not factor into this, beyond supporting versioning by naming convention.
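For context, this rename interacts with Go's semantic import versioning: from v2 onward the major version must appear in the module path itself. A sketch of what the go.mod on each branch plausibly declares (paths taken from this thread, not verified against the repository):

```
// go.mod on a v1 branch
module github.com/charmbracelet/bubbletea

// go.mod on the v2 branch: both the host and the required /v2 suffix changed
module charm.land/bubbletea/v2
```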


I would be fine with a chaotic, bubbly mess of an outside presentation if the libraries were more robust and foundational. At the moment the underlying code, when you scratch the surface, has the feel of something thrown together to be replaced at a later date.

I bounced off of Bubble Tea not because of the aesthetics or the unhelpful naming, but because of the programming model: a Model-View-Update architecture cribbed from the Elm language. Why? It completely takes over and rips apart my CLI structure. A CLI is not a DOM or System.Windows.Forms; MVU needlessly scatters logic and adds indirection layers.

I am still using huh? and vhs, but these libraries have the feel of looking really good in demos and the provided examples, yet breaking down quickly when you color just outside the intended lines.


Everything that was old will become new again. Content/structural version control used to be a research field; Pharo still uses one, afaik: https://scg.unibe.ch/archive/papers/Nier13bMonticello.pdf


Yeah I am actually super excited for these new kind of innovations.

These things are genuinely not letting me sleep these days.


Yeah. It's telling that this story is about their discord channel, not Teams.


The problem is that there is no baseline for measuring GC overhead. You cannot turn collection off; you can only replace it and compare different strategies. For example, allocating with sbrk and never freeing is technically a no-op GC, but that also has overhead and impact, because it never compacts objects and gives you bad cache behavior. (It illustrates the OP's point that measuring pauses is not enough: sbrk has no pauses but is easily outperformed.)

You could stop collecting performance counters around GC phases, but even if you are not measuring, the CPU still runs through those instructions, causing the second-order effects. And, as you mentioned, too-short-to-measure barriers and other bookkeeping overheads (updating reference counts, etc.), or simply the fact that some tag bits or object slots are reserved, all impact performance.

There is a good write-up of the problem and a way to estimate the cost based on different GC strategies, as you suggested, here: https://arxiv.org/abs/2112.07880

The way I found to measure a no-GC baseline is to compare runs in an accurate workload performance simulator: mark all GC- and allocator-related code regions and have the simulator skip those instructions. Critically, this needs to be a simulator that does not itself do the functional simulation, but gets its instructions from a functional simulator, emulator or PIN tool that does execute everything. It's laborious, not very fast and impractical for production work, but it's the only way I found to answer a question like "What is the absolute overhead of memory management in Python?". (Answer: the lower bound on walltime sits around +25% on average, depending heavily on the pyperformance benchmark.)


Mandatory mention of notable actor languages:

  - Erlang and Elixir
  - E
  - AmbientTalk


And they could be 0- or 1-indexed? :P


TSO? Did you mean tso.architecture.cpu?


Either that or https://www.ibm.com/docs/en/zos-basic-skills?topic=interface...

I would prefer we didn’t have so many collisions in the TLA space.


The last company I worked at created TLAs for everything, or at least engineering always did. New company doesn't seem to have caught that bug yet though, thankfully.


> > Linear virtual addresses were made to be backwards-compatible with tiny computers with linear physical addresses but without virtual memory.

> That is false. In the Intel World, we first had the iAPX 432, which was an object-capability design. To say it failed miserably is overselling its success by a good margin.

That's not refuting the point he's making. The mainframe-on-a-chip iAPX family (and Itanium after it) died and had no heirs. The current popular CPU families are all descendants of the stopgap 8086, which evolved from tiny-computer CPUs, or of ARM's straight-up embedded designs.

But I do agree with your point that a flat (global) virtual memory space is a lot nicer to program against. In practice we have been moving fast away from that again, though; the kernel has to struggle to keep up the illusion: NUCA, NUMA, CXL.mem, various mapped accelerator memories, etc.

Regarding the iAPX 432, I do want to set the record straight, as I think you are insinuating that it failed because of its object memory design. The iAPX failed mostly because of its abysmal performance characteristics, but that was, in retrospect [1], not inherent to the object-directory design. It lacked very simple look-ahead mechanisms, and had no instruction or data caches, no registers, and not even immediates. Performance did not seem to be a top priority in the design, to paraphrase one of its architects. Additionally, the compiler team was not aligned and failed to deliver on time, which only compounded the performance problem.

  - [1] https://dl.acm.org/doi/10.1145/45059.214411


The way you selectively quoted: yes, you removed the refutation.

And regarding the iAPX 432: it was slow in large part due to the failed object-capability model. For one, the model required multiple expensive lookups per instruction. And it required tremendous numbers of transistors, so many that despite forcing a (slow) multi-chip design there still wasn't enough transistor budget left over for performance enhancing features.

Performance enhancing features that contemporary designs with smaller transistor budgets but no object-capability model did have.

Opportunity costs matter.


I agree on the part of the opportunity cost and that given the transistor budgets of the time a simpler design would have served better.

I fundamentally disagree on putting the majority of the blame on the object memory model. The problem was that they compounded the added complexity of the object model with a slew of other unnecessary complexities. They somehow did find the budget to put the first full IEEE floating-point unit in the execution unit, and implemented a massive [1] decoder and microcode for the bit-aligned 200+ instruction set and for interprocess communication. The expensive lookups per instruction had everything to do with cutting caches and programmable registers, not with any overwhelming complexity in the address translation.

I strongly recommend checking the "Performance effects of architectural complexity in the Intel 432" paper by Colwell that I linked in the parent.

[1] die shots: https://oldbytes.space/@kenshirriff/110231910098167742


A huge factor in the iAPX 432's utter lack of success was the set of technological restrictions, like pin-count limits, laid down by Intel's top brass, which forced stupid and silly limitations on the implementation.

That's not to say that the iAPX 432 would have succeeded under better management, only that you cannot point to some random part of the design and say "That obviously does not work".


Your critique applies to measuring one or a handful of instructions. In practice you count cycles over millions or billions of instructions. CPI is very meaningful; it is the main throughput performance metric for CPU core architects.

