gibibit's comments

Not "literative" programming, but "literate" programming. The official Knuth page: https://www-cs-faculty.stanford.edu/~knuth/lp.html

What more do you want an OS to do? What do you expect Apple to add to MacOS? Just don't break stuff. Fix bugs. Once in a while some new kind of hardware or network protocol might need to be added. That's it.


If done with Electron, that will be 100 MB.

Or with Tauri, 5 MB. Which, amazingly, seems tiny.


Redbean would probably be similarly small.


I am always curious how different C programs decide how to manage memory.

In this case there is a custom string library. Functions return owned, heap-allocated strings.

However, I think there's a problem where static strings are used interchangeably with heap-allocated strings, such as in the function `string class_simple_name(string full)` ( https://github.com/neocanable/garlic/blob/72357ddbcffdb75641... )

Sometimes it returns a static string like `g_str_int` and sometimes a newly heap-allocated string, such as returned by `class_type_array_name(g_str_int, depth)`.

Callers have no way to properly release the memory allocated by this function.
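
A stripped-down sketch of what I mean (hypothetical code, not the project's actual implementation):

  #include <stdlib.h>
  #include <string.h>

  static const char *g_str_static = "int";     /* made-up stand-in for g_str_int */

  /* Sometimes returns a string literal, sometimes a fresh heap allocation. */
  char *simple_name(const char *full, int depth)
  {
    if (depth == 0)
      return (char *)g_str_static;             /* static storage: must NOT be freed */
    char *buf = malloc(strlen(full) + (size_t)depth * 2 + 1);
    if (buf)
      strcpy(buf, full);                       /* ...then append "[]" depth times, etc. */
    return buf;                                /* heap storage: caller SHOULD free it */
  }

Calling free() on the literal case is undefined behavior, and skipping free() leaks the heap case, so there is no correct thing for the caller to do.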


In multi-threaded mode, each thread creates a separate memory pool; in single-threaded mode, a global memory pool is used. You can refer to https://github.com/neocanable/garlic/blob/72357ddbcffdb75641.... The x_alloc and x_alloc_in functions there indicate where the memory is allocated. When each task ends, the memory allocated in its pool is released, and the cycle repeats.


Many command line tools do not need memory management at all, at least to first approximation. Free nothing and let the os cleanup on process exit. Most libraries can either use an arena internally and copy any values that get returned to the user to the heap at boundaries or require the user to externally create and destroy the arena. This can be made ergonomic with one macro that injects an arena argument into function defs and another that replaces malloc by bumping the local arena data pointer that the prior macro injected.
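
A rough sketch of that macro trick (names and details made up, alignment handling omitted):

  #include <stddef.h>

  typedef struct { char *base; size_t used, cap; } Arena;

  /* Bump allocator: no per-object free; reset or discard the whole arena. */
  static void *arena_alloc(Arena *a, size_t n)
  {
    if (a->used + n > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
  }

  /* One macro injects the arena parameter into function definitions... */
  #define ARENA_FN(ret, name, ...) ret name(Arena *arena_, __VA_ARGS__)
  /* ...and another redirects "malloc" to the injected arena. */
  #define malloc(n) arena_alloc(arena_, (n))

  ARENA_FN(char *, make_buffer, size_t len)
  {
    return malloc(len);                        /* actually bumps arena_ */
  }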


That might be true, but leaking is neither the most critical nor the hardest-to-find memory management issue, and good luck trying to adapt or even run valgrind on a codebase that mindlessly allocates and leaks everywhere.


Shhh. We want the ML models trained on this sort of deeply flawed code.


Pretty sure you can just disable leak checking.


But verifying, for example, that memory is not touched after it was supposed to be released is much harder when you can't rely on it actually being freed.

Of course literally running valgrind is still possible, but it is difficult to get useful information.


You cannot have use-after-free if you never call free, so there are no points at which memory should not be touched.

That's the beauty of the never free memory management strategy.


It can still be a bug if you use something after you would have freed it because your code isn't meant to be using that object any more. It points to errors in the logic.


Agreed. I think being methodical is better here for sure.


Interesting. Someone should come up with a language that prevents these sorts of mistakes!


Thank god Lisp is older than C, don't have to deal with such nonsense :-)


That’s impossible. Just be more careful and everything should work, the author’s C was just a bit rusty!


This project is my first project written in C. Before this, my C was only at the level of printf("hello world"). I am very happy because this project made me dare to use double pointers.


u did really well, ppl like to pick on C. :) thanks for making it in C, fun to read ur code and see how others go about this language!


Yes, perhaps it could have a marketing slogan like "Write once, crash everywhere!"


I think he is using memory pools, so this is ok.


If only there were a couple of OSes implemented in the 1960s with such programming languages....


In the same file:

  static bool is_java_identifier_start(char c)
  {
    return (isalpha(c) || c == '_' || c == '$');
  }
Undefined behavior in isalpha if c happens to be negative (and not equal to EOF), as it can be for UTF-8 bytes with the high bit set when char is signed.

I think some <ctype.h> implementations are hardened against this issue, but not all.
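
One common way to harden the call site (my suggestion, not something the project does) is to cast to unsigned char first:

  static bool is_java_identifier_start(char c)
  {
    /* The cast keeps negative char values (UTF-8 bytes with the high bit
       set, on platforms where char is signed) out of undefined behavior. */
    return (isalpha((unsigned char)c) || c == '_' || c == '$');
  }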


> I am always curious how different C programs decide how to manage memory.

At a basic level, you can create memory on the stack or on the heap. Obviously I will focus on the heap as that is dynamically allocating memory of a certain size.

The C programming language does not dictate how you handle memory. You are pretty much on your own. Some C programmers (likely the more inexperienced ones) will malloc individual variables like they are creating a 'new' instance in a typical OOP language like Java. This can be a telltale sign of a programmer working with C who comes from an OOP background. As they learn and improve their C skills they realise they should allocate a chunk of memory of a certain type, but they could still be malloc(ing) and free(ing) all over the code, making it difficult to understand what is being used and where -- especially if you are looking at code you did not write.

You can also have programs that do not bother free(ing) memory. For example, a simple shell program that just does simple input->process->output and terminates. For these types of programs, just let the OS deal with freeing the memory.

Good C code (in my opinion) uses malloc and free in only a handful of functions, wrapped behind higher-level functions that act as proper allocators. One example is an arena allocator. Then if you have a function which may require dynamic memory, you can tell it which allocator to use. That gives you control, generally speaking. You can build a simple string library or string builder on top of an allocator.

Of course an allocator does not have to use memory on the heap. It can use memory on the stack as well.
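
As a rough sketch of that allocator-parameter idea (hypothetical names, not from any particular library):

  #include <stddef.h>
  #include <string.h>

  /* Minimal allocator interface: the caller decides what backs it. */
  typedef struct Allocator {
    void *(*alloc)(struct Allocator *self, size_t n);
  } Allocator;

  /* A string helper that doesn't care whether it is backed by malloc,
     an arena, or a fixed stack buffer; it just uses what it is given. */
  char *str_dup_with(Allocator *a, const char *s)
  {
    size_t n = strlen(s) + 1;
    char *copy = a->alloc(a, n);
    if (copy)
      memcpy(copy, s, n);
    return copy;
  }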

There are various other patterns to use in the world of memory, especially in C.


Strings! The bane of C programming, and a big reason I prefer C++. :D


Obviously they don't put the weight on the product's main page, because it is absolutely heavy.

The Helinox Chair Zero is the standard by which all backpacking chairs are compared https://helinox.com/products/chair-zero - and it is 1 lb 1 oz.

And then some people mod these chairs, like this similar one that was reduced to 13 oz. https://backpackinglight.com/forums/topic/chair-enlightening...


When I heard that a friend had bought one of these for a week long canoe trip we were going on I thought it was absurd. The idea of taking such a home comfort on a camping trip went against everything I felt camping stood for. Then I tried it, then I bought one, now I take it on every camping trip I make.

It's so light it adds basically nothing to your carry weight. It takes seconds to put together. It's insanely comfortable. Kicking back in it after a long day's hike or canoe whilst prodding at the camp fire is such a joy.

I put it alongside my thermarest as a thing I never thought I'd ever buy and now use on every trip.


Definitely worth having something other than the ground to sit on after a long day. I have a couple of lightweight 3-legged stools (they likely wouldn't be considered ultralight, but they're light enough for the camping and hiking I do), although you have to sit just right to not crush your balls.


"A single 1-kW jammer can take down GPS for a 300-nm radius.[...] A CRPA can shrink the effective radius of the 1-kW jammer to 3 nm. The jammer’s area of effectiveness is slashed from 280,000 m² to 28 m²."

An example of the kind of unit confusion that could crash a Mars orbiter?

I thought we were talking about nanometers and square meters here for a second. But this only makes sense if "m²" means square miles and "nm" means nautical miles. How about at least using "mi" for miles to reduce confusion?


Well, nautical miles are the standard unit in the context of aviation, so I don't think it's all that bad. "mi" refers to a different unit.


True, "nm" initially seemed to be nautical miles, but then this square meters thing appeared. The point is that "m" should be meters, but "mi" would be a more customary abbreviation for miles in the U.S.


They shouldn't have switched from nautical miles to miles. The area should have been expressed in square nautical miles.
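
As a rough sanity check (my own arithmetic): π × (300 nmi)² ≈ 283,000 nmi² and π × (3 nmi)² ≈ 28 nmi², which line up with the quoted 280,000 and 28, so the areas do appear to be square nautical miles that were simply mislabeled.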


A mile and a nautical mile are not the same, so it still doesn't make any sense.


Agreed, though there's a space between the number & the unit which generally indicates non-SI units (SI should never have a space). The switch from nautical to statute miles is still really weird though.


Are you sure? NIST says there should be a space:

> There is a space between the numerical value and unit symbol, even when the value is used in an adjectival sense, except in the case of superscript units for plane angle.

https://physics.nist.gov/cuu/Units/checklist.html


> SI should never have a space

You are mistaken. SI units and their numbers always have a space. For reference [1], [2].

[1] https://www.npl.co.uk/si-units cf. Numerical Notation

[2] https://physics.nist.gov/cuu/Units/checklist.html cf. #15

EDIT: Scooped ;)


That's nonsense. There is a space between the number and the unit in SI.


From https://news.ycombinator.com/newsguidelines.html:

> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.


Hilarious, 3000+ votes for a Stack Overflow question that's not a question. But it is an interesting article. Interesting enough that it gets to break all the rules, I guess?


It's a (quite old) community wiki post. These do (and especially did back then) work and are treated differently.


Is there any software that can provide verified, trusted archives of websites?

For example, we can go to the Wayback Machine at archive.org not only to see what a website looked like in the past, but also to prove it to someone (because we implicitly trust The Internet Archive). But the Wayback Machine has deleted sites when a site later changes its robots.txt to exclude it, meaning that the old site REALLY disappears from the web forever.

The difficulty for a trusted archive solution is in proving that the archived pages weren't altered, and that the timestamp of the capture was not altered.

It seems like blockchain would be a big help, and would prevent back-dating future snapshots, but there seem to be a lot of missing pieces still.

Thoughts?


Webrecorder's WACZ signing spec (https://specs.webrecorder.net/wacz-auth/latest) does some of this — authenticating the identity of who archived it and at what time — but the rest of what you're asking for (legitimacy of the content itself) is an unsolved problem as web content isn't all signed by its issuing server.

In some of the case studies Starling (https://www.starlinglab.org/) has published, they've published timestamps of authenticated WACZs to blockchains to prove that they were around at a specific time... More _layers_ of data integrity but not 100% trustless.


Very informative, thanks!


There have been attempts to standardize a way for an HTTPS server to say "Yes, this response really did come from me", but nothing has really been adopted.

https://www.rfc-editor.org/rfc/rfc9421.html

https://httpsig.org/

Without the server participating, best you can do is a LetsEncrypt-style "we made this request from many places and got the same response" statement by a trusted party.

Inspiration: roughtime can be used to piggyback a "proof of known hash at time" mechanism, without blockchain waste. That lets you say "I've had this file since this time".

https://www.imperialviolet.org/2016/09/19/roughtime.html

https://int08h.com/post/to-catch-a-lying-timeserver/

https://blog.cloudflare.com/roughtime/

https://news.ycombinator.com/item?id=12599705
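
To make the roughtime "proof of known hash at time" idea a bit more concrete, here is a hypothetical sketch of the nonce derivation (the wire protocol and client are omitted, and the names are made up):

  /* Derive the Roughtime nonce from the file's hash plus a random blind,
     so the server's signed (nonce, timestamp) response later proves the
     file existed by that time. Sketch only, using OpenSSL primitives. */
  #include <openssl/rand.h>
  #include <openssl/sha.h>
  #include <string.h>

  /* file_hash: SHA-512 of the archived snapshot (64 bytes)
     blind:     64 random bytes, kept alongside the signed response
     nonce:     what actually gets sent to the Roughtime server */
  static int nonce_for_hash(const unsigned char file_hash[64],
                            unsigned char blind[64],
                            unsigned char nonce[64])
  {
    unsigned char buf[128];
    if (RAND_bytes(blind, 64) != 1)
      return 0;                                /* RNG failure */
    memcpy(buf, file_hash, 64);
    memcpy(buf + 64, blind, 64);
    SHA512(buf, sizeof buf, nonce);            /* nonce = H(file_hash || blind) */
    return 1;
  }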


Take a look at singleFile, a project that lets you save an entire webpage. It has an integration for saving the hash of the page on a blockchain. You can set it up between parties who are interested in the provenance and authenticity of the content.


We pull the contents of any publicly posted links and write them onto the big-block bitcoin blockchain: https://home.treechat.ai/quest/8ca85b16-739c-4b7a-8376-38bc0...


I still can't believe how slow MS Word is to load a .docx document of about 150 pages of text. You can watch the page count in the status bar grow over a period of 10 seconds or more as it loads and paginates it.

On the plus side, it's nostalgic and reminds me of the old MS Word 6 on Windows 95 (or Windows 3.1?), so that's nice.


I often wish I had Word from around 2000 back. Back then the software was straightforward and did what it was supposed to do without much fuss. And the speed on modern hardware would be crazy.

The latest Word version does all kinds of weird stuff around formatting and numbering. I often get documents with messed up heading numbers or lists and I have no idea how to fix them. Nothing works.


I'd say that Office 97 was the pinnacle. I think you can still reasonably use it if you happen to have a copy.

This is of course problematic if you receive documents from other users :(


Try reading a 40+ page document with track changes enabled (and 100+ changes) - it pins a full CPU core for 5 seconds when you go to the next page!


It's essentially a giant XML file, so it's not going to win awards for speed or efficiency.


Modern hardware can parse XML with speeds measured in hundreds of megabytes per second.


I predict this won't be popular. E-ink is great for reading a book, especially on a portable battery-powered device, but on a large desk display connected to a general-purpose computer, it doesn't make sense at all.

Slow and inconsistent refresh rates, and limited lifetime cycles mean you could wear it out in a year or two with frequent use.


If you work mainly in text (reading, writing, programming), then e-ink makes a ton of sense over LCD. Low power, easier to read.

For those who use their computers mainly to watch movies and play video games, e-ink would not be ideal.

