What more do you want an OS to do? What do you expect Apple to add to MacOS? Just don't break stuff. Fix bugs. Once in a while some new kind of hardware or network protocol might need to be added. That's it.
Sometimes it returns a static string like `g_str_int`, and sometimes a newly heap-allocated string, such as the one returned by `class_type_array_name(g_str_int, depth)`.
Callers have no way to properly release the memory allocated by this function.
In multi-threaded mode, each thread creates a separate memory pool; in single-threaded mode, a global memory pool is used. You can refer to https://github.com/neocanable/garlic/blob/72357ddbcffdb75641.... The `x_alloc` and `x_alloc_in` calls there indicate where the memory is allocated. When each task ends, the memory allocated in its pool is released, and the cycle repeats.
Many command line tools do not need memory management at all, at least to a first approximation: free nothing and let the OS clean up on process exit. Most libraries can either use an arena internally and copy any values that get returned to the user onto the heap at the boundary, or require the user to create and destroy the arena externally. This can be made ergonomic with one macro that injects an arena argument into function definitions and another that replaces malloc by bumping the local arena data pointer that the prior macro injected.
That might be true, but leaking is neither the most critical nor the hardest-to-find memory management issue, and good luck trying to adapt or even run valgrind on a codebase that mindlessly allocates and leaks everywhere.
It can still be a bug if you use something after you would have freed it because your code isn't meant to be using that object any more. It points to errors in the logic.
This project is my first project written in C. Before this, my C was limited to printf("hello world"). I am very happy because this project made me dare to use pointers to pointers.
> I am always curious how different C programs decide how to manage memory.
At a basic level, you can allocate memory on the stack or on the heap. I will focus on the heap, since that is where you dynamically allocate memory of a size chosen at runtime.
The C programming language does not dictate how you handle memory; you are pretty much on your own. Some C programmers (likely the more inexperienced ones) will malloc individual variables as if they were creating a 'new' instance in a typical OOP language like Java. This can be a telltale sign of a C programmer coming from an OOP background. As they learn and improve their C skills, they realise they should allocate a chunk of memory of a certain type, but they may still be malloc(ing) and free(ing) all over the code, making it difficult to understand what is being used and where, especially if you are looking at code you did not write.
You can also have programs that do not bother free(ing) memory. For example, a simple shell program that just does simple input->process->output and terminates. For these types of programs, just let the OS deal with freeing the memory.
Good C code (in my opinion) uses malloc and free in only a handful of functions. Higher-level allocation schemes can be built on top of them; one example is an arena allocator. Then, if a function may require dynamic memory, you can tell it which allocator to use. Generally speaking, this gives you control. You can build a simple string library or string builder on top of an allocator.
Of course, an allocator does not have to use heap memory; it can hand out stack memory as well.
There are various other patterns to use in the world of memory, especially in C.
When I heard that a friend had bought one of these for a week long canoe trip we were going on I thought it was absurd. The idea of taking such a home comfort on a camping trip went against everything I felt camping stood for. Then I tried it, then I bought one, now I take it on every camping trip I make.
It's so light it adds basically nothing to your carry weight. It takes seconds to put together. It's insanely comfortable. Kicking back in it after a long day's hike or canoe whilst prodding at the camp fire is such a joy.
I put it alongside my thermarest as a thing I never felt I'd ever buy and now use on every trip.
Definitely worth having something other than the ground to sit on after a long day. I have a couple of lightweight three-legged stools (probably not ultralight, but light enough for the camping and hiking I do), although you have to sit just right to not crush your balls.
"A single 1-kW jammer can take down GPS for a 300-nm radius.[...] A CRPA can shrink the effective radius of the 1-kW jammer to 3 nm. The jammer’s area of effectiveness is slashed from 280,000 m² to 28 m²."
An example of the kind of unit confusion that could crash a Mars orbiter?
I thought we were talking about nanometers and square meters here for a second. But this only makes sense if "m²" means square miles and "nm" means nautical miles. How about at least using "mi" for miles to reduce confusion?
True, "nm" initially seemed to be nautical miles, but then this square meters thing appeared. The point is that "m" should be meters, but "mi" would be a more customary abbreviation for miles in the U.S.
Agreed, though there's a space between the number & the unit which generally indicates non-SI units (SI should never have a space). The switch from nautical to statute miles is still really weird though.
> There is a space between the numerical value and unit symbol, even when the value is used in an adjectival sense, except in the case of superscript units for plane angle.
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Hilarious, 3000+ votes for a Stack Overflow question that's not a question. But it is an interesting article. Interesting enough that it gets to break all the rules, I guess?
Is there any software that can provide verified, trusted archives of websites?
For example, we can go to the Wayback Machine at archive.org to not only see what a website looked like in the past, but prove it to someone (because we implicitly trust The Internet Archive). But the Wayback Machine has deleted sites when a site later changes its robots.txt to exclude it, meaning that old site REALLY disappears from the web forever.
The difficulty for a trusted archive solution is in proving that the archived pages weren't altered, and that the timestamp of the capture was not altered.
It seems like blockchain would be a big help, and would prevent back-dating future snapshots, but there seem to be a lot of missing pieces still.
Webrecorder's WACZ signing spec (https://specs.webrecorder.net/wacz-auth/latest) does some of this — authenticating the identity of who archived it and at what time — but the rest of what you're asking for (legitimacy of the content itself) is an unsolved problem as web content isn't all signed by its issuing server.
In some of the case studies Starling (https://www.starlinglab.org/) has published, they've published timestamps of authenticated WACZs to blockchains to prove that they were around at a specific time... More _layers_ of data integrity but not 100% trustless.
There have been attempts to standardize a way for an HTTPS server to say "Yes, this response really did come from me", but nothing has really been adopted.
Without the server participating, best you can do is a LetsEncrypt-style "we made this request from many places and got the same response" statement by a trusted party.
Inspiration: roughtime can be used to piggyback a "proof of known hash at time" mechanism, without blockchain waste. That lets you say "I've had this file since this time".
Take a look at SingleFile, a project that lets you save an entire webpage. It has an integration for saving the hash of the page on a blockchain. You can set it up between parties who are interested in the provenance and authenticity of the page.
I still can't believe how slow MS Word is to load a .docx document of about 150 pages of text, you can watch the page count in the status bar grow over a period of 10 seconds or more as it loads/paginates it.
On the plus side, it's nostalgic and reminds me of the old MS Word 6 on Windows 95 (or Windows 3.1?), so that's nice.
I often wish I had the Word from around 2000 back. Back then the software was straightforward and did what it was supposed to do without much fuss. And the speed on modern hardware would be crazy.
The latest Word version does all kinds of weird stuff around formatting and numbering. I often get documents with messed up heading numbers or lists and I have no idea how to fix them. Nothing works.
I predict this won't be popular. E-ink is great for reading a book, especially on a portable battery-powered device, but on a large desk display connected to a general-purpose computer, it doesn't make sense at all.
The refresh rate is slow and inconsistent, and the limited lifetime cycles mean you could wear it out in a year or two with frequent use.