Unlocked ring? The ring buffers are lock free. I'm not sure it stands for userspace either, since the ring buffers are shared between kernel and userspace (that's the core of io_uring), so that name seems kind of misleading.
Ring buffers are typically lock free anyway, so that would be a bit redundant. The special part is that the ring is directly accessible from userspace at all; previously, any such ring buffer was internal to the kernel.
The letter u in contexts like these usually represents the Greek letter mu (μ) and, like the SI prefix, indicates "micro". That might not be the case for io_uring, but it's what I'd assumed.
The "u" in "uring" has nothing to do with its size but rather with it being a "userspace ring". The ring buffers used by this particular API reside in userspace rather than in kernel memory.
> The ring buffers used by this particular API reside in userspace rather than in kernel memory.
More specifically, the physical pages backing these ring buffers are pinned and mapped into both address spaces. (The kernel needs to read submissions (SQEs) off the ring and write completions (CQEs) into it; it couldn't do that if the memory were only mapped in userspace or could be paged out.)
This is correct, the "u" is for userspace/usermapped. The memory is actually allocated by the kernel, but it's mapped into the application's address space.
Recently we've started using io_uring for disk access in QuestDB. So far it's used in CSV import, but we'd like to expand it to networking and other disk access use cases. Apart from the performance boost, the beauty of io_uring is that it lets you build an event loop on a single I/O multiplexing mechanism. No need for an ugly combination of epoll and AIO or anything like that: it supports networking, disk access, user events (eventfd), timers, and more.
Thank you for highlighting that; part of the design criteria for io_uring was indeed to be able to do everything. It's even mentioned in the second sentence of the linked write-up: no more "everything is a file... until you need to do IO to it".
"Fortunately, future work may make synchronization nonpinning. And, refactoring internals of the java.io package and implementing OS-level APIs such as io_uring on Linux may reduce the number of capturing operations."
for (i = 0; i < BUFS_IN_GROUP; i++) {
/* add each buffer, using i as the buffer ID */
io_uring_buf_ring_add(br, bufs[i], BUF_SIZE, i,
io_uring_buf_ring_mask(BUFS_IN_GROUP), i);
}
Where did bufs come from? Should that be br.bufs[i]?
The buffers are supplied elsewhere outside that example, just consider it an array of pointers to buffers of size BUF_SIZE. br->bufs[] is the shared space, it holds information on the address/size/id of a given buffer.
could anyone suggest where to learn enough network programming in C/Linux to be able to have the necessary background to start using io_uring?
After working on one of our internal pieces of software, I have a strong hunch that io_uring could boost its throughput, and I'd like to have a PoC, but low-level network programming is definitely outside my areas of expertise.
As someone who has written a fair amount of networking code in C, I'd assert that without an understanding of what's in Beej's guide, it will be nearly impossible to competently use io_uring in a networking context.
One thing is not clear from the website: are the pages in the content section (say https://build-your-own.org/redis/02_intro_sockets ) a shortened version of the pages in the epub/pdf/paperback version?
IMHO, the best resource for this is Unix Network Programming by Stevens. I'd recommend starting with his TCP/IP Illustrated, Vol. 1 first, and then progressing from there.
Other resources are more 'tactical' in nature, while what you need is something more strategic.
Find a tutorial that explains how to implement an async runtime on top of epoll using C++20 coroutines. In that context, it is probably easier to use io_uring correctly than epoll.