Hacker News | BowBun's comments

Then go bet on it! Why are you telling us?

To fake out the markets

that's Pile!

> Who is forced to use it? Just use X11, as you said (many times) you do already.

This is my understanding of his actual concern - Linux corps are pushing Wayland as a replacement for X11 when it is full of issues.

Anecdotally my experience was the same. I'm a dev so I'm fine in a terminal, but trying to switch to KDE actually sent me BACK to Windows. Basic windowing stuff just does not work, and like the OP says, tons of stutters and crashes for a simple 2-monitor setup. Even something as simple as alt-tabbing lagged for seconds on an overpowered machine. Just does not feel like polished software which is a huge reputational risk for Linux right now.


Anecdotally, my experience with Wayland has been a lot better than with X11. I have been on Wayland for years, I can't remember the last time I had an issue (running Sway).

Exactly. Ubuntu 24.04 LTS with an Intel integrated GPU + Wayland gives me zero problems, even running 4K at 120 Hz with 150% scaling. Chrome / VS Code / Zed / RStudio / YouTube 4K60, it just works.

Edit this is running a 32" 2160p120 (4k) monitor alongside a 24" 1080p144 monitor.


Same here; there are some pain points with sway (notably, screen sharing is per-display only, and DisplayLink support and screen mirroring are a pain). Most of these are IME a worthwhile tradeoff, however. Sway has also been astoundingly stable (compared to GNOME or KDE).


Thank you for pointing that out, I'm looking forward to this release a lot then

> I can't remember the last time I had an issue

Depending on your workflows the comment just described three issues


KDE has been stable too, so much more than KDE on X11.

Same here. Wayland has been fine. (Hyprland)

Me too. I just swapped waybar etc for the dank shell etc and love the setup. It feels polished now

I suspect part of that is the Xorg maintainers (who are also behind Wayland efforts) are actively trying to kill it and make it as unbearable as possible

I'm still using Xorg after all these years, on a laptop with 150% scaling, which I occasionally plug into an external monitor with 100% scaling. Somewhat surprisingly, it works great. (Cinnamon desktop, Ryzen 7840u integrated graphics. And also a desktop machine with Radeon RX 6800XT, but it's not surprising that still works great.)

It's all open-source. If you think the maintainers are trying to sabotage the codebase, you have the freedom to fork it.

I don’t get all of what’s going on, but from the outside it seems like the XLibre guys got a lot of negative attention for doing that.

If you don't know what's going on, why comment?

A guy decided, after getting all his patches rejected because they caused tests to fail, didn't compile, etc., that the problem was everyone else, and forked Xorg.

He then announced that the problem wasn't his code that didn't compile but DEI, and based the entire fork around being a political conservative.

Everything I've seen written by him shows him to be insufferable; that's where the negative attention comes from.


There are a lot of distros that have xlibre packages for something that ostensibly doesn't compile.

I wouldn't trust the reason given by the people who have said that they're trying to kill Xorg for why they're rejecting patches from someone trying to improve Xorg


> There are a lot of distros that have xlibre packages for something that ostensibly doesn't compile.

No one says xlibre doesn't compile, but good attempt at a distraction. Have you considered invading a country as an alternative way to distract from terrible views?


> No one says xlibre doesn't compile

>> A guy decided that after getting all his patches rejected because they cause tests to fail, doesn't compile, etc. that the problem is everyone else and decided to fork XOrg.

Emphasis mine, words yours.


Yeah, some submitted patches failed to compile. Others compiled and failed tests.

Not the same as XLibre doesn't compile.


Wow, this really shows who's politically motivated, and like it or not, the XLibre folks probably felt the same way. Some people can't sleep or chill if the world doesn't match their worldview.

"... generic human experiment ... creates a new humanoid race ... toxic spike protein ..." - Enrico Weigelt on LKML

"... insane and technically incorrect ... idiotic lies ... you don't know what you are talking about ... SHUT THE HELL UP ..." - Linus Torvalds

https://lkml.org/lkml/2021/6/10/957

It's not just his code.

The COVID conspiracy theories Enrico Weigelt pushes are riddled with bugs, logical errors, and security holes, and don't compile or pass tests either.

Linus already reviewed both the code and the reasoning, and rejected them for failing basic correctness.


This is indeed not at all about his code. I don't care what he thinks of vaccines and COVID - I just as much don't care what Linus Torvalds thinks about these things. They are damn programmers. Their business is to ship reliable, usable, secure software.

"If you don't like the direction of a multi-decade-long, hundreds-of-man-years, deeply esoteric project, you have the freedom to go in, fork it, and maintain it"

is the most technically true, practically meaningless argument in FOSS


But it happens successfully.

The code base is Xorg rather than XFree86 because of one such fork.

GCC went through the EGCS fork.

OpenOffice became LibreOffice in a fork.

When leadership of a project fails to keep the volunteers behind them such forks happen.


And? I'm tired of thoughtless drive-by comments pointing out problems with a given solution without proposing any alternatives, which tends to be a tacit admission that there is no better solution. If you think you have a better solution, let's hear it.

>I'm tired of thoughtless drive-by comments pointing out problems with a given solution

And? Fortunately, free speech and criticism doesn't stop when someone is tired of hearing it.

The alternative doesn't need to be some new solution. A course reversal or change on the existing ones is enough. In which case the criticism already highlights the solution. Besides, the first step of fixing a problem is identifying it.


> If you think the maintainers are trying to sabotage the codebase, you have the freedom to fork it.

But do you have the skill to actually maintain that fork? Do you have the time to keep it going?


Sucks for you, but you can't then turn around and expect someone else to invest them for you when they don't want to.

We all need to decide where to spend our efforts. If you decide that maintaining a fork isn't worth your time, then that's a revelation of your own preferences.

What a flippant cliché. Why don't you put as much time, effort, and thought into your comments, and money into supporting open source developers, as you demand other people put into forking code bases and rearchitecting enormous, monolithic, socially and economically entrenched pieces of software without getting paid for their time?

If you're going to criticize, then at least make some constructive comments about how you think they SHOULD do it instead of just telling them to fork off.

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

https://donhopkins.com/home/archive/NeWS/uwm.extensions.txt

  Date: Mon, 23 Feb 87 18:31:00 EST
  From: Don Hopkins <brillig.umd.edu!don@harvard>
  To: cartan!weyl.Berkeley.EDU!rusty@ucbvax.berkeley.edu
  Cc: xpert@athena.mit.edu
  Subject:  Uwm extensions, perhaps?
[...] I see just the same problem with XToolKit. I would like to see the ToolKit as a client that you would normally run on the same machine as the server, for speed. Interactive widgets would be much more interactive, you wouldn't have to have a copy of the whole library in every client, and there would be just one client to configure. The big question is how do your clients communicate with it? Are the facilities in X11 sufficient? Or would it be a good idea to adopt some other standard for communication between clients? At the X conference, it was said that the X11 server should be used by clients to rendezvous with each other, but not as a primary means of communication. Why is that?

Setting a standard on any kind of key or mouse bindings would be evil. The window manager should be as transparent as possible. It solves lots of problems for it to be able to send any event to the clients. For example, how about function to quote an event that the window manager would normally intercept, and send it on?

Perhaps the window manager is the place to put the ToolKit?

-Don

https://groups.google.com/g/comp.windows.x/c/qJO5IgI_7HU/m/J...

On September 19, 1989, Don Hopkins wrote on xpert@athena:

[...] I think it's a pretty good idea to have the window manager, or some other process running close to the server, handle all the menus. Window managment and menu managment are separate functions, but it would be a real performance win for the window and the menu manager to reside in the same process. There should be options to deactivate either type of managment, so you could run, say, a motif window manager, and an open look menu manager at the same time. But I think that in most cases you'd want the uniform user interface, and the better performance, that you'd get by having both in one process. I think it would be possible to implement something like this with the NDE window manager in X11/NeWS. It's written in object oriented PostScript, based on the tNt toolkit, and runs as a light weight processes inside the NeWS server. This way, selecting from a menu that invokes a window managment function only involves one process (the xnews server), instead of three (the x server and the two "outboard" managers), with all the associated overhead of paging, ipc, and context switching. [...]


I second this. I had issues years ago, but those have mostly been fixed.

Additionally, the Steam Deck ships with Wayland by default. Hundreds of thousands of gamers are stress-testing it without any complaint that I'm aware of.

Games aren't exactly the best stress test for a windowing system. Most (if not all) run in full-screen mode and don't really exercise it much after launch. And that's not what desktop computing is about. You want to run multiple programs, and you want them to integrate with each other. But games don't need any of this.

> you want them to integrate with each other.

Do you? Not going to lie, I'm perfectly happy just using the apps I want to use and having none of them talk to each other. 90% of my use is covered by Vivaldi (browser stuff which is most stuff these days) and Kitty (Neovim, random TUIs and utilities). The few other apps I have are Steam, Krita and Blender, which are all worlds unto themselves and have no need to integrate with anything.


I don't disagree in principle; my workflow is pretty similar, and most of the integration goes through kitty for me too. But I also need screenshots and screencasts, which need special permission in Wayland because reasons. Fine for me, but I can imagine some people having workflows built around specific tools that might or might not work with Wayland, and apparently the Wayland team thinks it's not their problem.

And I personally need KiCad; it doesn't support Wayland at all because of some mouse-related stuff. Again, the Wayland team thinks it's not their problem.

Then I had a lot of issues with a graphics tablet. Yes, it's a cheap chinesium knock-off, but it does work on X11. Wayland, no chance, and it's obviously not their problem.

So I dropped Wayland altogether and went to X11/herbstluftwm. Was a few years ago, but I didn't bother to go back since - why should I? These aren't their problem, but now I don't use Wayland anymore, so these aren't my problem either.

What I'm trying to say is, "hundreds of thousands of gamers stress-testing" says very little about usability. Yes, the graphics part is excellent, nobody is denying that. There is more to a WM than graphics.


There is a "Desktop mode" that I, at least, use more than the handheld mode. Not sure if it's running Wayland though.

Desktop mode on the Steam Deck uses X11. I think that's why the brightness control is fucked up (ever notice how the brightness is always 40% when you switch to desktop mode?). You can manually switch it to Wayland, but Steam input is broken under Wayland (or at least it was last time I checked, which is admittedly half a year ago or something).

> input is broken under Wayland

Drop the ‘Steam’, it's cleaner. Wayland's raison d'être is to push frame buffers without tearing. Input is an afterthought.


> Input is an afterthought

This. You need more than graphics to have a windowing system. The Wayland team threw out X11, did graphics, and left the rest for others to figure out.


That's odd, I'm using Wayland on my desktop and, for example, Japanese input works 問題ない. Then again, I haven't tried every possible input method/peripheral in existence, so I may just be the one in [some arbitrary large number] who lucked out.

Running fullscreen games on a single fixed hardware configuration isn't exactly 'stress-testing', it just tells you that a single code path works.


That post is 3 years old, so basically around 1 year into the Steam Deck's release.

And yet, Cities Skylines still (last tried: about 2 months ago) crashes for me when I try to load it in Wayland on Fedora, which has removed Xorg from its updates.

Wayland has broken dozens of my Steam games.


I just played Skylines last night via Proton-GE. AMD GPU. Fedora 43. Gnome.

CS1, right? If so, can you please detail what you might have done differently? Load options? Some package or another I might be missing? All I know is that with Xorg it worked perfectly; I upgraded Fedora, and now that I only have Wayland, whatever I was doing before no longer works. I'd be grateful for the help.

I am using a system-installed Steam, but sometimes a Steam Flatpak can help with troubleshooting, because it bundles components inside of the flatpak. Running games this way may give you a 5% performance penalty, but it's a good way to see if you have a packaging issue (missing or misconfigured dependencies).

Also, are you running the Linux-native one or the Proton version?

I run everything through Steam with the proton compatibility layer forced. It's a steam client option somewhere.

I think there's an app called ProtonQT or something like that. It will let you easily download the latest Proton-GE version. Once it's downloaded and installed you will need to restart the Steam client, then restart it again after selecting the new Proton-GE version as the default.

---

https://www.protondb.com/app/255710


I'll try again. I tried both native Linux and a dozen Proton builds, and none would load, even with no mods or DLC. I'll try again with the latest, and if that doesn't work, I'll try the flatpak instead of via RPM Fusion.

There is a note on ProtonDB about needing the most recent Proton-GE release. Use the ProtonQT flatpak; it should help you install and maintain Proton-GE versions and will get you the latest updates as they release.

The Steam Deck loads games into some kind of nested X11 renderer-in-a-window, I think. If for no other reason than to try to avoid Wayland’s extra input latency? Dunno. Maybe you lost some component it needs to work, if regular Steam also does that.

Regular Steam does not do that. At this point, on most hardware (minus SOME Nvidia, I think older stuff like the 2XXX series), you should have less latency on Wayland than on X11.

I was under the impression that some of the anti-screen-tearing and other features in Wayland unavoidably set a higher (and, higher-enough to be noticeable in some contexts) floor on latency, though, because of how those features necessarily work. I don't mean drivers.

Variable Refresh Rate in GNOME. No screen tearing, no input latency hit for full-screen games. To make this work correctly, I think regardless of OS, you need to cap the frame rate to something just inside the max for the monitor. A lot of games these days have fps limiters. Let's say your monitor is 144 Hz. You'd want to cap fps to something like 140. That's going to prevent any screen jank or input latency.
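For what it's worth, one way to apply that cap outside the game is MangoHud's limiter. A sketch (assumes MangoHud is installed; the 140 value follows the 144 Hz example above):

```shell
# ~/.config/MangoHud/MangoHud.conf
# Cap every game MangoHud hooks to just under the monitor's 144 Hz max,
# keeping the frame rate inside the VRR window.
fps_limit=140
```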

It ships with Wayland, but it does almost everything with X(wayland)

Wine 9.22+ has the native Wayland backend by default. Now Xwayland is barely needed.

As someone who uses my steam deck as a workstation too, I really really wish this were fully true. The desktop is still X based, and that suuuccckkksss.

The next SteamOS release will use Wayland by default for desktop mode, too:

https://steamcommunity.com/games/1675200/announcements/detai...


The desktop sure, but the primary handheld mode uses Gamescope which is a Wayland-based session.

It's true that Gamescope is a Wayland compositor, but it does not support Wayland clients ("native Wayland programs" if you're unfamiliar with *nix terminology). In handheld mode, everything you see on your screen is an X11 client running under Xwayland. I have no interest in arguing over whether X11 or Wayland is better (I've used both and both work fine for me) but I would not use the Steam Deck to argue that Wayland is being stress tested by hundreds of thousands of gamers when none of their games are using Wayland.

Using Xwayland is using Wayland. It's entirely a moot point anyway, since Wine/Proton supports a native Wayland backend these days anyhow.

There’s a gradient. For example, if I ran rootful Xwayland, ran an X11 DE inside that, and then solely used X11 software, it would technically be using Wayland, but it would not be what most people mean when they discuss the Wayland transition on the Linux desktop. Similarly, arguing that Wayland is being battle-tested by hundreds of thousands of gamers by citing a single-tasking, gaming-focused Wayland compositor that only supports X11 clients is not what most people mean when discussing the Wayland transition either.

Wine/Proton may support a native Wayland backend but it’s not currently being used on the Steam Deck.


I've had Bazzite on mine for a year, and it's Wayland by default.

Funny that you mention multi-monitor, since it's one of the reasons I eventually moved to Wayland. The only way to support monitors with different DPIs in X was janky scaling or even jankier multiple X servers.

I don't use KDE (or GNOME anymore) but while I had to deal with a lot of initial speedbumps a couple years ago, these days instead of a full DE, I'm using a Niri setup and it's worked out great for me.

For my laptop, I have my own monitor-detection/wl-mirror script for example that is faster and more reliable for plugging into projectors/meeting room HDMI than even my old Macs.


The funny thing about this myth is that Wayland does not even try to support mixed-DPI setups; the only thing it supports is, as you put it, janky scaling. Not that X is any better in the end, but at least it has the data available if any application wants to try to do correct mixed-DPI rendering (nobody does).

http://wok.oblomov.eu/tecnologia/mixed-dpi-x11/

So in yet another case of worse is better, Wayland has the reputation of supporting mixed-DPI environments, not because it has any support for actual mixed DPI, but because it is better at faking it (fractional scaling).
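For reference, the "faking it" on plain X11 usually looks something like the following xrandr incantation, e.g. in an ~/.xprofile. This is only a sketch: the output names, resolutions, and positions are made up; check `xrandr --query` for yours.

```shell
# HiDPI laptop panel at native resolution, plus a 1x external monitor
# upscaled 2x2 so UI elements come out at roughly the same physical size.
# This is the "janky scaling": one global DPI, per-output transforms.
xrandr --output eDP-1 --auto --pos 0x0
xrandr --output DP-1  --auto --scale 2x2 --pos 2880x0
```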


Myth or not, it is absolutely much better on Wayland. I really don't care to, or know how to, tweak Linux, so I've been using stock Fedora installs for years. I also have 4 screens. When Fedora switched to Wayland it got much better, and it keeps getting better.

Does anyone have links on how to set up multi monitor on Sway?

I use a docked ThinkPad with the lid closed and two external monitors. Here are my config bits.

  set $laptop eDP-1
  set $landscape 'Hewlett Packard HP ZR24w CNT037144C'
  set $portrait 'Hewlett Packard HP ZR24w CNT03512JN'
  bindswitch --reload --locked lid:on output $laptop disable
  bindswitch --reload --locked lid:off output $laptop enable
  
  ### Output configuration
  output $laptop bg $HOME/pictures/wallpaper/1529004448340.jpg fill
  output $landscape bg $HOME/pictures/wallpaper/1529004448340.jpg fill
  output $portrait bg $HOME/pictures/wallpaper/portrait/DYabJ0FV4AACG69.jpg fill 
  # pos args are x coords and y coords, transform is degrees of rotation counter-clockwise
  # set $portrait as left monitor and rotate it counterclockwise
  output $portrait pos 0 1200 transform 270

The default config file explains some common things you might want to do. E.g. left or right side and scaling factor.
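If it helps, the output identifiers for a config like the one above can be discovered at runtime. These commands assume a running sway session; the `DP-1` name is illustrative:

```shell
# List connected outputs with their make/model/serial strings, which is
# where identifiers like 'Hewlett Packard HP ZR24w CNT037144C' come from:
swaymsg -t get_outputs

# Try a setting live before writing it into the config file:
swaymsg output DP-1 pos 0 1200 transform 270
```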

> Even something as simple as alt-tabbing lagged for seconds on an overpowered machine.

This may not be KDE's fault; I tracked these kinds of issues down to some bad tunable defaults.

I came up with this:

    $ cat /etc/sysctl.d/50-usb-responsiveness.conf
    #
    # Attempt to keep large USB transfers from locking the system (kswapd0)
    #
    vm.swappiness = 1
    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 5
    vm.extfrag_threshold = 1000
    vm.compaction_proactiveness = 0
    vm.vfs_cache_pressure = 200
    # FIXME? 64K too big?
    vm.page-cluster = 16
I have fast everything: an onboard NVMe SSD plus others in Thunderbolt 4 enclosures, and 32GB of RAM on my 12th-gen i7 with 20 (6+14) cores. There should have been no reason for any stuttering and/or Alt-Tab slowness while doing large file copies. I finally got fed up, did some research and experimentation, arrived at the above, and it hasn't happened since.

YMMV, but it's worth a try.

(Oh, and on-topic, I've had to try Wayland (vs. X11) on my KDE desktop 'cause it seems to handle switching monitors when I go from home to work better; jury's still out if I'm keeping it)


You really only need dirty_ratio/bytes and dirty_background_ratio/bytes set to something lower than default. It also makes your progress bars show values closer to reality, especially when copying from fast to slow media.
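As a sketch of that minimal version (the file name and byte values here are illustrative, and note that the `*_bytes` knobs override their `*_ratio` counterparts when set):

```shell
# /etc/sysctl.d/99-writeback.conf
# Start background writeback early and cap dirty pages hard, so a big
# copy to slow media can't buffer gigabytes and stall the desktop.
vm.dirty_background_bytes = 67108864    # 64 MiB
vm.dirty_bytes = 268435456              # 256 MiB
# Apply without rebooting:  sudo sysctl --system
```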

Some distros already do set lower defaults, e.g. pop os:

https://github.com/pop-os/default-settings/blob/master_noble...

Bazzite: https://github.com/ublue-os/bazzite/blob/main/system_files/d...


> You really only need dirty_ratio/bytes and dirty_background_ratio/bytes set to something lower than default.

The vm.swappiness=1 was very necessary for me as well, and made as much difference as the dirty settings you mentioned.

I usually run Linus' master kernels (as I look for regressions in certain subsystems) and I know there's been some recent changes to the MM subsystem so this may explain some of the necessity for me.


If you don't mind me asking...are you using nVidia by chance? Have you tried something besides KDE? How long ago was this?

I've read about some terrible experiences with Wayland and I've just never had any of these problems in nearly a decade of using it almost every day (sway was a little rough around the edges in the first year it came out, but even then it fixed screen tearing, which I was never able to entirely eliminate with Xorg). The two things I've always stayed away from though is KDE, and nVidia.

I'm just trying to figure out why there's such a discrepancy between my experiences and what I read online from time to time.


Happy to share more info, I truly want Linux to be as good as the competition! I had no true technical complaints, only UX ones.

Copying from another reply:

> I'm on a Radeon card, 2 monitors of different resolutions, 1xDP 1xHDMI. I check my AMD drivers and upgrade those regularly (as well as BIOS/chip firmware).

This experience was about 3 months ago, and I only tried KDE this time around. I'm not a huge Linux person, so I did some searching and it seemed that (despite some very spicy discussions), KDE was agreed to be pretty bleeding-edge for window management and all the nice features Windows users expect.

I've also noticed vastly different anecdotes, which sucks for maintainers and users alike! Maybe KDE needs a telemetry push since the desktop env specifically targets users who are less willing to troubleshoot at such a low level.


Comments like this make me feel like we are living in different worlds. I have KDE/Wayland on multi-head machines with different DPIs, and on laptops. KDE has been the smoothest, most reasonable desktop for a long time: I play games and they just work, I can make Zoom calls, they implemented device recovery. How are you experiencing this? Are you rendering in software?

Honestly I'm a regular user on my home PC which is what I was talking about in my comment. Discord, music player, games through GeForce Now/Steam, some coding. I went through a default install, didn't really tweak much and was hoping KDE would just do its thing.

I'm on a Radeon card, 2 monitors of different resolutions, 1xDP 1xHDMI. I check my AMD drivers and upgrade those regularly (as well as BIOS/chip firmware).

I did try to do all the basic troubleshooting steps and making sure all my software was up-to-date. Just felt like the default state without tuning did not meet what Windows' defaults give me.


This seems more like a KDE thing than a Wayland thing. At least for me, on GNOME, Wayland is strictly better. And the newer Wayland-only desktops like Niri are arguably better than that.

I've had an interesting experience creating a Wayland compatibility layer for Bitwig, especially as I use Niri as the tiling window manager; it is even harder to use as a base, since it is less supportive of X11 than other WMs like Hyprland.

This may be niche, but DAWs that support Linux are very rare, especially on this stack. It might be a stretch to say the company behind Bitwig is punishing Wayland users, and I am sure they don't have the personnel for it, but it is a legitimate issue that companies will most likely be 10 years late to the modernization onto Wayland.

Anyways, I was able to configure it with a specific flake configuration. I had issues with third party windows, which was more of an issue with the floating nature of Niri, since Gnome with Wayland displayed external VSTs fine.

You can find my repository here if interested. It consists of a few files, and I made it easier to use with justfiles. https://github.com/ArikRahman/Nixwig


Pretty much every vst, clap, etc plugin on Linux requires X calls because of how windows get created and then managed by the host.

I've moved to running Bitwig in an Ubuntu distrobox container. Hope you're enjoying 6, it seems they fixed a lot with the piano roll.

I had to set mouse warping off in my tiling manager for yabridge/wine plugins.


I assume you, a technical person, made sure to help the people giving you the software for free to diagnose what is obviously one or more bugs?

I did not. While I am open to switching to Linux desktop for my primary PC, I'm not open to troubleshooting that level of software in my free time. As a FOSS maintainer I have the utmost respect and admiration for the effort and work done so far. My anecdote was not meant to discount the work, but to highlight friction points which prevented me from switching over.

> Even something as simple as alt-tabbing lagged for seconds on an overpowered machine.

In what way? If there’s a delay for the task switching menu to close after alt-tabbing (~500ms) this might be due to a kde animation default (it really tripped me up, I’m a rapid window switcher). I can share the fix once I get on my kde machine.


The specific thing I noticed was that, when pressing alt-tab the very first time, there was a noticeable, variable delay for the actual window previews to appear. I don't care as much about the window previews, but the delay in feedback to know which window I was potentially focusing on was painful. I also alt-tab like a madman, for better or for worse. Testing this now on Windows, after I press tab there is an _immediate_ rendering of the previews, 0 (noticeable) delay. On KDE, I'm not exaggerating, I could count 1-2 seconds until I saw something on screen, even when my machine was mostly idle (Discord+music player running).

Does disabling “fade popups in and out” help? Settings -> Animations

Wayland is so much better than X11. Sure, there might be bugs in Wayland that are not in X11, but in general Wayland is better.

> trying to switch to KDE actually sent me BACK to Windows.

....uh, why not use Cinnamon or MATE or Gnome or XFCE?

Conflating KDE with desktop Linux is strange

I say this as someone who suffered the same problems trying to use KDE (frequent window freezes requiring a logout) and just swapped to Cinnamon. It's two mouse clicks at the login screen.


As someone who was trying to switch to Linux desktop for the first time, I didn't research all the many options available and didn't try multiple ones. I read that KDE was kind of 'bleeding edge' (I know this is a hot topic) and so that's the one I decided to give a try. I tried to get used to the quirks for a few weeks rather than switch to something else.

As much as I dislike M$, at least Windows works, and it works for games and graphics. When I need text or computation without a UI, I use Linux. Similar to the argument in the article about using what works: I use what works.

I got a gaming computer during covid and initially ran Windows on it. It had so many problems with the audio and random crashes I eventually gave up and switched to Linux. Only loss was the newer Blizzard games, all the Steam games worked.

Idk - I’ve been trying out Linux gaming on Bazzite and everything seems to just work? It’s been a basically flawless experience.

I dual boot Linux + Windows (technically triple boot - I have a third drive with a different distro for dev work) and I haven’t needed to boot to windows a single time in the ~5 months I’ve been testing out Linux gaming. Not a single game has required any tweaking with proton settings either. My plan is to remove Windows entirely if I make it through the year without needing it.

No issues with drivers, no issues with peripherals (Wired speakers, Bluetooth headset, usb headset, webcam), with 3 displays at different resolutions/framerates/orientations. Running Ryzen 9800x3d and an RTX 4070ti.

Games I’ve played on Linux cover a pretty wide spectrum too.

- Arc Raiders

- Stalker 2

- Kingdom come deliverance 2

- Doom the dark ages

- Timberborn

- Pacific Drive

- Baldur's Gate 3

- Disco Elysium

- Peak

- Alan Wake 2

- RV there yet

- Yapyap

- Pentinence

and probably others I’m forgetting.

I honestly wasn’t expecting the experience to be this smooth. Windows’ days as the gaming default feel numbered.


ok im going to try linux again lol, hopefully this time it sticks. just need to find an image program that resembles paint as closely as possible.

Yeah, much as it sucks, I went back to Windows+WSL on my laptop. It just straight-up works better. I really wish it didn't, but that's the reality.

> at least windows works and it works for games and graphics

It doesn't, actually. I vividly remember trying and failing to play some old games on Windows. GTA San Andreas, I think. Didn't even launch due to missing DirectX libraries or whatever. I hunted down and installed all the redistributables and DLLs. Still didn't run.

So much for the fabled backwards compatibility of Windows. Microsoft clearly does not give a shit anymore. Wouldn't be surprised if Linux with Proton becomes better at running games than Windows one day.


In 2008, I remember playing starcraft over LAN with my roommate. It played better on Wine/Ubuntu than it did on his Vista machine (and unrelatedly but hilariously, in the middle of the game his computer gave him a countdown to reboot with no option to cancel it)

This is classic engineering missing the forest for the trees.

The answer to your question is: same as we always did before! Do you talk to friends? Colleagues? Family? You definitely chat with us here on HN. All of these people share things with you constantly.

There's a funny obsession in tech circles to gather all the information they can as quick as possible. I much prefer to optimize for the quality of information I'm ingesting.


> The answer to your question is: same as we always did before! Do you talk to friends? Colleagues? Family? You definitely chat with us here on HN. All of these people share things with you constantly.

So, in your opinion, we can cut out the reliance on third parties by relying on third parties?


By posting comments on this site, you are relinquishing your right to that content. It belongs to YC and it is theirs to enforce, not yours. https://www.ycombinator.com/legal/

There is no such thing at https://news.ycombinator.com/ when you create your account.

Max Schrems would like a word

Is this legal advice?

Do you want it to be? I think it's safe to assume that most comments are _not_ legal advice.

We have LLMs and links to TOS, this is easily answerable by _anyone_ on the internet at this point.

Comments and posts are defined as user-generated content; you have no right to their privacy or control in any capacity once you post them - https://www.ycombinator.com/legal/

YC in theory has the right to go after unauthorized 3rd parties scraping this data. But YC funds startups and is deeply invested in the AI space. Why on Earth would they do that?


the implication was that training a model doesn't seem to abide by the TOS

I feel like the difference is minimal, if not entirely dismissible. Code in this sense is just a representation of the same information someone would write in an .md file. The resolution changes, and that's where both detail and context are lost.

I'm not against TDD or verification-first development, but I don't think writing that as code is the end-goal. I'll concede that there's millions of lines of tests that already exist, so we should be using those as a foundation while everything else catches up.


Tests (and type-checkers, linters, formal specs, etc.) ground the model in reality: they show it that it got something wrong (without needing a human in the loop). It's empiricism, "nullius in verba"; the scientific approach, which led to remarkable advances in a few hundred years that over a thousand years of ungrounded philosophy couldn't achieve.
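To make the "grounding without a human in the loop" idea concrete, here's a toy sketch of a generate-and-test loop. The `propose_fix` stub stands in for an actual LLM call (its name and behavior are made up for illustration); the point is only that the test failure message, not chain-of-thought, is what steers the next attempt:

```python
# Toy sketch: tests ground a code-generating model in reality.
# `propose_fix` is a stand-in for an LLM call, not a real API.

def run_tests(code):
    """Execute a candidate implementation and report (passed, failure_message)."""
    env = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5, "add(2, 3) should be 5"
        return True, ""
    except AssertionError as e:
        return False, str(e)

def propose_fix(feedback):
    """Stand-in for the model: the failure message steers the next candidate."""
    if "should be 5" in feedback:
        return "def add(a, b):\n    return a + b"   # corrected candidate
    return "def add(a, b):\n    return a - b"       # first (wrong) guess

def agent_loop(max_iters=3):
    """Alternate generation and testing until the tests pass or we give up."""
    feedback = ""
    for _ in range(max_iters):
        candidate = propose_fix(feedback)
        passed, feedback = run_tests(candidate)
        if passed:
            return candidate
    return None

print(agent_loop() is not None)  # True: a passing candidate is found
```

The first candidate fails, the failure message flows back, and the second attempt passes; swap the stub for a real model call and the structure is the same.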

The scientific approach is not only or primarily empiricism. We didn't test our way to understanding. The scientific approach starts with a theory that does its best to explain some phenomenon. Then the theory is criticized by experts. Finally, if it seems to be a promising theory, tests are constructed. The tests can help verify the theory, but it is the theory that provides the explanation, which is the important part. Once we have explanation, we have understanding, which allows us to play around with the model to come up with new things, diagnose problems, etc.

The scientific approach is theory driven, not test driven. Understanding (and the power that gives us) is the goal.


> The scientific approach starts with a theory that does its best to explain some phenomenon

At the risk of stretching the analogy, the LLM's internal representation is that theory: gradient-descent has tried to "explain" its input corpus (+ RL fine-tuning), which will likely contain relevant source code, documentation, papers, etc. to our problem.

I'd also say that a piece of software is a theory too (quite literally, if we follow Curry-Howard). A piece of software generated by an LLM is a more-specific, more-explicit subset of its internal NN model.

Tests, and other real CLI interactions, allow the model to find out that it's wrong (~empiricism); compared to going round and round in chain-of-thought (~philosophy).

Of course, test failures don't tell us how to make them pass; the same way that unexpected experimental/observational results don't tell us what an appropriate explanation/theory should be (see: dark matter, dark energy, etc.!)


The AI is just pattern matching. Vibing is not understanding, whether done by humans or machines. Vibe programmers (of which there are many) make a mess of the codebase, piling on patch after patch. But they get the tests to pass!

Vibing gives you something like the geocentric model of the solar system. It kind of works, but it's much more complicated and hard to work with.


Nice analogy *

I guess the current wave is going to give us Software Development Epicycles (SDEC?)

* All analogies are "wrong", some analogies are useful


The theory still emanated from actual observations, didn't it?

It did but they were meaningless without a human intellect trying to make sense of them.

No, the theory comes from the author's knowledge, culture, and inclinations, not from the facts.

Obviously the author has to do much work in selecting the correct bits from this baggage to get a structure that makes useful predictions, that is to say, predictions that reproduce observable facts. But ultimately the theory comes from the author, not from the facts; it would be hard to imagine how one could come up with a theory that doesn't fit all the facts known to the author if the theory truly "emanated" from the facts in any sense strict enough to matter.


It most certainly is not. All your tests are doing is seeding the context with tokens that increase the probability of tokens related to solving the problem being selected next. One small problem: if the dataset doesn't have sufficiently well-represented answers to the specific problem, no amount of finessing the probability of token selection is going to lead to LLMs solving the problem. The scientific method is grounded in the ability to reason, not probabilistically retrieve random words that are statistically highly correlated with appearing near other words.

This only holds if you understand what's in the tests, and the tests are realistic. The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.

> The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.

I disagree. Having tests (even if the LLM wrote them itself!) gives the model some grounding, and exposes some of its inconsistencies. LLMs are not logically-omniscient; they can "change their minds" (next-token probabilities) when confronted with evidence (e.g. test failure messages). Chain-of-thought allows more computation to happen; but it doesn't give the model any extra evidence (i.e. Shannon information; outcomes that are surprising, given its prior probabilities).
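The "Shannon information" point can be made concrete: the information an outcome carries is its surprisal, -log2(p). An outcome the model already considered likely carries almost no information, while an unexpected test failure carries a lot — which is why real test results add something that more chain-of-thought cannot. A minimal illustration:

```python
import math

def surprisal(p):
    """Shannon information content (in bits) of an outcome with probability p."""
    return -math.log2(p)

# An outcome the model treated as a coin flip carries exactly one bit...
print(surprisal(0.5))       # 1.0
# ...while a test failure it gave 1-in-1024 odds carries ten.
print(surprisal(2 ** -10))  # 10.0
```

Chain-of-thought reshuffles the model's existing probabilities; a surprising test result is the only thing in the loop that delivers bits it didn't already have.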


I disagree to some degree. Tests have value even beyond whether they test the right thing. At the very least they show something worked and now doesn't work, or vice versa. That has value in itself.

This assumes that tests are realistic, which for the most part they are not.

Say you describe your kitchen as “I want a kitchen” - where are the knives? Where’s the stove? Answer: you abdicated control over those details, so it’s wherever the stochastic parrot decided to put them, which may or may not be where they ended up the last time you pulled your LLM generate-me-a-kitchen lever. And it may not be where you want them.

Don’t like the layout? Let’s reroll! Back to the generative kitchen agent for a new one! ($$$)

The big labs will gladly let you reroll until you’re happy. But software - and kitchens - should not be generated in a casino.

A finished software product - like a working kitchen - is a fractal collection of tiny details. Keeping your finished software from falling apart under its own weight means upholding as many of those details as possible.

Like a good kitchen, a few differences are all that stand between software that works and software that’s hell. In software, the probability that an agent will get 100% of the details right is very, very small.

Details matter.


If it is fast enough and cheap enough, people would happily reroll specific subsets of decisions until satisfied, and then lock that down. And specify in more detail the corner cases that it doesn't get just how you want them.

People metaphorically do that all the time when designing rooms, in the form of endless browsing of magazines or Tik Tok or similar to find something they like instead of starting from first principles and designing exactly what they want, because usually they don't know exactly what they want.

A lot of the time we'd be happier with a spec at the end of the process than at the beginning. A spec that nails down the current understanding of what is intentional vs. what is an accident we haven't yet addressed would be valuable. Locking it all down at the start, on the other hand, is often impossible and/or inadvisable.


Agreed; often you don’t know quite what you want until you’ve seen it.

Spec is an overloaded term in software :) because there are design specs (the plan, alternatives considered etc) and engineering style specs (imagine creating a document with enough detail that someone overseas could write your documentation from it while you’re building it)

Those need distinct names or we are all at risk of talking past each other :)


Truly the Beats of our generation. Buy headphones from audio specialists, not Apple.

Apple knows its cookies when it comes to sound. I can say that as someone who uses proper HiFi systems and has played in orchestras.

Apple's R&D budget for a single product is larger than the entire audiophile industry, and they also actually know how to manufacture things.

Apple headphones have better audio fidelity than you are giving them credit for. I have several different pairs of high-end studio headphones and expensive amps to drive them. The Apple Max, which I also own, frankly provides a cleaner reference than some of the classics. They are perfectly usable as reference headphones.

The built-in Apple audio DSP, amps, etc have surprisingly good fidelity. Much higher quality than you would expect from consumer hardware. They even provide high-impedance headphone jacks on their recent computers.


If you have multiple apple devices, the seamless switching is the killer feature, for me anyway. My Bose headphones were absolutely abysmal at this, and it drove me mad.

My Pixel Buds from Google have been pretty good since they support Bluetooth multipoint. I'd love to find some good headphones like Beats or the AirPods, but Apple doesn't seem to support those standards; it only works with other Apple devices.

Honestly Apple's AirPods Pro and AirPods Max sound pretty great though, and I own several pairs of "audiophile" IEMs and headphones.

Will agree about AirPods Pro, but the Max are a notch below the competition (Bose, Sony, Sennheiser).

Apple hardware sound is the best on the market (MacBooks, iPads, etc.).

It is unwise to dismiss their prowess


Still pretty happy with my Homepods in Stereo

These earphones are not for people who appreciate good audio. They are for people who want more products in the Apple ecosystem and have lots of money to spend.

The Apple designer who designed them owns a music label and obviously appreciates good audio

It's telling that the measure of quality of life you use in this comment is entirely materialistic in nature. I also challenge the idea that the US provides 'access to better medical care', as it is pretty well documented that Americans spend more for lower-quality care compared to similar developed countries.

I believe this cultural divide is a big reason America won't make it back to the top - an insatiable desire for wealth and a lack of values-based principles. Ironically, US companies are the first to tout their 'values' in the workplace.


> I believe this cultural divide is a big reason America won't make it back to the top

What top are you referring to?

We're in a thread about a US company announcing its new $30B fundraise from a group of elite US growth investment funds arguing about whether this company will be able to overthrow the $4T US tech behemoth and suggesting that all the other US tech behemoths are actually stifling progress.


Seems like you’re in a thread about people’s quality of life and talking about giant mega corps’ big money. Has it been trickling down yet?


If you are in the bottom 30% of earners, the EU is better.

If you are in the top 30% of earners, the US is better.


And the top 1% get to have fun on a private island.


My take: Discord will slowly enter the arena with the likes of Google, MS, Meta, Apple, and Valve thanks to its massive user network. The amount of resources needed to sustain free offerings for so long makes it a nearly insurmountable moat for others to compete with.

Even if one could reproduce their tech (which I doubt, they are top-tier), individuals would drown under hosting costs. They've positioned themselves incredibly well.


Discord's technical moat is not that large. They've done well, but it's not the same as something like Google or Microsoft or Valve.


Having read their tech blogs about both the local video capture tech and their infrastructure optimizations, I disagree.


You think discord is the same order of magnitude as a Microsoft or Google? Both of whom own major operating systems, browsers, ad platforms, office and workspace suites, app stores, cloud providers, search engines, maps products, developer tools suites, robotics, research laboratories, AI products, meeting and chat infrastructure, email products and any a million other things?

The moat there is staggeringly large comparatively.


I feel like you're nitpicking my comparison and misunderstanding what I said. I did not say these companies were technically equal. If you have more questions after rereading my statements, feel free to drop them in this thread.

