dbatten's comments | Hacker News

> it feels like Mac's UI is optimized around the assumption most users won't expand windows to fill the whole screen, but rather leave them half-sized somewhere in the middle

IMO, this has been their assumption for years, and it actually turned me off when I tried getting used to Mac circa 2006-2007. Coming from Windows at the time, I just couldn't get over a weird anxiety that my application window wasn't maximized, because it didn't look like it completely snapped into the screen corners.

Now, using 34-inch ultrawide monitors almost exclusively, I never maximize anything... it'd be unusable.


As a 38" ultrawide owner myself, I use VS Code or IntelliJ maximized most of the day, depending on the codebase I'm working in.

Browsers only ever get snapped to the left/right half of the screen for me too

Which is something macOS should really improve on, though; the UX there is pretty bad compared to Windows and Linux


I split a VS Code window and a browser, or a browser and a terminal window, on my 13" MacBook Air. I usually need additional context on the same screen.

macOS has built-in 2x2 window tiling, which works for this purpose for me. I don't ever find myself wanting more than 4 windows open on an ultrawide. Definitely not as powerful as something like xmonad, but useful for the majority of my use cases.

Windows also has this kind of tiling built-in. It even comes with default keyboard shortcuts.

So does Mac: https://support.apple.com/en-us/guide/mac-help/mchl9674d0b0/....

Obnoxiously, it's part of the recent trend of overloading the Globe/Fn key, so it's hard to do with third-party keyboards.


Can you not change the shortcut?

You can, I believe, but I often need to move between computers so I try not to mess with shortcuts too much (or go down keyboard layout rabbit holes, etc).

Saner defaults would be better.


Here to say Ubuntu's got it built in as well

While I don't maximize anything on a monitor that wide, I do appreciate Windows' snap-to-half/quarter functionality, and I wish Mac had the same ability natively.

> I wish Mac had the same ability natively

Hover over the green button in the top left of the window. I recently found out about that menu for moving a window between screens, which is also an option it has. (I also just found these options in the Window menu, if you prefer that. I don't; they take an extra level of hovering to get to.)


You can also long-click the button instead of hovering. Also, see the menu bar entries related to window management, which replicate these same functions but can be bound to keys in the system settings.

> Hover over the green button in the top left of the window.

Weirdly it still doesn't quite do what I want. It leaves a gap around the edge of the window for some reason.


Option-clicking the green button maximizes it similarly to Windows, rather than going fullscreen. I never used fullscreen just because of the slow animation it used, and now it makes even less sense on my new MacBook with the notch. It basically replaces the menu bar with a blank bar.

Damn. Never knew that. TIL

Just wait until you discover these keyboard shortcuts: press `fn + ^` (that's the Globe key + Control), then try `c`, `f`, and each of the four arrow keys.

[flagged]


Don't be a child

Vulgarity aside, I can sympathize. For years I've been told by designers that discoverability and intuitive interaction patterns are so important, yet every aspect of modern design focuses so much on minimizing "distractions" that features go undiscovered. We get forced into suboptimal workflows and usage patterns because everything gets overfitted to the lowest common denominator.

This is the biggest reason I love Linux. I can choose my own desktop, or even forsake the desktop entirely for a simpler window manager, without changing operating systems. Some are hyper-focused on a tailored experience (GNOME), while others let you configure to your heart's content (KDE).

There are sacrifices to be made, of course, but not having to live under the oppression of Apple's benevolent-dictator designers is absolutely worth it for me.


This, exactly.

Every macOS app has a menu item explicitly made for this exact thing. It's often the third item in the menu:

    File    Edit   View

But they refuse to put these viewing options under the View menu item. Why? Why would you not put these really great viewing options under View?

It's under the Window menu?

I'm pretty sure it does? I haven't installed anything, and it can do halves and some other layouts through the Window menu, plus snapping, IIRC

I can't speak to the quarters, but you absolutely can snap windows to the left and right halves in macOS.

I do quarters all the time. It used to be via third-party apps; I think it's native now

You can hold the Option key while dragging a window in order to set it in mosaic mode (you may need to activate the mode under System Settings > Desktop & Dock > Windows)

I'm pretty sure Tahoe added that behavior natively. I personally use Magnet on Sequoia, however, so I am not 100% certain.

This was added as built-in functionality in Sequoia, not Tahoe. Personally I still use Magnet, which has worked well for over a decade and has a few more options.

I constantly stretch windows to maximum height.

I maximize windows of graphics and video editors.


I just installed Kubuntu last week so I could get the additional shift-drag targets to split my 34" ultrawide into 3 sections, or bump windows to the edges to fill a half.

Install i3wm, it will change your life.

Something I realized after spending a few months in sway (i3) and then niri is that I only care about a few windows (code editor, terminal, browser, apps I use moment to moment).

All the rest I'd prefer to just summon as-needed and then dismiss without navigating away from the windows I care about.

sway/niri want me to tile every window into some top-level spot.

Took me a while to admit it, but the usual Windows/macOS/DE "stacking" method is what I want + a few hotkeys to arrange the few windows I care about.


Yeah, I came to the same conclusion a few months back. Sadly I had to ditch KDE for GNOME due to an issue[0] specific to NixOS but after going through the gauntlet of tiling window managers and PaperWM/Niri over the years I've also settled on a traditional DE.

[0]: https://github.com/NixOS/nixpkgs/issues/126590


I'm surprised to hear that niri didn't work for you, I feel like it's a really good middle ground between tiling and floating window managers. It handles a lot of window resizing and arranging for me, without being too rigid. Windows can have any width they need without having to evenly divide my monitor.

In sway, put the lower-priority windows in another workspace, or the scratchpad, or in tabs/stacks. You can also bind keys to focus specific programs by their app_id/class, so even if they're on another workspace or monitor it'll jump right there.

It sounds like the scratchpad may be especially close to what you want.
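
In config terms it could look something like this (the app_id values are just examples, not anything from your setup; check your own with `swaymsg -t get_tree`):

    # Send a helper app straight to the scratchpad when it launches
    for_window [app_id="org.keepassxc.KeePassXC"] move scratchpad

    # Toggle the scratchpad window over whatever you're doing
    bindsym $mod+minus scratchpad show

    # Jump straight to the browser, wherever it lives
    bindsym $mod+b [app_id="firefox"] focus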


Maybe awesome-wm would be better for you then.

I’m currently using Krohnkite [1] to get dynamic tiling in KDE, and Klassy [2] to get i3wm-like pixel borders instead of full window decorations.

[1]: https://github.com/esjeon/krohnkite
[2]: https://github.com/paulmcauley/klassy


An ultrawide without a virtual screen manager is a missed opportunity. Maximizing windows is still very useful when you have virtual screen areas on a large screen.

Brother, I have a 42-inch 16:9 and I always maximize everything.

At least based on some of the things he's written in some of the anthologies, it seems like a lot of that disillusionment was not just because of age, but rather because of his battles with the publishers and whatnot that were pushing him to make changes he felt would compromise the integrity of the strip. A lot of the comics include subtle jabs about corporate greed, artistic integrity, etc. because he was actively fighting with the corporations that distributed his strip over such matters...

Still, 100% agreed that he stopped at the right time, both because of the creeping cynicism and because he was simply running out of fresh ideas...


Genuinely interested in being educated here: If Gurobi's integer programming solver didn't find a solution better than 218, is that a guarantee that there exists no solution better than 218? Is it equivalent to a mathematical proof?

(Let's assume, for the sake of argument, that there are no bugs in Gurobi's solver and no bugs in the author's implementation of the problem for Gurobi to solve.)

I guess I'm basically asking whether it's possible that Gurobi got trapped in a local maximum, or whether this can be considered a definitive universal solution.


Author here!

Yes, if Gurobi and my code ran as intended and I did not mess up any thinking while simplifying my chess model, then what I did is a proof that the maximum number of legal moves available in a chess position reachable by a sequence of legal moves from the starting position is 218 (matching upper and lower bounds). Basically, Gurobi proved, using bounds, that the entire search space is "at most as good".


In addition to the value of the best integer solution found so far, Gurobi also provides a bound on the value of the best possible solution, computed using the linear relaxation of the problem, cutting planes and other techniques. So, assuming there are no bugs in the solver, this is truly the optimal solution.
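
For a sense of what that looks like in gurobipy, here's a minimal sketch with a toy knapsack standing in for the (much larger) chess model; the model itself is purely illustrative, not the author's formulation:

    import gurobipy as gp
    from gurobipy import GRB

    # Toy knapsack, standing in for the real model.
    m = gp.Model("toy")
    x = m.addVars(3, vtype=GRB.BINARY, name="x")
    m.setObjective(5 * x[0] + 4 * x[1] + 3 * x[2], GRB.MAXIMIZE)
    m.addConstr(2 * x[0] + 3 * x[1] + x[2] <= 4, "capacity")
    m.optimize()

    if m.Status == GRB.OPTIMAL:
        # ObjVal is the best solution found; ObjBound is the proven bound
        # on any possible solution. "Solved to optimality" means the two
        # coincide, i.e. MIPGap == 0.
        print(m.ObjVal, m.ObjBound, m.MIPGap)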


Unless I missed something, though, the highest bound the author reported for the relaxation was 271 2/3 moves, which is obviously significantly higher than 218...


I think that was an intermediate model. The author updated it, then Gurobi solved the new model to optimality (i.e., the bound became equal to the value of the best solution found).

> With this improved model, I tried again and after ~23 000 seconds, Gurobi solved it to optimality!


> Gurobi solved the new model to optimality (i.e., the bound became equal to the value of the best solution found).

Ah, I was not aware that that's what this language indicated. Thanks for helping me understand more!

I've used Gurobi (and other solvers) in the past, but always in situations where we just needed to find a solution that was way better than what we were going to find with, say, a heuristic... I've never needed to find a provably optimal solution...


The article was interesting, but this bit felt a bit like "and then a miracle occurred".


The fractional stuff was poorly explained; I didn't understand it at all.


I'm not sure about Gurobi or how the author used it in this case. But in general, yes: these combinatorial solvers construct proof trees showing that, no matter how you assign the variables, you can't find a better solution. In simpler cases you can literally inspect the proof tree and check how it's reached a contradiction. I imagine the proof tree from this article would be an obscenely large object, but in principle you could inspect it here too.


In theory, it's not proof. In practice, it is.


Well, if the solver isn't wrong and there were no bugs in the implementation, then yes, the approach is rigorous. Allowing strictly more "powerful" configurations yet still proving that the maximum is X, then achieving X through a construction, is standard math


Any chance she's going to write anything about this when she's done? I'd love to read somebody's account of this experience. Very cool idea.


She's been encouraged to by many people who've been moved by her letters, including myself. If you want to message me, I can share more when she does.


Also interested; how can I message you?

By the way, I just wanted to say that I love big chunks of this whole comments section; so much pure positivity and human beauty in one place is soothing, uplifting, and inspiring to me. Thanks to everyone here sharing how they're consciously recognizing the good they received and acting on it in various amazing ways.


If you find this interesting, definitely consider checking out contraction hierarchies: one of the early algorithms used by mapping software to calculate the fastest route between any pair of places. It's exact, and it's orders of magnitude faster than plain graph searches like Dijkstra's.

This webpage has a very intuitive graphical explanation of how it works: https://www.mjt.me.uk/posts/contraction-hierarchies/

(I had the joy of implementing this in Python with OSM data a few years ago. Planning a three hour car trip with software I wrote and having it come back with the same path recommended by Google Maps in a matter of milliseconds was a very rewarding feeling.)
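
If it helps make the idea concrete, here's a toy Python sketch of the two phases. It simplifies aggressively: the contraction order is taken as given instead of computed heuristically, the graph is undirected, and it adds a shortcut for every pair of higher-ranked neighbors instead of doing a proper witness search, so it's correct but nowhere near production speed:

    import heapq
    from collections import defaultdict

    def build_ch(edges, order):
        # edges: (u, v, weight) triples of an undirected graph.
        # order: all nodes, from contracted-first (lowest rank) to last.
        rank = {v: i for i, v in enumerate(order)}
        adj = defaultdict(dict)
        for u, v, w in edges:
            adj[u][v] = min(w, adj[u].get(v, float("inf")))
            adj[v][u] = adj[u][v]
        for v in order:  # "contract" nodes from the bottom up
            higher = [u for u in adj[v] if rank[u] > rank[v]]
            for i, u in enumerate(higher):
                for x in higher[i + 1:]:
                    via = adj[v][u] + adj[v][x]
                    if via < adj[u].get(x, float("inf")):
                        adj[u][x] = adj[x][u] = via  # shortcut through v
        return adj, rank

    def ch_query(adj, rank, s, t):
        # Bidirectional Dijkstra that only relaxes edges going "up" in rank.
        def upward(src):
            dist, pq = {src: 0}, [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue
                for v, w in adj[u].items():
                    if rank[v] > rank[u] and d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            return dist
        df, db = upward(s), upward(t)
        # The shortest path shows up at some node both searches reach.
        return min((df[v] + db[v] for v in df.keys() & db.keys()),
                   default=float("inf"))

    # edges = [("a", "b", 2), ("b", "c", 1), ("a", "c", 5)]
    # adj, rank = build_ch(edges, ["b", "a", "c"])
    # ch_query(adj, rank, "a", "c")  # -> 3 (a -> b -> c, via the shortcut)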


To make matters worse, the title and featured image make it very clear that they have confused "macaroons" and "macarons."

Macaroon: https://en.wikipedia.org/wiki/Macaroon
Macaron: https://en.wikipedia.org/wiki/Macaron


To be fair, this is a mistake that started with the Google paper, and everyone else just copies the mistake.

The paper calls them Macaroons as a play on (browser) Cookies with layers (of caveats), so clearly they meant macarons as well, since a macaroon doesn't have layers. Or at least, that's always been my interpretation of the name. It's possible it was just an arbitrary play on HMAC cookies and not the layers?


I had that thought, although according to another comment the definitions have crossover. Probably because people so frequently confuse the two, but here we are.


This hurts my braincell.

On "macaroon":

> The name "macaroon" is borrowed from French macaron

On "macaron":

> A macaron, or French macaroon, is a ...


found this helpful video posted on a different subthread: https://www.youtube.com/watch?v=nzcHeO43kgE&t=622s


THANK YOU

came here to say this


Used to use some of this data for analyzing cellular service coverage. The info on cellular data (e.g., LTE) availability is provided by the cell providers and is usually... very optimistic.

I'd assume the terrestrial fiber/cable/DSL data is more accurate, though?


Not really; up until the past year or so, the cable ISPs would count an area as serviced if they could get one house per zip code. I think this is stricter now, but these maps will help keep them accountable if we update and flag incorrect listings.


Google still does this! They seem to use some sort of generative neural net to produce fake names, descriptions, and even reviews for the trap locations.

You can report the location to Google as "bad data" and they'll delete it, but then a totally different fake location will re-appear in its place a few days later.


I once worked with a contractor who did not have a physical location for their business. They simply worked out of a minivan, which they drove to wherever they were needed. But they pretended, to Google, Yelp, and probably others, to have a business location.


It's worth pointing out that the conventional bombing raid on Tokyo on March 9, 1945, was deadlier than either Hiroshima or Nagasaki, depending on whose counts you use...


The objection I've usually heard is less about farm land being used for pasture instead of growing crops, and more about the amount of crops that have to be grown to feed the cattle... Growing corn to feed cows to feed humans is massively less efficient than growing corn to feed humans, or at least that's how the story goes.

Is there not an efficiency problem there that we could improve on?

(Not an expert in this area, genuinely curious.)


Not an expert either, but from forums I think it's a bit of both: people treating pasture country as if it would be good for cropping in theoretical climate discussions, and the more correct point about growing corn/grain/hay etc. to feed cattle to feed humans.

Another factor is that cattle feed is often poor-quality human food, i.e. the reject stuff. Wheat that was harvested a bit early to avoid storms and is no good for flour is great for cattle.

Also, on the main topic of this thread, I wonder if it's good for food security that we grow extra crops to feed livestock. If there is a food supply shortage, cattle can go to slaughter and there is plenty of extra corn and the like around for humans. We eat less meat for a few years while we fix the supply chains, that sort of thing.

Given the supply shortages we saw from COVID, I'd hope we never become too efficient in the food supply chain. The consequences of something lowering food production are far worse than limited computer chips. It's not a system we want over-optimised for the good times.


Sure, but go back to pasture-based systems. Cows are a fantastic way to turn grass into human-edible nutrients. That will probably reduce the amount of meat on the market, but nowhere near the drastic levels some people talk about.

Not all land is fungible; we need to use the best tools for the job, and that will be a mix of a bit of everything, including animal meat.

