I wish they would do this when you're boarding the plane. I get that there is essential information that everyone needs to know, but if you're a frequent flier you've probably heard the "put your larger carry-on in the overhead bin and your smaller bag underneath the seat in front of you" hundreds, if not thousands of times.
There's a large subpopulation of people flying who seem to have no idea how planes and airports work. Maybe they're sleep deprived or it's their first time flying, but these announcements are targeted at them.
I think it's more likely that people do know; they just don't care, and it helps them to put their backpack overhead, so they do it anyway. There is minimal to no enforcement.
I'm very much a we-live-in-a-society, follow the rules kind of guy, but if I checked a bag and only have my backpack in the cabin, you bet your ass I'm going to try and find a place for it in the overhead instead of cluttering up where I want to put my feet. The flight attendants can go scold the passenger with the oversized roller + backpack + 20 liter "purse" instead.
Yes, the logical rule would be 1 bag in the overhead per person. If they enforced carry-on sizes strictly and charged less for checked luggage the problem would probably go away.
It has nothing to do with price. I don't check luggage on domestic flights because of the enormous time lag for the airport to give me back my luggage. (There's also "United Breaks Guitars", but that's an independent problem)
If I could walk from the plane to the luggage area and my luggage was already there 90% of the time, I probably would check more things.
However, the US airports simply don't employ enough people to move the luggage around fast enough.
This is 100% correctable by employing more people. But some CEO needs another yacht, so they don't. So, I simply don't check luggage.
I remember one time I had to fly back from a business trip on the Wednesday before Thanksgiving. It made me realize something about business travelers: they skew toward situationally aware, conscientious types. The opposite of people flying the day before Thanksgiving.
I flew into the Orange County Airport before they tore it down and made it like the others. Felt very civilized. As I get older I find the hostile public spaces and infrastructure more and more annoying.
Unfortunately there's also a large subpopulation of people flying who wear noise-cancelling headphones and have their eyes glued to their phones, choosing to be disengaged from their immediate surroundings.
Especially flying with kids at naptime or bedtime. You finally get an extremely tired toddler to fall asleep on a plane, just in time for an announcement about in-flight entertainment. OMG.
If you're on a Mac, use the MLX backend versions, which are considerably faster than the GGML-based versions (including llama.cpp), and you don't need to fiddle with the context size. The models are `qwen3.6:35b-a3b-nvfp4`, `qwen3.6:35b-a3b-mxfp8`, and `qwen3.6:35b-a3b-mlx-bf16`.
I was comparing various models on an M5 Pro with 48GB RAM, MLX vs GGUF, and found that MLX models have a higher time to first token (sometimes by an order of magnitude), while tokens/sec and memory usage are about the same as GGUF.
Gemma 3 27B q4:
* MLX: 16.7 t/s, 1220ms ttft
* GGUF: 16.4 t/s, 760ms ttft
Gemma 4 31B q8:
* MLX: 8.3 t/s, 25000ms ttft
* GGUF: 8.4 t/s, 1140ms ttft
Gemma 4 A4B q8:
* MLX: 52 t/s, 1790ms ttft
* GGUF: 51 t/s, 380ms ttft
All comparisons done in LM Studio, all versions of everything are the latest.
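To make the gap concrete, here is a small sketch computing the MLX-to-GGUF ttft ratios from the numbers above (the figures are from my runs; the ratio arithmetic is just illustration):

```python
# ttft (time to first token, in ms) from the runs above: (MLX, GGUF)
ttft = {
    "Gemma 3 27B q4": (1220, 760),
    "Gemma 4 31B q8": (25000, 1140),
    "Gemma 4 A4B q8": (1790, 380),
}

for model, (mlx, gguf) in ttft.items():
    # How many times slower MLX is to produce the first token
    print(f"{model}: MLX ttft is {mlx / gguf:.1f}x GGUF's")
```

The 31B q8 case is the "order of magnitude" one: roughly 22x the GGUF ttft, while the other two are in the 1.6x-4.7x range.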
The 35b-a3b-coding-nvfp4 model has the recommended hyperparameters set for coding, not chatting. If you want to use it to chat, you can pull the `35b-a3b-nvfp4` model (it doesn't need to re-download the weights, so it will pull quickly), which has the presence penalty turned on to stop it from thinking so much. You can also try `/set nothink` in the CLI, which will turn off thinking entirely.
5 years is a normal-ish depreciation time frame. I know they are gaming GPUs, but the RTX 3090 came out ~4.5 years before the RTX 5090. The 5090 has double the performance and 1/3 more memory. The 3090 is still a useful card even after 5 years.
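Checking the arithmetic with a quick sketch (launch dates are the publicly announced availability dates; the memory figures are the stock 24 GB and 32 GB configurations):

```python
from datetime import date

# Public launch dates: RTX 3090 (Sept 24, 2020), RTX 5090 (Jan 30, 2025)
rtx_3090_release = date(2020, 9, 24)
rtx_5090_release = date(2025, 1, 30)

years_between = (rtx_5090_release - rtx_3090_release).days / 365.25
print(f"{years_between:.1f} years between releases")  # ≈ 4.4 years

# VRAM: 3090 has 24 GB, 5090 has 32 GB
memory_gain = 32 / 24 - 1
print(f"{memory_gain:.0%} more memory")  # 33% more
```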
The instruct models are available on Ollama (e.g. `ollama run ministral-3:8b`), but the reasoning models are still a WIP. I was trying to get them to work last night; single-turn works, but multi-turn is still flaky.