Hacker News | mikehall314's comments

Yeah, the condensation issue was a problem for me too. I ended up having to take them in for repair to Apple several times.

The noise cancellation went flaky, the auto-switching went flaky, and they would crash [1] if I lifted one cup from the side of my head. Apple ended up replacing them completely, but then the new pair eventually developed all the same faults.

I hope some of these are addressed in the new models, because despite all this I really enjoy them as headphones.


[1] Playing audio would stop, the noise cancellation would shut off, they would disconnect from my phone, seemingly reboot, then reconnect again.


Ah mine do this. It’s suuuper annoying. I assumed it was because I use them connected to my work Linux workstation.


You're correct but as Uvix has said, BBC Enterprises made film copies for overseas sales before the original tapes were erased.

The earliest episode to survive on its original videotape is Ambassadors of Death episode 1 from 1970. None of the original 60s tapes still survive, though I believe there is at least one tape that we know used to have Doctor Who on it but which now has another programme.

The earliest episode to survive in its original medium is possibly The Dalek Invasion of Earth episode 5 (The Waking Ally). That's because, while this was shot on electronic studio cameras as usual, there were no videotape machines available to record it.

Instead the output of those cameras was telerecorded straight to 35mm film. AIUI the negative of that telerecording still exists.


> there is at least one tape that we know used to have Doctor Who on it but which now has another programme.

Recording over another recording does not completely erase the other. I wonder if it could be recovered.


It has been suggested numerous times, but the BBC didn’t just record over the top - the tapes were erased with a degausser before reuse.


Someone at the BBC with a degausser yelling "Exterminate! Exterminate!"


Oh well.


I worked in a broadcast company archive (doing database work). Tapes were often reused. Fragments of previous recordings -- sometimes just a few frames, occasionally many minutes -- may remain at the beginning or end of the tape. AFAIK tapes were never completely erased before being recorded over.

I was involved in a digitisation project; the scanning companies were instructed to process the whole tape in case there were fragments of older programs at the end. A 30-minute tape may have 15 minutes of program, then a period of blank/black, then the remains of an older program for several minutes after that.


It’s easy enough to say “put the zip first because that will tell you city, state and country in one input”.

What happens to customers not in the United States? They have no zip to enter. Or if they have a postal code of some stripe, it has a different format.

What about folks who are in Turkmenistan, that you’re grumpy about having to scroll past? How are they signing up?


This already works on most shop systems in Germany. The OP just has to learn that zip codes aren't international; you can't know everything. Why this automation isn't implemented when you select USA as the country, I don't know, but when you select Germany it works.


I think they can just skip the zip code entry and enter everything else as usual.


A lot of the time, both "zipcode" and "state" are mandatory fields.

Zipcode is easy, the platform likely wants my postal code.

You have to be a little bit more creative with state. Sometimes "We don't have any states" is fun to see printed on the address label of a parcel; other times "Denmark" could be considered a state in the EU and can be the answer, but most times "N/A" is enough.


Every so often I get weirdly obsessed with Objective-J, which "has the same relationship to JavaScript as Objective-C has to C". It is (was?) an absolutely bonkers project. I think it has more or less died since 280 North was acquired.

https://www.cappuccino.dev/learn/objective-j.html


Didn't expect to see Cappuccino mentioned ever again. It was so wild: you could use the AppKit documentation for Cappuccino. Apps were so pretty and yet so fast.

I remember back in 2009 I really liked their coffee machine icon. I emailed the devs, who referred me to some design studio, and then to my surprise they replied and said that it's a Francis Francis X1. Now I'm looking at it in my home office.


Same. I remember when this first came up and I was like "this is so weirdly interesting."

Sad that they got acquired because it was just fascinating what they were doing, even if I was never going to use it.


Holy shit, it’s still being actively developed and maintained https://github.com/cappuccino/cappuccino


More amazingly the guy doing the most recent maintaining[1] is a medical Professor at Freiburg Uni.

[1]: https://github.com/daboe01


And wow, it's basically a web version of Cocoa! Check this out: https://ansb.uniklinik-freiburg.de/ThemeKitchenSinkA3/


Anyone know if this is or ever was the basis for Apple's iCloud web apps on iCloud.com (e.g. Keynote / Pages / Notes etc.)? Those apps are heroic attempts to replicate the desktop app experience in the browser. I'm curious what web framework is underlying it. Side note - if I could install 3rd party apps w/ similar UIs in my iCloud dashboard that would be interesting.


I think originally Apple was using SproutCore, which had similar aspirations to produce "desktop quality" web apps, and was one of the early frameworks to implement things like two-way data binding. This was back when iCloud was called MobileMe.

SproutCore 2.0 became Ember.js 1.0, but I don't know if Apple are still using it.


Oh wow I didn’t know that’s where Ember came from.


Cappuccino was not an Apple project, so I doubt that is what Apple used to develop those projects. That, and 280 North eventually got acquired by Motorola.


Weren't they acquired by Motorola?


Yes, after which they announced they were canning their "Atlas" project, which was meant to be an Interface Builder for the web. Motorola decided they wanted to keep the technology in house.

No idea if they ever did anything with it!


Best I can tell, it turned into Google Web Designer!

I'm on the outside, but best I can tell:

- You're thinking of a UI design tool called "Ninja"

- Google purchased Motorola Mobility and the Ninja project got cancelled

- Google launched Google Web Designer, that basically had an almost identical UI. As far as I can tell the internals are different, but probably shared some code or at least design work.


Possibly! The tool was definitely named Atlas when it was going to be an open source tool made by 280 North [1]. But it could have been renamed Ninja after Moto acquired it.

[1] https://arstechnica.com/gadgets/2009/03/atlas-a-visual-ide-f...


I assume the 98% compatibility on ES6 for V8 is because they don't have tail call optimisation?


Pretty much. ES6+ scores are from running compat-table's test suite (https://compat-table.github.io/compat-table/es6/), along with their weighting. If you click on an engine's name to go to a page about it, there's a report at the bottom with failing tests.


Maybe because with tail call optimization you wouldn't have a proper stack trace?


I never understood this complaint. You won’t get a “loop trace” when you convert your tail calls into an iterative algorithm. And your code will be less readable to boot.
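A minimal sketch of the conversion the parent describes (a hypothetical sum function of my own, not from the thread). Without proper tail calls the recursive form grows the stack; the loop form keeps a single frame, just as a tail-call-optimised engine would:

```javascript
// Tail-recursive form: in an engine without proper tail calls,
// each step adds a stack frame, so a large n can overflow the stack.
function sumTo(n, acc = 0) {
  if (n === 0) return acc;
  return sumTo(n - 1, acc + n); // call in tail position
}

// Iterative conversion: the accumulator becomes a loop variable.
// A stack trace taken inside the loop shows only one frame, which is
// exactly the "missing frames" effect TCO opponents worry about.
function sumToLoop(n) {
  let acc = 0;
  while (n > 0) {
    acc += n;
    n -= 1;
  }
  return acc;
}

console.log(sumTo(10));           // 55
console.log(sumToLoop(1000000));  // 500000500000 — fine, no stack growth
```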


I don't know how fast it would be if it was done iteratively. But Apple's implementation has negative implications for debuggability: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... https://webkit.org/blog/6240/ecmascript-6-proper-tail-calls-... .

The V8 team decided that it's not worth it, since proper stack traces (such as Error.stack) are essential for some libraries, such as Sentry (!). Removing stack trace info can break some code. Also, imagine missing info in an error stack trace in production code running on Node.js; that's not good. If you need TCO, you can compile that code to WASM: V8 does TCO in WASM.


That argument was vaguely plausible until WebKit/JavaScriptCore shipped PTC and literally no one batted an eye.

Bun users don’t care either.

At this point it is pure BS.


> Bun users don’t care either.

Most Bun users don't even know about this (unless they are bitten by this). That doesn't mean absolutely no one cares or would not care even though such complaints might be uncommon.


The difference is loops don't normally have traces but function calls do.


Supposedly, although the team at Apple were able to implement it. I think they had some oddly named technology like Chicken which created a shadow stack trace? Half remembered.


Yes, it's called ShadowChicken, and it does have negative implications for debuggability. To make debugging tolerable, JavaScriptCore added this (intentionally silly) mechanism: a shadow stack used by Web Inspector that can show a finite number of tail-deleted frames (they mention 128). It has some tradeoffs.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... .

https://webkit.org/blog/6240/ecmascript-6-proper-tail-calls-...



I agree 72 characters is plenty for most circumstances. However, as the blog points out, this is a byte limit not a character limit.

Some of the family emoji can be > 20 bytes. Some of the profession emoji can be > 17 bytes. If people are using emoji in their passwords, we could quite quickly run out of bytes.

I think it’s a limitation worth being aware of, even if “unsafe” is perhaps overstating it.
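For illustration, a Node.js sketch (the specific emoji and counts are my own, not from the thread) of how quickly ZWJ-sequence emoji eat into a 72-byte limit:

```javascript
// The family emoji is a ZWJ sequence: four 4-byte code points
// joined by three 3-byte zero-width joiners = 25 bytes of UTF-8.
const family = '\u{1F468}\u200D\u{1F469}\u200D\u{1F467}\u200D\u{1F466}';

console.log(Buffer.byteLength(family, 'utf8')); // 25
console.log(Buffer.byteLength('a', 'utf8'));    // 1

// Three such emoji are already 75 bytes, past bcrypt's 72-byte cut-off.
console.log(Buffer.byteLength(family.repeat(3), 'utf8')); // 75
```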


I still don't see how that's an issue. Yes, a password using a series of ridiculously complicated family emoji will be truncated, but the actual bytes still provide entropy; just because the data takes up few pixels when rendered doesn't mean it doesn't increase the search space.


If your password consists of three emojis that each take up 24 bytes, then yes, a 72-byte truncation dramatically reduces the search space for a brute force against these hypothetical 24-byte-emoji-only passwords.

There are far fewer possible combinations of any three emojis than there are any 72 ASCII characters.

This is x^3 vs y^72, where x is the total number of distinct emojis and y is the total number of distinct ASCII characters.

24 bytes of data is not 24 bytes of entropy if there are only a couple thousand different possible inputs to produce all of the possible 24 byte sequences produced by those inputs.

For simplicity: picture having only two possible input buttons. Each one produces 1000 bytes of random-looking data, but each one always produces the exact same 1000-byte sequence, respectively. You have a maximum password of 1 button press. The "password" may contain 1000 bytes, but you only have one bit of entropy, because the attacker doesn't need to correctly guess all 1000 bytes, they only need to correctly guess which of the two buttons you pressed.

Of course, in practice, not all emojis are 24 bytes, and I'd assume few people are using emoji-only passwords, but the distinction between bytes of data and bytes of entropy is worth clarifying, regardless.
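The comparison above can be made concrete with BigInt arithmetic (the ~3,600 distinct emoji and 95 printable ASCII characters are rough assumed figures, not from the thread):

```javascript
// Search space for three emoji drawn from ~3,600 symbols.
const emojiSpace = 3600n ** 3n;   // 46,656,000,000 ≈ 4.7e10

// Search space for 72 printable-ASCII characters.
const asciiSpace = 95n ** 72n;    // ≈ 2.5e142

// The emoji space is smaller by roughly 130 orders of magnitude,
// even though both passwords occupy about 72 bytes on disk.
console.log(emojiSpace < asciiSpace); // true
```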


I would argue that a password containing emojis is unlikely to ever be cracked, because no attacker is going to test emojis unless they have some reason to believe you use them in your password.


Attackers don't come up with every entry on the wordlist they throw into hashcat themselves. The attacker's imagination has essentially zero correlation with the contents of their wordlist.


Okay. How many major wordlists include emojis?

Maybe...like...a dozen entries at most across all of them?


Rest assured, the world's intelligence agencies and cybercrime rings aren't just taking vanilla open source wordlists off github and hoping they get lucky.

You don't know what your adversary's wordlist contains, and assuming you do is a recipe for overconfidence.


Yes, "if your enemy is state sponsored attackers" you shouldn't do many things, like use bcrypt incorrectly, or really passwords almost at all. That's obviously not what I'm saying.


Okay, use emojis in every password, you win, you're right, emojis make your password hack-proof to everyone who isn't the NSA.


That is also not what I said either, but I admire your dedication to engaging in bad faith.


The hash is 24 bytes. Even without an input character limit, plenty of valid aliases for your 1000-character password exist within the 72-byte password space (though actually finding one is another matter).


You could always pre-hash the password with sha256 or something similar to guarantee you won't go over the 72 byte limit.


I don't understand why this isn't a mandatory first step in the bcrypt algorithm itself. Who thought that a 72 byte limit was a good idea?


Does anyone actually use emoji as a password?


I never actually considered it until I read the parent, and now I'm gonna try to start using it wherever it's supported; it's genius to use emoji for passwords as long as the platform supports them. Edit: Just to clarify, together with a password manager of course, otherwise I'd never have the patience for it.


yea, me (pls dont crack)


You could ask for your password to be removed from the list: https://github.com/danielmiessler/SecLists/pull/155


Correct. The Beatles appearance on Top of the Pops survives only because a clip from that show was used in episode 1 of The Chase.

Ironically, The Chase often has rights clearance issues when it comes to home release because of this. Beatles music costs a fortune to clear, making releases untenable.

Doubly ironically, this is because the Beatles chose to mime to their studio recording for Top of the Pops. If they had played live, it would have been less of a problem.


FYI pretty much nobody played live on Top of the Pops


Nobody sang live, but they were on stage while the filming took place


I seem to remember looking into this once. Aren't large swathes of TOTP itself missing? Like, entire early decades?


But all it takes in that world is for a single browser vendor to decide - hey, we will even render broken XHTML, because we would rather show something than nothing - and you’re back to square one.

I know which I, as a user, would prefer. I want to use a browser which lets me see the website, not just a parse error. I don’t care if the code is correct.


On the one occasion I met Steve Furber, he told me about how, when they connected up the very first chip, he was surprised that it started running before he had even connected the power.

Turns out it was such a low-power design that just the voltage from the data lines was enough to run the chip.


I assume this is the reason they're characterising this as "PHP License V4 will have identical wording to BSD-3" rather than "We will switch license to BSD-3".

It amounts to the same thing, but the former framing means they're covered by the "or later".

