
The equation for escape velocity is: v = √(2GM/R)

R = distance (radius, really)
M = mass of the body
G = universal gravitation constant

We can rearrange this equation to solve for the distance from which you can escape:

r = 2GM/v^2

The answer is largely: it depends on how fast you can go. At the speed of light you can escape from further away, since the pull increases the closer you are to the "event horizon".

I'm in a car right now (as a passenger, ofc) doing this from my phone, so I'm not in a position to put together a model, but you should be able to plug in some numbers and estimate a result. Just make sure you convert to SI units so you don't accidentally end up 3 orders of magnitude off.
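For anyone who does want to plug in numbers, here's a minimal sketch of that calculation in Python (the constants are just the standard textbook SI values):

    # Back-of-the-envelope: from how far away can you escape a body of mass M
    # if your top speed is v?  Rearranged escape-velocity equation: r = 2GM / v^2
    G = 6.674e-11        # universal gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    C = 2.998e8          # speed of light, m/s

    def escape_radius(mass_kg, speed_m_s):
        """Distance (in metres) from which you can still escape at the given speed."""
        return 2 * G * mass_kg / speed_m_s ** 2

    # At the speed of light this is the Schwarzschild radius of an Earth-mass body:
    print(escape_radius(M_EARTH, C))        # ~8.9e-3 m, i.e. roughly 9 mm
    # At Earth's familiar surface escape velocity of ~11.2 km/s:
    print(escape_radius(M_EARTH, 11_200))   # ~6.4e6 m, i.e. roughly Earth's radius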

A black hole with the mass of the earth would have a radius of about 9mm, so things less massive than a planet start to get very small, very fast, and you end up fighting quantum effects, which are less intuitive.


</sarcasm></sarcasm>

*Felt like I needed to resolve that or we may never stop being sarcastic*


You have untreated ADHD; that's why your life keeps falling apart when you make impulsive decisions.

They missed it when you were a child and none of the specialists thought to diagnose it as an adult.


This has been really frustrating me lately.

Sometimes I just want to quickly merge a small change (maybe a small config change, 1-2 lines) and then pull on master, branch off and start working again.

I'm regularly having to wait several minutes for the merge, and while I know there are ways around this locally, it just annoys me that something so simple takes so long.
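For what it's worth, one of those local workarounds can be sketched roughly like this in Python (the branch names are hypothetical, and it assumes the PR lands as a real merge commit rather than a squash, so the feature tip stays an ancestor of master):

    # After clicking "merge" in the UI, poll origin/master until it contains the
    # feature branch tip, then pull and start a new branch from the new head.
    import subprocess
    import time

    def merged_into_master(sha: str) -> bool:
        subprocess.run(["git", "fetch", "origin", "master"], check=True)
        # exit code 0 means `sha` is an ancestor of origin/master
        return subprocess.run(
            ["git", "merge-base", "--is-ancestor", sha, "origin/master"]
        ).returncode == 0

    feature_tip = subprocess.run(
        ["git", "rev-parse", "my-feature-branch"],   # hypothetical branch name
        capture_output=True, text=True, check=True
    ).stdout.strip()

    while not merged_into_master(feature_tip):
        time.sleep(10)   # poll every 10 seconds

    subprocess.run(["git", "checkout", "master"], check=True)
    subprocess.run(["git", "pull", "--ff-only"], check=True)
    subprocess.run(["git", "checkout", "-b", "my-next-branch"], check=True)   # hypothetical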

This feels less like an apology and more like Atlassian saying "Us changing a platform that has worked a certain way for years, and that change breaking your workflow, is YOUR PROBLEM. It's you looking at this wrong; merges have been asynchronous all along." despite our many combined millennia of experience being entirely to the contrary.


I'm really sorry if it comes across that way. We definitely don't think you did anything wrong. I also get that when things work a certain way for a long time, it's totally reasonable to expect them to keep working that way.

If I could distill the message re: slower merges down to 2 essential points, they would be these: (1) we underestimated how impactful this would be for some customers, and that's on us; (2) some, in fact I think many, users believed they needed to wait for the merge to complete, and we wanted to clear up that misunderstanding.

For your use case, where you merge a small change and then want to pull, create a new branch, and start working again right away, I understand this directly affects you. I am surprised merges would be super slow for you if you're just merging small changes, though; average merge times are still just a few seconds. Have you opened a support case?

For many other users, I do think the UX changes we're rolling out will make a difference. There are a lot of users who would click merge, maybe 5-10 seconds would pass, and they would assume something must be wrong so they'd refresh the page and then it would look like nothing happened. Today we pushed out an update so that if you refresh the page and the merge is still in progress, you'll actually be able to see that.

FWIW we do have some longer-term work in progress that will make merges (along with basically all file system I/O) a lot faster; but it's a ways off and represents yet another significant architectural project (though much less disruptive than this one!). I didn't mention it in the article because it will take a while.


I know we don't have to wait for the merge, but having an operation that is synchronous and takes <1s on my computer be asynchronous and take 30 seconds to 5 minutes on Bitbucket is quite infuriating. Often I want to merge a PR, then switch to my terminal, pull master, and start branching from the new head. Now I cannot do this. And when I want to merge a bunch of PRs it gets even more confusing.

I suggest you up the capacity on the queue so it feels synchronous and snappy like GitHub, as this is now a pain point.


We know that CI build efficacy drops with duration. People wander off to do other things, and either come back to it later than anticipated (all estimates are off by 2x, including estimating when I will check the build again), or forget entirely. Merge-build is one action, and making merges async increases the perceived build time. Perception of sequential delays is a fundamental UX concept. They are seen as a single, longer delay.

That's not the only reason merges should be synchronous, especially in the Atlassian world.

At scale, we have problems with build triggers from commits. Sometimes you have to fire manually.

At scale, tracking deployment failure is painful. The PR you merge may be in a different module than the deployment plan. So I need to merge a PR, then watch the dominoes fall. But the bigger issue is that Atlassian never finished getting deployments up to feature parity with builds. I have several dashboards that report build health, but deployment health is hard to track. It involves more vigilance and frankly it’s exhausting. Exhausting things get dropped every time people feel tired. To work around this, first you do alerts to team chat, then there are too many alerts so they go to their own channel, then people forget to check the channel, and we have a smaller version of the same problem and no solution.

At scale, I can only control whether MY code has enough tests to detect regressions prior to deployment. Breaking preprod impresses no one, even if automation should have caught it. That means a workflow of merge-build-test prior to moving on to the next task.


I order meat from The Great British Meat Company and my meat arrives 2 days later, still refrigerated, in a hard polystyrene case packed in with still-frozen ice packs. It costs a little more to deliver, but it arrives upright and in perfect condition at 3 degrees Celsius (~38°F).

It's ridiculous that something of regular shape, in a regular box, is so difficult not to damage in transit, especially from a company like Lego, who exist purely on the sentiment parents and children have towards their brand.


Those meat packages end up getting very different handling, because of the dry ice that's used to keep them cold. It's necessary to keep a careful track of the dry ice and know how much of it you're loading into a container, in order to avoid asphyxiating your colleagues. Most packages are just tossed along as quickly as possible, because there's a tight schedule to keep. But you have to temporarily slow way down when a package containing dry ice comes down the belt.

It's a little bit like that photographer's trick of packing a starter pistol with your camera equipment when flying, in order to ensure that the baggage crew is extra careful with your stuff.

Source: I used to schlep packages at a UPS hub.


Just to clarify: the meat packages I ordered didn't use any dry ice. They provide two (standard) ice packs, one which is placed beneath the meat and one which is placed above the meat.

Because the polystyrene is so thick and they send so much ice, the ice was still fully frozen when I received it 42 hours after my order.

We've actually kept the polystyrene box and ice packs to reuse for picnic days and things like that.


I believe it's BeyondCorp: https://cloud.google.com/beyondcorp


BeyondCorp is their term for secure communication between users and services, BeyondProd is for communication between services: https://cloud.google.com/security/beyondprod


Thanks for the link, didn't know that existed.

Reminds me of when Google security engineers...ahem...reacted...to the Snowden leak revealing that the NSA was spying on their internal, decrypted traffic. [1]

It's insane to me that the US spied on an American company's internal traffic, got busted, and the only viable response was "well, I guess we have to make HTTPS mandatory in the protocol now".

Not that it was the wrong response. Just that it wasn't even on the table to say "hey, NSA, wtf..."

Unless you see the NSA as some sort of weird, legally protected black hat blue team.

[1] https://arstechnica.com/information-technology/2013/11/googl...


I'm not defending this thought process, and I disagree with it, but I can at least understand.

You basically have two levels of recourse against the state for redress of grievances: elections and litigation. You absolutely could have, and I believe every major corporation should have, sued the NSA and the larger federal government post-Snowden. I'm not a lawyer so I don't know the specifics of what that would have looked like, but it seemed like a pretty egregious violation of a whole list of rights. But litigation, especially against an intelligence agency, seems a little quixotic in this context.

Elections don't seem like they'd have much impact against the NSA. They're not elected officials, and only the top leadership is appointed. Most of them are career bureaucrats (I don't necessarily mean that in a negative way) and scientists/mathematicians. You could replace a third of the Senate and the entire House every 24 months and you're not going to get sweeping changes throughout the NSA. From a national security, somewhat hawkish perspective, that might be a feature rather than a bug? But from a civil liberties/freedom perspective it's definitely a concern.


This is one of those comments I so rarely see, where what I already know is put into words I've never been able to find.

Thank you for this, I'll be updating our documentation to include these concepts first thing tomorrow.


At least in terms of movies, this seems like the obvious route:

It used to be that game engines had real-time processing requirements (60/30/15 frames per second) and were constrained by the hardware on which they ran, but movies looked more photo-realistic and could take hours or days to produce a single frame.

As silicon and algorithms get faster and better at approximating reality, the outputs of the two approach one another. Given the amount of effort required to build this software, it makes sense to save the years of effort, cost, and risk involved in what essentially amounts to building a bespoke, internal graphics / physics / particle simulation engine.


Home photolithography and chemical etching setups aren't common, but have been done by several people. We wouldn't be able to jump straight to 14nm, but we would probably be able to get to the 500-300nm range relatively quickly (a year or two, maybe, if starting from scratch) and shrink down from there.

Devices would be much bigger and less efficient, but we would be able to run code and pump out 8086 processors within 6 months.


That's just one layer of the stack though. Future archaeologists will also need to create mock npm registries and maven repositories, and set up docker and k8s so they can deploy a complex set of microservices to look up our birthdays.


...all the code to which should be right in the Github Vault, right?

Idk, the hardware part seems much more difficult to me.


Thanks for the laugh! I needed that today :)


The typical way to produce liquid oxygen (LOX) at small scale requires liquid nitrogen: you use the liquid nitrogen to cool oxygen gas as it passes through some transport medium, like a copper pipe, and collect the condensed liquid.

Purchasing LOX isn't too difficult, there are many vendors willing to supply large amounts of it, and a few vendors willing to supply small (experimental) amounts of it.

Storing LOX is the real challenge: you need to keep it at ~-310°F (~-190°C), which requires a well-insulated cryogenic storage chamber. Typical industrial-sized storage tanks have a few metres of insulation to keep the heat out, and typical small-scale tanks have around 1-2 feet of insulation.

You're probably looking at a viable minimum storage container size of between 500-1,000 litres (any smaller and your insulation layer will be so thin that you'll lose 10-20% of your stored oxygen to gas per day), with the typical minimal size being in the 2,500 litre range (which normally loses 1-2% of stored oxygen to gas per day).
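A quick back-of-the-envelope sketch (Python, treating boil-off as a constant daily fraction, which is a simplification) of what those loss rates mean in practice:

    # How much liquid is left after `days` of boil-off at a constant daily loss rate?
    def litres_remaining(initial_litres, daily_loss_fraction, days):
        return initial_litres * (1 - daily_loss_fraction) ** days

    # A 500 L container losing 15% per day is down to under half in 5 days:
    print(litres_remaining(500, 0.15, 5))     # ~222 L
    # A 2,500 L tank losing 1.5% per day still holds ~93% after those 5 days:
    print(litres_remaining(2500, 0.015, 5))   # ~2,318 L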

You can buy very small containers (2l, 5l, 25l, etc, these are for nitrogen but you can get them for oxygen too): https://uk.vwr.com/store/product/2104367/liquid-nitrogen-dew... but these lose the liquid very quickly and you would likely want them delivered the same day you need them, resulting in a just-in-time delivery failure point.

