This describes my team to a T ... are we working at the same place?!?
We actually talk more now which helps, but it is still hard to keep up when everyone is barreling ahead doing their own thing. In addition to more talking, there needs to be a semblance of strategy that everyone is aligned on and understands their role in.
A high-agency, high-functioning team has always been a superpower, but mastering this capability is what will make or break organizations that are trying to run lean with AI. It's a "people problem" at its core, and no amount of technology can fix that.
we love to say things like these, but... most security issues are in fact BYPASSABLE - virtualization, firewalls, autorollbacks, ro-filesystems and so on are just some of the tools we have on our belts
decades of WordPress have taught us that insecure apps can 100% be securely deployed
it's a bit of an art, most recently educated devops/sre ppl suck at it, but it's doable
...aeons ago in a former life we ran production apps that got hacked weekly, and nobody batted an eye at it: backup servers recreated from secure ro-images were spun up with the last-clean-app version, occasionally we had fun disassembling whatever reverse shells and other malware got beached on our systems (but couldn't "swim" bc everything we ran was "too exotic" for them to figure out the next steps of a proper attack), and development and business continued as usual with zero interruptions etc
If you go against every principle (defense in depth, security through obscurity), maybe you should ask yourself "am I willing to be on the record saying this when my company gets hacked?"
There can be multiple reasons a system crumbles; do you want to be behind one of them... intentionally?
100%. I'm willing to prioritize what matters at the right time. if "inner-system security" is not the right priority, and security can be attained better at the "outer-system level", we should have the balls to say it. fuckitol
Imagine if your doctor said "we don't really need to do this if some other guy or nurse does a right job, so fuck it".
In other critical professions you don't want to screw up, because when you lose your license you're legally unemployable. Maybe it's time to require a license to be a programmer. We used to have a strong culture but those days are gone and the stakes are higher. Putting people at risk because you think a VC can vibe-code an insecure app and then it's everybody else's responsibility to ship it securely?
you got everything I said wrong: I'm familiar with security and infrastructure best practices and I'm confident I/we can securely deploy almost any vibe-coded crap someone can throw at us - we understand security, we understand defense-in-depth, we understand the subtle trade-offs of why security by obscurity is usually a bad idea (and when it does help) etc.
sure, if the vibe-coded sloptopus does bank transfers and stuff, properly carving these pieces out of it might require actual engineering work before containerizing it - but if someone is willing to pay for it, it can be done
some "toy" example: take a crappy app that stores llm keys in config files that the llm agents themselves can edit - after isolating it, put an llm proxy in front of it and have those keys be short-lived proxy-keys with aggressive rate limits and monitoring etc etc
isolation, injecting proper monitoring into code of apps, putting proxies between app and apis, and layers between app and infra it runs on or touches etc
and these things now can be mostly cookbook-ified / automated 90% of the way too
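to make the toy example concrete, here's a rough sketch of the proxy-key idea (hypothetical names, just the OCaml stdlib plus the unix library; a real setup would live inside an actual HTTP proxy with logging and alerting behind it):

```ocaml
(* rough sketch: mint short-lived, rate-limited proxy keys so the real
   upstream llm key never appears in the app's editable config files *)

type proxy_key = {
  value : string;       (* the token the agent actually sees *)
  expires : float;      (* unix time after which it's dead *)
  mutable budget : int; (* remaining requests before it's dead *)
}

let keys : (string, proxy_key) Hashtbl.t = Hashtbl.create 16

(* mint a key with a small ttl and request budget *)
let mint ~ttl ~budget =
  let value = Printf.sprintf "pk-%08x" (Random.bits ()) in
  let k = { value; expires = Unix.gettimeofday () +. ttl; budget } in
  Hashtbl.replace keys value k;
  k

(* the proxy checks every request: expired or exhausted keys are refused,
   so a leaked config file only burns a small, observable allowance *)
let authorize value =
  match Hashtbl.find_opt keys value with
  | Some k when Unix.gettimeofday () < k.expires && k.budget > 0 ->
    k.budget <- k.budget - 1;
    true
  | _ -> false

let () =
  Random.self_init ();
  let k = mint ~ttl:300.0 ~budget:100 in
  Printf.printf "minted %s -> authorized: %b\n" k.value (authorize k.value)
```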
as long as you can chop things into little pieces and ensure short-lived and granular access to valuable data you can 100% run totally insecure and buggy code reliably and get value from it
it's engineering and understanding security from first principles [and a culture around it - that _is_ the HARD af bit though...] instead of just believing in "secure app best practices" from the "holy scriptures" - secure apps are hackable, and insecure apps can be unhackable, heck even mil systems run on unpatched old software everywhere, they're just properly insulated; the components are insecure but the system as a whole can be perfectly secure
ffs, u get the point... "under threat models x, z & q that are considered for scenarios ..."
anything deployed is hackable ofc, question is just the profit/risk ratio a business tolerates/prefers, and what backup plans exist to "reboot" after fatal incidents
nothing's perfect in the real world but most things are survivable
reducing all risk is the same as reducing all opportunity for profit - and in a much truer sense than it seems ...as you also reduce the adversary's risk to profit from you, so essentially by pursuing too-low risk you head towards negative-sum games (as security has costs) that on average we all lose from playing
This is where curation matters, eg in a newsroom or gallery. Provenance is their job, and if done well, can connect people in a way that an unfiltered social media firehose can't.
Yea fair enough, I’m hoping I can encourage the folks in my life that are not adept at telling truth from fiction to just cut out looking at any social media firehose.
It’s so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people’s feeds while they’re checking in on their nieces and nephews and local book clubs.
> That seems pretty hard to read at a glance, and easy to mistype as a definition.
YMMV but let expressions are one of the nice things about OCaml - the syntax is very clean in a way other languages aren't. Yes, the OCaml syntax has some warts, but let bindings aren't one of them.
It's also quite elegant if you consider how multi-argument let can be decomposed into repeated function application, and how that naturally leads to features like currying.
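For example (standard OCaml, made-up names):

```ocaml
(* a "multi-argument" let... *)
let add x y = x + y

(* ...is sugar for nested single-argument functions *)
let add' = fun x -> fun y -> x + y

(* which is why partial application falls out for free *)
let add_three = add 3
let () = Printf.printf "%d\n" (add_three 4) (* prints 7 *)
```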
> Also, you need to end the declaration with `in`?
Not if it's a top level declaration.
It might make more sense if you think of the `in` as a scope operator, eg `let x = v in expr` makes `x` available in `expr`.
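A small illustration (made-up bindings):

```ocaml
(* pi and r are only in scope in the expression after their `in` *)
let area =
  let pi = 3.14159 in
  let r = 2.0 in
  pi *. r *. r
(* pi and r are not visible here; area is a top-level binding, so no `in` *)
```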
> Then, semicolons...
Single semicolons are syntactic sugar for sequencing unit-valued expressions, eg these two are equivalent:
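```ocaml
(* these two definitions are equivalent *)
let f () = print_string "hello "; print_endline "world"

let g () =
  let () = print_string "hello " in
  print_endline "world"
```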
Linear is actually so slow for me that I dread having to go into it and do stuff. I don’t care if the ticket takes 500ms to load, just give me the ticket and not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.
Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.
This seems to be endemic to the space though, eg Asana tried to invent their own language at one point.
Yeah, their startup times aren’t great. They’re making a trade-off by loading a ton of data up front, though to be fair a lot of the local-first web tooling didn’t really exist when they were founded. The nascent Zero Sync framework’s example project is literally a Linear clone that they use as their actual bug tracker; it loads way faster and has similarly snappy performance, so it seems clear that it can be done better.
That said at this point Linear has more strengths than just interaction speed, mainly around well thought out integrations.
I hate to be a hacker news poster who responds to a positive post with negativity, but I was also surprised at the praise in the article.
I don’t find Linear to be all that quick, but apparently macOS thinks it’s a resource hog (or has memory leaks). I leave Linear open and it perpetually has a banner telling me it was killed and restarted because it was using too much memory. That likely colors my experience.
> It’s worth noting that DoH (DNS-over-HTTPS) traffic remained relatively stable as most DoH users use the domain cloudflare-dns.com, configured manually or through their browser, to access the public DNS resolver, rather than by IP address.
Interesting, I was affected by this yesterday. My router (supposedly) had Cloudflare DoH enabled but nothing would resolve. Changing the DNS server to 8.8.8.8 fixed the issues.
I disagree. The actual root cause here is shrouded in jargon that even experienced admins such as myself have to struggle to parse.
It’s corporate newspeak. “legacy” isn’t a clear term, it’s used to abstract and obfuscate.
> Legacy components do not leverage a gradual, staged deployment methodology. Cloudflare will deprecate these systems which enables modern progressive and health mediated deployment processes to provide earlier indication in a staged manner and rollback accordingly.
I know what this means, but there’s absolutely no reason for it to be written in this inscrutable corporatese.
I disagree, the target audience also includes less technical people, and the gist is clear to everyone: they just deploy this config from 0 to 100% in production, without feature gates or rollback. And they made changes to the config that weren’t deployed for weeks until some other change was made, which also smells like a process error.
I will not say whether or not it’s acceptable for a company of their size and maturity, but it’s definitely not hidden in corporate lingo.
I do believe they could have elaborated more on the follow-up steps they will take to prevent this from happening again; I don’t think staggered rollouts are the only answer to this, they’re just a safety net.
If you carry on reading, it's quite obvious they misconfigured a service and routed production traffic to that instead of the correct service, and the system used to do that was built in 2018 and is considered legacy (probably because you can easily deploy bad configs). Given that, I wouldn't say the summary is "inscrutable corporatese", whatever that is.
It's carefully written so my boss's boss thinks he understands it, and that we cannot possibly have that problem because we obviously don't have any "legacy components" because we are "modern and progressive".
It is, in my opinion, closer to "intentionally misleading corporatese".
Joe Shmo committed the wrong config file to production. Innocent mistake. Sally caught it in 30 seconds. We were back up inside 2 minutes. Sent Joe to the margarita shop to recover his shattered nerves. Kid deserves a raise. Etc.
Yeah, your operating system will first need to resolve cloudflare-dns.com. This initial resolution will likely occur unencrypted via the network's default DNS. Only then will your system query the resolved address for its DoH requests.
Note that this introduces one query overhead per DNS request if the previous cache has expired. For this reason, I've been using https://1.1.1.1/dns-query instead.
In theory, this should eliminate that overhead. Your operating system can validate the IP address of the DNS response by using the Subject Alternative Name (SAN) field within the TLS certificate presented by the DoH server: https://g.co/gemini/share/40af4514cb6e
"In principle, there’s no reason that a certificate couldn’t be issued for an IP address rather than a domain name, and in fact the technical and policy standards for certificates have always allowed this, with a handful of certificate authorities offering this service on a small scale."
right, this was announced about two weeks ago to some fanfare.
So in principle there was no reason not to do it two decades ago? It would've been nice back then. I never heard of any certificate authority offering that.
At the beginning of HTTPS you were supposed to look for the padlock to prove it was a safe site. Scammers wouldn’t take the time and money to get a cert, after all!
So certs were often tied to identity, which an IP really isn’t, so few providers offered them.
An IP is about as much of an identity as a domain is.
There are two main reasons IP certificates were not widely used in the past:
- Before the SAN extension, there was just the CN, and there's only one CN per certificate. It would generally be a waste to set your only CN to a single IP address (or spend more money on more certs and the infrastructure to maintain them). A domain can resolve to multiple IPs, which can also be changed over time; users usually want to go to e.g. microsoft.com, not whatever IP that currently resolves to. We've had SANs for a while now, so this limitation is gone.
- Domain validation (serve this random DNS record) involves ordinary forward-lookup records under your domain. Trying to validate IP addresses over DNS would involve adding records to the reverse-lookup in-addr.arpa domain which varies in difficulty from annoying (you work for a large org that owns its own /8, /16, or /24) to impossible (you lease out a small number of unrelated IPs from a bottom-dollar ISP). IP addresses are much more doable now thanks to HTTP validation (serve this random page on port 80), but that was an unnecessary/unsupported modality before.
Nope. That is not correct. https://1.1.1.1/dns-query is a perfectly valid DoH resolver address I've been using for months.
Your operating system can validate the IP address of the DNS response by using the Subject Alternative Name (SAN) field within the TLS certificate presented by the DoH server: https://g.co/gemini/share/40af4514cb6e
Pretty much that. You set up a bootstrap DNS server (could be your ISP's or any other server) which resolves the IP of the DoH server, which can then be used for all future requests.
Firefox accepts a bootstrap IP, or uses the system resolver:
> network.trr.bootstrapAddress
> (default: none) by setting this field to the IP address of the host name used in "network.trr.uri", you can bypass using the system native resolver for it. Use this to get the IPs of the cloudflare server: https://dns.google/query?name=mozilla.cloudflare-dns.com
> Starting with Firefox 74 setting the bootstrap address is no longer required in mode 3. Firefox will attempt to use regular DNS in order to get the IP address of the trusted resolver. However, if DNS resolution of the resolver domain fails, setting the bootstrap address is again necessary.
Funny. I was configuring a new domain today, and for about 20 minutes I could only reach it through Firefox on one laptop. Google's DNS tools showed it active. An Amazon server I SSH'd into could resolve it. My local network had no idea of it. Flushed caches and all. Turns out I had that one FF browser set up to use Cloudflare's DoH.
My (Unifi) router is set to automatic DoH, and I think that means it's using Cloudflare and Google. Didn't notice any disruptions so either the Cloudflare DoH kept working or it used the Google one while it was down.
Yes, this exactly - I wouldn't call it nitpicky, it is really buried in there. I understand Cloudflare has a ton of other products and features, but the discoverability for CF Tunnels really could be better.
Just checked and it's:
Dashboard home > Zero Trust > Networks > Tunnels > [tunnel] > Public Hostname
And if it ends up provisioning a new DNS record, I always have to remember to go back to the domain's DNS screen and label it with the tunnel.
In general I use a tiny sliver of Cloudflare's capabilities; it would be nice if the primary dashboard could bubble up the parts that I do use.
I can't say anything about the specifics of this treatment, but in terms of their ability to fully benefit from hearing, it would depend on when they became deaf, and the severity of their deafness.
If they were born deaf, or lost hearing as a young child during the language development stage, then it would probably be a long adjustment. Things would just be noise and it would take a lot of training to distinguish sounds, speech, etc. And unlike a cochlear implant, you couldn't just take it off to give your brain a rest.
If they had hearing loss later in life, or some residual hearing, then they probably have a better chance of re-adjusting to hearing.