Yeah, that's tech for you! E4X was an ECMAScript standard, but Mozilla dropped it in 2014; then FB picked it up and made JSX, and now that is the de facto way of doing "Front-End" development.


Would also like to hear from DDG people. On another note, I really appreciate DDG's map functionality using Apple Maps, with the iaxm query parameter set to "maps" and q set to whatever you want to search for:

https://duckduckgo.com/?q=Africa&iaxm=maps

It is refreshing to be able to use a usable map on Tor without having to accept Google's big terms of service.


>As your system gets bigger, and your teams get bigger, the value of a typed schema has a multiplicative effect on both reliability and productivity.

I often see this claim made in the TypeScript vs plain JS debate. Is this really true in practice? It would be good to get some more examples of this.
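
To make the claim concrete, here's a minimal TypeScript sketch of the kind of thing people usually mean (the Order/sendInvoice names are invented purely for illustration): with a shared typed schema, renaming a field breaks every consumer at compile time instead of at runtime.

    // Shared schema that several teams import.
    interface Order {
      id: string;
      totalCents: number;      // rename this and every caller breaks at compile time
      customerEmail: string;
    }

    function sendInvoice(order: Order): string {
      // The compiler guarantees these fields exist and have the right types.
      return `Invoice for ${order.customerEmail}: $${(order.totalCents / 100).toFixed(2)}`;
    }

    // sendInvoice({ id: "o1", total: 5000, customerEmail: "a@b.c" }); // compile error
    sendInvoice({ id: "o1", totalCents: 5000, customerEmail: "a@b.c" });  // ok

In plain JS the commented-out call would only fail (or silently print "NaN") when that code path actually ran.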


It's extremely well documented


Web pages do NOT need cookies or JavaScript, period.

And if you insist that your users must use JavaScript to view your web site, then a cookie is not even needed: one can do some fingerprinting, and there is your permanent cookie!

This development of the web has made many tech-savvy people stop using the internet on their phones and switch back to old dumb phones (and this comes from someone who has been promoting JS since before Node.js came along). Please bring back web pages that do not require cookies or JavaScript to function; a web page that can work in a terminal browser like lynx or links2 is the base standard for a web page. JS and cookies are just icing on the cake, not needed.


Web pages don't need javascript? Get real. You have to use javascript for your web app if you want to deliver a reasonable product experience. This is a dead argument I wish people would just stop making. The ship sailed over a decade ago.


You can have a really, really good experience without using JavaScript, but it does have very genuine uses.

The vast majority of the web really honestly doesn't need it though, like this website.


Without cookies - and knowing that HTTP is a stateless protocol - how do you suggest we implement a solution that allows me to connect to my bank to view my account balance and make a wire transfer from the comfort of my home?


The way we did that kind of thing before cookies was by encoding the state in the URL (for GET requests) or hidden fields (for POST). Higher-level Web frameworks at the time would often abstract this away from the app code while allowing the app to be deployed in either mode. Here's an example for ASP.NET: https://docs.microsoft.com/en-us/dotnet/api/system.web.confi...
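
For anyone who hasn't seen the pattern, here's a rough hidden-field sketch in TypeScript on Node's built-in http module (the counter example and names are invented for illustration); the per-client state rides along in the page itself rather than in a cookie:

    import { createServer } from "node:http";

    // The "session" state is just a counter embedded in a hidden form field.
    const page = (count: number) => `
      <form method="POST" action="/">
        <p>You have clicked ${count} times.</p>
        <input type="hidden" name="count" value="${count}">
        <button type="submit">Click</button>
      </form>`;

    createServer((req, res) => {
      if (req.method === "POST") {
        let body = "";
        req.on("data", chunk => { body += chunk; });
        req.on("end", () => {
          // Read the state back out of the hidden field and advance it.
          const count = Number(new URLSearchParams(body).get("count") ?? "0") + 1;
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(page(count));
        });
      } else {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(page(0));
      }
    }).listen(3000);

Frameworks like ASP.NET also signed that blob (ViewState) so the client couldn't tamper with it, which a real implementation would need as well.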


If you encode an auth token in the URL, then a shared URL, accidental or otherwise, means being authenticated. There is a lot of existing infrastructure that assumes the URL is public knowledge while cookies are not.

If you do this through hidden forms, then page navigation can no longer be done through hyperlinks and must now all be form submissions, which means a malfunctioning back button and getting logged out when refreshing or opening a link in a new tab.

Please do not do this.

First-party cookies are very useful, and it's bad enough that people keep trying to replace them with JavaScript + localStorage despite the decades of security best practices that have been built into them.

I do agree that we can do away with third party cookies however.


FWIW I'm not suggesting that people do any of this today. But it is how it was actually done - it wasn't the case that advanced web apps that required per-client state management weren't possible at all without cookies.


Exactly! Web developers in the late 90s did that and it worked fine. Yeah, it is a bit more of a pain to use; however, it keeps the web stateless, which is what it is supposed to be. If you want to make your fancy stateful desktop-style programs, do it with something that is not document-based like HTML.


It didn't keep the web stateless, though. It implemented state on top of what we had at the time, with certain flaws that others have already pointed out in this thread.


I worked on frameworks like this. Heck, I built a (horrible, internal-only) one once.

The issue with this was that it broke direct linking to pages, or created such ugly and long URLs that they weren't "sharing" friendly.

It was horrible, and cookies were (and remain) a much better solution.


You could do that using TLS client certificates, but they have terrible UX that browser vendors are only making worse over time.

There are also other auth mechanisms natively supported by browsers that can be used, like HTTP Basic Auth.


Basic authentication?


That is fine. It is how users are supposed to log in to an authenticated area.


I'd rather not throw the cool internet baby out with the ad tech bathwater, thanks.


I always wonder what specifically marked the turning point for web usability. Too many dark UX patterns, popups, ads and extremely heavy frameworks for extremely simple sites.

I feel that we got to a point where the community put out a couple of recipes for deploying websites and everyone just jumped on them irrespective of the problem, mimicked the same patterns (email newsletter popups, ads, Google Analytics, 30 million external assets, etc.) and called it a day.

While I think your opinion is a bit extreme, I agree that there are so many things that we don’t need, but companies are hiring and new devs are copying the recipes.


The web is SO much more usable now than at any time earlier.

For example, browsers used to actually allow pop-up windows (as in it would open a new desktop window (not a tab), sometimes off-screen). And then when you closed it the browser would let it spawn more pages.

There was a common pattern of spamming popup windows with a "close" button in the same place a "run" button would be when you downloaded an executable in Windows 95.

And then after closing 10 windows with the button in the same place they'd hit you with an executable download.

This was common.


I agree and I remember those days. However, I think there is a distinction between browser improvements vs web usability.

I’m moreso talking about _how_ people design sites for the web and how poor design choices lead to poor usability. I think the proliferation of tools that require not too much in-depth knowledge (react), heavy CSS frameworks and large client side JS libs have ruined a lot of the web surfing I used to enjoy


> I always wonder what specifically marked the turning point for web usability. Too many dark UX patterns, popups, ads and extremely heavy frameworks for extremely simple sites.

It's funny that you include popups in that, because actual popups are almost nonexistent today, whereas they were ubiquitous in the early 2000s web.


Cookies are actually incredibly valuable as a place to store web auth tokens where JavaScript cannot get access to your valuable user information.

No matter what happens, if I store my JWT in a no-JavaScript (HttpOnly) cookie, it's safe. Nowhere else on the web is safe in that way.

I also feel like it's a mistake to tell people to use fingerprinting instead of cookies; since users actually have control of cookies, it's almost always better for them if we use them instead of fingerprinting.


Not just HTTP-only cookies (no JS), but also the control for 'Secure' (HTTPS only) and 'SameSite' for CSRF blocking. Not using cookies and storing your auth tokens in other places is a rookie mistake.
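
For concreteness, roughly what that header looks like from a Node server (the token value is just a placeholder):

    import { createServer } from "node:http";

    createServer((req, res) => {
      // HttpOnly: scripts can't read it; Secure: sent over HTTPS only;
      // SameSite=Strict: not attached to cross-site requests (CSRF protection).
      res.setHeader(
        "Set-Cookie",
        "session=abc123; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=3600"
      );
      res.end("logged in");
    }).listen(3000);

localStorage has no equivalent of those three flags, which is exactly what you give up by storing tokens there.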

That said, 3rd-party cookies should be blocked by default. IDPs and other exceptional cases can request permission or use one-time query param hashes to exist without them.


Outside of fraud and security mitigation I have yet to see a legitimate use case for fingerprinting.

I very much do want the bank to fingerprint the hell out of my devices; the front page of my local newspaper? Nope.


How about using browser-supported auth mechanisms instead of manually reimplementing auth using cookies? There is HTTP basic auth, or TLS client certs and probably more.
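
As a bare-bones sketch of what the Basic Auth flow looks like server-side (Node; the hard-coded credentials are purely for illustration):

    import { createServer } from "node:http";

    createServer((req, res) => {
      // The browser shows its own login dialog after we answer 401 with
      // WWW-Authenticate, then resends "Authorization: Basic base64(user:pass)".
      const header = req.headers.authorization ?? "";
      const [user, pass] = Buffer.from(header.split(" ")[1] ?? "", "base64")
        .toString()
        .split(":");
      if (user === "alice" && pass === "secret") {  // illustration only
        res.end(`hello ${user}`);
      } else {
        res.writeHead(401, { "WWW-Authenticate": 'Basic realm="example"' });
        res.end();
      }
    }).listen(3000);

The catch, as others note below, is that the browser's own dialog can't be styled and there is no clean logout.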


Basic Auth with digests has some issues, but both that and client certs mostly fail because of poor browser implementation.

Client certs would really be ideal if browsers handled them better and sync'd the certs between devices (like bookmarks), but I guess that still wouldn't solve the signing in from a new/different (non-synced) device.


You could offer one-time rescue codes (like the ones used when people have TOTP auth) to let people add new certs on new devices.


Because both of them work terribly.

I've used basic auth in production. It has terrible user experience, and unfixable issues and limitations.

I wish browser-integrated logins had taken off, with Persona, but that was unfortunately abandoned.


Why are browser vendors so terrible at auth stuff :(


With JavaScript there are some serious discussions to be had, but cookies are a very nice way of adding state to HTTP.


Interesting, I do want more information on this. Would someone who works for Mozilla care to comment on this? If this is true, then I would rather use Brave or an ungoogled Chromium build than Firefox.


The big advantage of using Firefox that you don't get with Brave or UnGoogled Chromium is that you are supporting browser engine diversity.


To what end and through what means, though? Do you really think Mozilla would never do anything nefarious with the data that they collect? They would never, ever, swear on their mother's grave, sell that data to a third party, is that what you believe?


You're replying to a comment about browser engine diversity, which is largely decoupled from privacy.


No, it's not true, anyone can opt out of telemetry at any time via about:preferences, you don't even need to go to about:config https://support.mozilla.org/en-US/kb/telemetry-clientid


However, the fact that it is NOT using V8 is quite a refreshing change. We need diversity in the JS ecosystem beyond Google's V8. I didn't realize JavaScriptCore is faster than V8.


To be fair, they only said it starts faster than V8; nothing was said about runtime performance.


This is the same weak argument that's being used to sell native compilation on Java. Tons of graphics showing startup time improvement, never a word on what happens after. But the only cases where this metric makes a real difference are CLI utilities and Serverless (if you know others, please tell me)


That’s not true, as far as I've read. I feel it is spelled out pretty thoroughly: Startup time and memory footprint are way lower, but peak performance is also lower: JIT outperforms static compilation due to it having more information. There are some tools to gather such information by running the program for a while, gathering stats, but JIT still outperforms.


Careful there. The JavaScript ecosystem is nothing if not diverse...


Yeah, what happens if, a few years down the road, the company decides to close the free accounts? To me, for open source projects we need a way to archive the discussion and make it public, not tied to any company.


> Yeah, what happens if, a few years down the road, the company decides to close the free accounts?

I guess they'll move on somewhere else? Likelihood seems low, impact seems low. Why spend energy on it?

> To me, for open source projects we need a way to archive the discussion and make it public, not tied to any company.

Ok but that's what you're interested in. Most people aren't into that.

I don't get why you'd expect all projects to be focused on your particular hobby interests?

Maybe I love typography. Why isn't this project paying more attention to the typography in their website damnit!


>I guess they'll move on somewhere else? Likelihood seems low, impact seems low. Why spend energy on it?

Many services don't really support migration of the data. Why would I want my data stuck somewhere?


> Why would I want my data stuck somewhere?

Nobody wants their data stuck - people assess the likelihood of that as low and the impact as low, so rationally don’t care about it.

If one of my personal projects was unilaterally deleted right now by GitHub it’d be annoying to lose my issues but I could recover ok. And I don’t think it’s likely anyway, so why worry?

People only have so much energy to spend worrying about things. Most would spend it building instead.

Can’t be that hard to understand?


Or instead use specialised communities that do moderation, like PeerTube or Odysee.


Bingo, we are just a bunch of bacteria walking around. Some organisms became symbiotic over time, like mitochondria, which got incorporated into our cells to make energy. The whole concept that we as human beings are one organism needs to be revisited.


This is already understood, it's not like scientists aren't aware.

The simplified concept of humans being one organism will continue to be taught nonetheless, because it's extremely useful, and not even wrong in most contexts in which it's applied.

There's nothing special about this, you can say the same thing about any simple model in biology: that the brain is an organ inside the head is a simplification, any diagram of a homeostatic system or metabolic pathway that fits on a single page is a simplification, the "central dogma" of DNA -> RNA -> protein is a simplification...

All of these things are well-known. They continue to be used as models, because, well... they're useful models. And there's nothing wrong with that!


Even more generally, reductionism works but with known limitations which warrant a more holistic approach, and we can't work our way out of most real-world problems without this multi-layered approach. Welcome to empiricism in complex systems.


All I want is something that is NOT owned by a big corp, especially NOT MS, Google or Amazon.


What happens if/when the new site becomes a "big corp"?

How "big" is "too big" to you?

