Every time you request something (a web page, an image, a CSS file, etc.) from a server, your browser sends any cookie data previously set by that server as part of the request headers.
Cookies can be set for specific subdomains only, but if they are set for the bare ("unprefixed") domain they will also be sent for all subdomains (that's just how browsers handle cookies).
Since cookie data is rarely needed just to serve static assets (images, CSS files, etc.), you can shave off some time on each request if you serve them from a subdomain that is different from the web page's subdomain... but of course your web page has to actually be at a subdomain (e.g. www.example.com) in order for this to work.
Hence, setting up your main web pages to be at a subdomain (such as www.) gives you the ability to then serve static assets from different subdomains without browsers having to send cookie data on each request.
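To put rough numbers on it (a hypothetical sketch; the cookie string and asset count below are made up, not measured):

```python
# Hypothetical example: the upload cost of attaching a cookie header to every
# static-asset request on a single page view. All numbers are assumptions.
COOKIE_HEADER = "Cookie: session=abc123; prefs=dark; _ga=GA1.2.123456789\r\n"
ASSETS_PER_PAGE = 40  # images, CSS, JS requested by a typical page (assumed)

overhead = len(COOKIE_HEADER.encode()) * ASSETS_PER_PAGE
print(f"~{overhead} extra upload bytes per page view")
```

Serving those 40 assets from a cookie-free subdomain drops that overhead to zero, which matters most on slow upstream links.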
Cookies are scoped to a domain name and all subdomains under that domain name. For example, if you set a cookie on `ycombinator.com` that cookie will be presented to `news.ycombinator.com` as well.
Using `www.example.com` as your cookied domain allows you to avoid sending your cookies to `cdn.example.com`, shaving off a few bytes of incoming bandwidth per request. Whereas if you use `example.com` as your cookied domain, those cookies will be passed to `cdn.example.com`. To avoid that, you'd have to set up your CDN on a completely different domain like `examplecdn.com`.
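The matching rule being described can be sketched as a small function (a simplified version of the RFC 6265 domain-match algorithm, ignoring host-only cookies and the public-suffix list):

```python
def domain_matches(host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 domain matching: a cookie whose Domain attribute
    is example.com is sent to example.com and to every subdomain of it."""
    host = host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    return host == cookie_domain or host.endswith("." + cookie_domain)
```

So a cookie set on `ycombinator.com` matches `news.ycombinator.com`, while a cookie scoped to `www.example.com` never matches `cdn.example.com`.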
Your comments in this thread break the HN guidelines by being uncivil. We ban accounts that do this, so please (re-)read the site guidelines and only post civilly and substantively from now on: https://news.ycombinator.com/newsguidelines.html
regarding the architecture: it basically is (or is very much like) event sourcing, right? i mean, kind of like an append-only log that will grow forever.
Yes, that's right, at least for the source of truth on the server. Clients automatically rewrite or squash their local history a little before pushing it to the server, both to minimize log length and to avoid showing intermediate editing states that were never communicated to anyone else (much like you might squash git commits). Every important state is preserved in the log.
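The client-side squash described above could look something like this (a generic sketch of the idea, not this project's actual code; the `Event` shape is assumed):

```python
from dataclasses import dataclass

@dataclass
class Event:
    entity: str   # e.g. an issue id
    field: str    # e.g. "title"
    value: str

def squash(pending: list[Event]) -> list[Event]:
    # Keep only the last write per (entity, field): keystroke-level
    # intermediate states that nobody else ever saw are dropped
    # before the batch is pushed to the server.
    latest: dict[tuple[str, str], Event] = {}
    for ev in pending:
        latest[(ev.entity, ev.field)] = ev
    return list(latest.values())
```

Events that reached other clients would be excluded from `pending`, so anything already communicated stays in the log verbatim.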
> Yes, that's right, at least at the source of truth on the server.
I assume the server holds human-readable issue IDs, but clients then don't (until the issues are pushed to the server).
Do you just assign random GUIDs to issues on the client and then assign sequential issue numbers once they reach the server? If so, and I enter two issues on the client side with a reference from one to the other, how do I enter the cross-reference before I have a server-issued ID for either issue?
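One common way offline-first systems resolve this (not necessarily what this project does): the client-minted UUID is the permanent identity and all cross-references use it, while the sequential number is purely a display label assigned later. A sketch, with all names assumed:

```python
import itertools
import uuid

def new_issue(title: str) -> dict:
    # The client-minted UUID is the canonical identity; the human-readable
    # "number" stays None until the server assigns one on push.
    return {"uuid": str(uuid.uuid4()), "title": title, "refs": [], "number": None}

a = new_issue("Crash on startup")
b = new_issue("Logging bug")
b["refs"].append(a["uuid"])      # cross-reference works before any sync

_counter = itertools.count(1)    # server-side sequence (hypothetical)
def server_assign(issue: dict) -> None:
    issue["number"] = next(_counter)

for issue in (a, b):
    server_assign(issue)
# b's reference still resolves: look up a by its UUID, then render it as "#1"
```

The UI then rewrites UUID references to "#1"-style labels at render time, so nothing stored ever depends on the server-issued number.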
ok tnx!
as a dev myself, i'm still not convinced about this whole append-only/time-machine approach (i'm talking more about event sourcing in general, not your specific architecture). it smells too much of architecture astronautics (see http://www.joelonsoftware.com/articles/fog0000000018.html and http://www.joelonsoftware.com/items/2008/05/01.html).
it feels like this incredibly heavy monster of a system that theoretically lets you do anything, but in practice, at the end of the day, ends up too impractical to actually use.
again, not necessarily yours.
Except for the original game, which used procedurally generated galaxies, and didn't feature Earth. This was the only way to get the game to fit into the 32K RAM of the original BBC Micro.