Hacker News | code_sloth's comments

Do you expect all memes and jokes to be absolutely harmless and 100% politically correct for every nation/culture/race/gender/<group>?


When doing technical or business work - YES

If it's for entertainment, then things can be unsettling or tuned to one group.


I really do not want to work with the person(s) who downvoted this. These sorts of 'jokes' are part of what makes a hostile work environment.


Luckily, this is an internet forum and you don't actually need to work with that person.


> I really do not want to work with the person(s) who downvoted this. These sorts of 'jokes' are part of what makes a hostile work environment.

So does oversensitivity and overreactions.


You might ask people in minority groups whether this is an overreaction to "banter".

Being a white male, I have not been on the receiving end of this myself.


I expect memes not to be blatantly racist, particularly when they are attached to a technical article.


It's only racist if you are a racist. You brought up skin color where it doesn't matter. Work on yourself.


You might be genetically blessed, others might be genetically cursed.

Some are genetically predisposed to becoming fat. It's still a fact that they are fat. And being fat comes with associated health risks.


Right, sure. But we don't make women who are genetically pre-disposed to have breast cancer feel bad for being genetically pre-disposed to breast cancer. We don't call their bodies disgusting because of that risk, do we? Do we say that it's their fault? That they're lazy, have no self-control? That the "breast cancer pre-disposition acceptance movement" is wokeness out of control?


The genetic cause doesn't explain why the obesity rate went from 10% to 60% in one generation. Fat people didn't have that many more kids.


It's likely the result of genetic, epigenetic, cultural, economic, and food system factors combined.

As for how it went from 10% to 60% (your numbers, idk if they're right) one piece of the puzzle is that in 1998 the definition of obesity was changed, which made MANY more people fall into the obese category who did not previously: http://www.cnn.com/HEALTH/9806/17/weight.guidelines/


Definitions aside, look at the video from Woodstock '69: not even one fat person. Look at a video of a current event in the US.


Ahh, for sure, that’s a demographically representative sample of the American population. /s

Yes, people have gotten fatter. That’s a thing that’s happening. To me that’s evidence of systemic problems in our food supply and economic factors like poverty, none of which are a reason to treat someone poorly for being impacted by them.


According to that article, fully 50% of Americans were already considered obese at the time.


This seems like a slow death in the long term to me.

1. Customers that balk at this cost will switch to other cheaper/free vendors.

2. Oracle imposes even more onerous costs to maintain revenue.

and repeat.

At some point either Oracle gives up, or they attempt to extract revenue from vendors as well. The latter will probably result in some kind of fork.


The customer base is very large companies who find having someone to sue reassuring, and who are likely to be institutionally incapable of running their own fork of Java. Seems fine to me.


Well, there is also the fact that the OpenJDK codebase is huge, so even though it is quite well written and documented, fixing a bug in the JDK (and especially a bug in HotSpot) isn't easy. Although tbh I have no idea how the quality of support is with Oracle.


Both of my last 2 large bank gigs (kind of the last places you'd expect cutting edge tech) were going all in on Kotlin. New projects were Kotlin only, and there was active work on sunsetting/migrating Java applications towards Kotlin. None of these were Android applications.

Sure, this is anecdotal. But I'd say the same of Java's dominance in the JVM space. Java's continued dominance is not a sure thing from my vantage point.


JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

Groovy was all the adoption rage across German JUGs back in 2010, then everyone was going to rewrite the JVM in Scala, or was it Clojure?

Now a couple of places are adopting Kotlin outside Android; nice, they'll eventually migrate back in about 5 years' time.

https://trends.google.com/trends/explore?q=%2Fm%2F07sbkfb,%2...


> JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

This is less relevant today. The host-blessed languages do have an advantage, but I would not say it is insurmountable. It might have been the case in the past, but the modern JVM is a platform; it is no longer a glorified Java language interpreter.

> Now a couple of places are adopting Kotlin outside Android, nice, eventually will migrate back in about 5 years time.

Maybe. Maybe not. Most developers I've talked to who have experienced the transition do not want to go back to Java.

This isn't to say Java will die. It will continue to thrive. But Java dominance (on the JVM or as a whole) isn't a sure thing anymore.


> Do you really want to copy from a development computer to your production? ...

> Are you really sure that everything works exactly the same on different versions of GoLang? ...

He mentions he does have a build server which runs a 10-line shell script to download the code and build the binary.

Builds happen on that server, and I assume it handles deploying the compiled binary (and systemd unit?) to the target as well.

The build server would also have a "blessed" golang version. New-guy code that uses new, not-yet-blessed features would not compile.
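As a hedged sketch (the article's actual script isn't shown; the repo URL, host names, and paths below are invented for illustration), such a script might look like:

```shell
# Hypothetical sketch of the ~10-line build-and-deploy script described
# above; repo URL, host names, and paths are made up.
cat > build.sh <<'EOF'
#!/bin/sh
set -e
rm -rf /tmp/build
git clone --depth 1 https://example.com/git/myapp.git /tmp/build
cd /tmp/build
go build -o myapp .        # compiled with the server's "blessed" Go version
scp myapp deploy@prod-host:/opt/myapp/myapp.new
ssh deploy@prod-host 'mv /opt/myapp/myapp.new /opt/myapp/myapp && systemctl restart myapp'
EOF
sh -n build.sh && echo "build.sh parses"
```

The deploy step here is just one plausible shape; the point is that the blessed toolchain lives on one machine.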

> That VM with systemd died at 1am...

If your Docker host dies, all your containers die along with it. Docker alone cannot solve this category of issues anyway.


But you add k8s now, so that three hosts can orchestrate three other hosts. Success!


The infrastructure layer is expanding, to meet the needs of the expanding infrastructure layer


Devops is the new self-justifying expanding bureaucracy of the tech world.


k8s isn't the only option, Nomad is a much better fit here.


Same thing is achievable with HA VMs.


And that would be much easier to implement and understand.

We seem to be obsessed with producing the most complex machines we can instead of creating simple, understandable machines.


Now you have 3 problems.

It's not solving "your system went down", it's adding more layers of system that can go down.


Except now your system can handle the inevitable server going down without taking the entire site offline. It DOES solve the problem of a single host failure causing an outage. Yes, there are other types of outages you can have, but it certainly does reduce the occurrence of outages significantly.

Are you really trying to suggest that people can't use Kubernetes to increase their reliability?


Yeah, I guess I am. It's adding whole layers of complexity and configuration to the system. I understand that those layers of complexity and configuration are designed to make the system more resilient, but it depends on everyone getting everything right all the time. The "screw-up surface" is huge.


Ever seen a large system that has its own job server and scripts for orchestration/deployment? Application code that checks the status of its peers and runtime env to determine what should run? All glued together with decades-old perl and bash with no documentation.

I'll take "more configuration in yaml" over that.


That's not a 1:1 comparison, though.

Leave your nice clean K8s deployment paradise to cruft up for decades, and will it be any better? I doubt it - there'll be old Dockerfiles and weird bits of yaml that shouldn't work but do, and upgrading a version of anything will break random things.

So yes, I think I would prefer the decades of crufty perl and bash to decades of crufty outdated yaml. At least the bash scripts have a hope of doing what they say they do, and are likely to still execute as intended.


Hum... No, Kubernetes is not an HA solution.

One can certainly create an HA cluster over some infrastructure set up by kubernetes, just as well as one can take a bunch of physical servers, set them up by hand, and create an HA cluster with them. K8s isn't adding anything to the availability.


> Docker alone cannot solve this category of issues anyway.

Docker does come with an orchestrator out of the box; it's called Docker Swarm. You may not use it, but it's there, and it's up to you whether to use it or not. It's extremely simple to set up: a single command on the manager and another one on each worker. It supports health checks, replication, etc., all super simple to set up too.

Sure, doing all this will take, what, 30 minutes instead of the 5 he took for his deployment? But it does solve that issue, natively, out of the box.

Oh, and my Docker image always has the "blessed" [insert environment here] version, so everyone always uses it while testing locally. If you need to update it, anyone can do it easily, without any knowledge of the build server environment or any special access to it.
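For illustration only (service and image names are made up), the replication and health-check bits map to a Swarm stack file like:

```yaml
# docker-stack.yml -- hypothetical example; deploy with:
#   docker swarm init                              (once, on the manager)
#   docker stack deploy -c docker-stack.yml myapp
version: "3.8"
services:
  web:
    image: myorg/myapp:latest        # placeholder image name
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      retries: 3
    deploy:
      replicas: 3                    # Swarm reschedules replicas if a node dies
      restart_policy:
        condition: on-failure
```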


If you judge the coding style of the early 1970s by modern standards, it isn't going to look great. 50 years is a loooong time.

dmr was one of the first C programmers, if not the first.


And he was almost certainly using ed(1) as his editor and a mechanical teletype at 7.5 or 10.0 characters per second as his terminal...

The C language (and all of Unix) was designed to be very terse as a consequence.


I try ed(1) once in a while and I find it pretty usable (and even rather efficient if you know what you are doing). cat(1) also works if you just want to type new text.

Speaking of terseness, I love the fact that C does not have 'fn'.


We used to speak of a great Unix systems programmer as someone who could write device drivers with cat and have them compile and run the first time.


Before I look up `man cat`, what can you do with `cat` other than just see what's in a file?


When not given a file, cat will just read from stdin, so you can use "cat > file.c", write some text, and send EOF with ^D when you're done.

Obviously, there's no way to go back and edit anything mid-stream, you have to write the whole thing out in one shot.
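A non-interactive way to see the same mechanics (file name arbitrary), piping stdin instead of typing and pressing ^D:

```shell
# Piping stdin stands in for typing the text and ending with ^D.
printf 'int main(void) { return 0; }\n' | cat > hello.c
cat hello.c        # shows exactly what was "typed"
```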


The backspace does work within the line.


If your terminal is in line-buffered mode.


You can join files together.

    $ cat foo bar > baz
will join the files foo and bar together into a single file called baz


That period lasted what, 10 years after Unix was created? And we'll be stuck with those decisions for decades, if not centuries.

Similar story with the design of QDOS / MS-DOS / Windows and nowadays with Android. Both were designed for super-underpowered machines that basically went away less than a decade after they were launched, and both will be hobbled by those early decisions for a long, long time.


We will be hobbled with these decisions for a long time precisely because the complete package of trade offs the designers made were so successful.

If they had gone for wart-free on properly powered hardware, they would be stuck back in Multics land or living with the gnu Hurd—cancelled for being over budget or moving so slowly that projects that actually accomplish what the users need overtake them.

Do I wish that C had fixed its operator precedence problem? Sure. But the trade offs as a total package make a lot of sense.


Some would say,

https://multicians.org/history.html

Instead we pile mitigations on top of mitigations, with hardware memory tagging being the last hope to fix it.


Is there an explanation on why C’s operator precedence is weird? Such as: why does the bitwise AND have higher precedence than logical AND?


There is, and it is amusing

“In retrospect it would have been better to go ahead and change the precedence of & to higher than ==, but it seemed safer just to split & and && without moving & past an existing operator. (After all, we had several hundred kilobytes of source code, and maybe 3 installations....)“

https://www.lysator.liu.se/c/dmr-on-or.html


> why does the bitwise AND have higher precedence than logical AND?

Why is this precedence weird? Bitwise AND tends to be used to transform data while a logical AND tends to be used for control flow.


I meant equals having a higher precedence than bitwise AND.

As in:

    if (x & 2 == 2)
...is actually parsed as:

    if (x & (2 == 2))
...which isn’t intuitive.


See the above example from dmr himself


> that will be hobbled because of those early decisions for a long, long time.

Perhaps this is why a programmer would want to rewrite a system & tout "funny success stories" about the effort & results?

https://news.ycombinator.com/item?id=25844428

> Why couldn't you just upgrade the dependencies once then set up the same CI/CD you're presumably using for Svelte so that you can them upgrade versions easily?

Because the existing system was painful & time/energy intensive to upgrade. It happens with tight coupling, dependency hell, unchecked incidental complexity, architecture churn, leaky abstractions, etc...

Maintenance hell & > 1 month major version upgrades tend to occur with large, encompassing, first-mover frameworks, often built on a brittle set of abstractions, as they "mature" & "adapt" to the competition. e.g. Rails, Angular...


Was it different for the designers of ALGOL/SIMULA/Pascal?


Yeah, those languages were IIRC designed to be edited offline (as a deck of punch cards) and submitted to a mainframe via high-speed card reader as a batch job.


Very interesting when you think about it. A language created in 2009 (Go) owes its syntax to a language from 1969 (B), and the latter looks like it does because it was designed during a short transition period between offline editing (1960s) and electronic terminals (1970s).

And there are people claiming that computer scientists are not conservative :)

To what extent this explanation is correct is another question... The article by Dennis Ritchie says "Other fiddles in the transition from BCPL to B were introduced as a matter of taste, and some remain controversial, for example the decision to use the single character = for assignment instead of :=".

It's a kind of butterfly effect :) Mr. Ritchie preferred "=" over ":=" and fifty years later a server crashes somewhere because somebody wrote a=b=c instead of a=b==c.


Actually, the transition from "offline editing" to "electronic terminals" was not short at all. Teletypes (aka "typewriters which can receive content encoded in electricity, next to the user keyboard") date back way beyond computers, and were still in use in the 1980s (but eventually superseded by fax). Teletypes were cheaper, faster and more convenient than video terminals. Don't underestimate the value of having a printout of your session, especially when being online (i.e. connected to the mainframe or mini computer) is something valuable and your terminal is "dumb" and has no storage (except paper).


My first usage of a computer was on a printed teletype. My last such use was probably around 1985. They were around for a long time.


And for a lot of people, the lightbulb goes off once they realize what 'tty' stands for...


B was a descendant of Bootstrap CPL, a language never intended to be used for anything other than making a CPL compiler. Really a butterfly effect.


> Imagine living in a society where saying either of these things is a banning offense.

These tweets HAVE to be taken in context. It would be disingenuous to take every single one of his tweets at face value without considering his influence and position. He's dog-whistled enough in the past, and look where that led.

> .. what people can say in public

Twitter != public. Twitter cannot prevent you from physically speaking your opinion.

The very idea of "freedom of speech" translates pretty poorly in the era of social media. Giving someone the freedom to speak their mind is orthogonal to giving them the ability to instantly amplify and broadcast their speech.


I actually have family there, and they have literally transitioned from near poverty to upper middle class in 2 decades. This was pretty widespread, but it is gradually being forgotten as the younger generations grow up in comforts that their predecessors never knew.

A lot of this is due to their education system. It can be outright propaganda at times, but it has proven very effective at increasing literacy nationwide.

They've also made internet access a strategic initiative. When you marry a literate population with (generally, wink wink) free access to information and a seemingly draconian government, you get a very interesting bunch of people.


In the "old" days, people had to actively look for or click through to get misinformation on platforms. Facebook's algorithm now shoves posts into users' feeds, optimized for engagement. We're all seeing now where that has led, and it's not going to get any better if nothing changes.

Facebook keeps hiding behind the claim that they are a neutral platform when their own man-made algorithm optimizes user feeds for ad/content delivery to maximize profit. Whether the developers meant to or not, that algorithm has actively spread and amplified misinformation, even polarizing entire nations.

If Facebook cannot tame (identify "truth" in) what they have made (their algorithmic feed), they should kill it. It does not matter whether this taming is possible or not, because they do have the ability to terminate the algorithm.


Are there any ethnicity statistics (or estimates) on actual applicants to Yale?

It's apples vs oranges when we compare to the US population.

