I worked with a guy who fits this description. If there was something he could reinvent poorly, he would. A big part of what I did while on the same team as him was slowly chipping away at his weird domain by introducing standard tools. I always had to pitch them as solving a different problem, and then slowly work toward a point where some of his homegrown stuff became unnecessary.
As to the job security idea: the only people who do this are people who aren’t good at creating real value, so they have to try to create niches where they’re needed.
My personal theory is that Helm may be OK for distributing a pre-packaged solution to other people. The mistake was treating it as a tool for deploying a company's own systems in-house, where it makes much less sense.
It makes absolute sense. You can use no variables at all and still deploy a Helm chart; it's just a directory of plain old YAML objects. Then you add customization as you need it, as the system evolves. Good luck doing that with kustomize.
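To make the point concrete, here's a minimal sketch of a variable-free chart. The chart name (`plainchart`) and the ConfigMap are hypothetical; the point is that `Chart.yaml` plus plain manifests under `templates/` is already a valid chart, with templating added only later if ever.

```shell
# A Helm chart can be just Chart.yaml plus plain YAML manifests
# under templates/ -- no template variables anywhere.
mkdir -p plainchart/templates

cat > plainchart/Chart.yaml <<'EOF'
apiVersion: v2
name: plainchart
version: 0.1.0
EOF

# A plain old ConfigMap, with no {{ .Values }} interpolation at all.
cat > plainchart/templates/configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: plainchart-config
data:
  greeting: hello
EOF

# With helm installed, you could then render or deploy it directly:
#   helm template plainchart ./plainchart
#   helm install plainchart ./plainchart
# and introduce values.yaml and {{ .Values.* }} later, only where needed.
```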
Why do you believe that humans have access to an “internal thought process”? I.e. what do you think is different about an agent’s narration of a thought process vs. a human’s?
I suspect you’re making assumptions that don’t hold up to scrutiny.
I made no such claim and I don't understand what direct relevance you believe the human thought process has to the issue at hand.
You appear to be defaulting to the assumption that LLMs and humans have comparable thought processes. I don't think it's on me to provide evidence to the contrary but rather on you to provide evidence for such a seemingly extraordinary position.
For an example of a difference, consider that inserting arbitrary placeholder tokens into the output stream improves the quality of the final result. I don't know about you but if I simply repeat "banana banana banana" to myself my output quality doesn't magically increase.
Given that LLMs can speak basically any language and answer almost any arbitrary question much like a human would, the claim that LLMs have comparable (not identical) thought processes to humans does not seem extraordinary at all.
What does that mean, though, to “have access to our underlying thoughts”? Humans can obviously do mental tasks that are impossible for a language model, because it’s trivial to show that humans don’t need language to do mental tasks, including ones related to thought. So I don’t really get what is being argued in the first place.
Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What's the problem with a single-node cluster? We use that for e.g. dev environments, as well as some small onprem deployments.
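For context, standing up a single-node k3s cluster is close to a one-liner. This is a sketch using the standard k3s installer script; it needs root on a Linux host and network access, so treat it as a recipe rather than something to run blindly.

```shell
# Install k3s as a combined server+agent on a single Linux node
# (the official installer script, fetched from https://get.k3s.io).
curl -sfL https://get.k3s.io | sh -

# k3s bundles its own kubectl; verify the one node comes up Ready.
k3s kubectl get nodes

# The kubeconfig lives at /etc/rancher/k3s/k3s.yaml if you want to
# point a stock kubectl or other tooling at the cluster instead.
```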
> Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.
Which batteries are not included? The "wrapper around the underlying cloud provider services and APIs" is enormously important. Why would you prefer to use a less well-designed, more vendor-specific set of APIs?
I seriously don't get these criticisms of k8s. K8s abstracts away, and standardizes, an enormous amount of system complexity. The people who object to it just don't have the requirements where it starts making sense, that's all.
> Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What surprises and gotchas did you have to deal with using k3s as a Kubernetes implementation?
Did you use an LB? Which one? I'm assuming all your onprem nodes were just Linux servers with very basic equipment (the fanciest networking gear you used was 10GbE PCIe cards, nothing more special than that?)
We sell to enterprise customers. All of them deploy our solution on internal cloud-style VM clusters. We use the Traefik ingress controller by default.
There really weren't any particular surprises or gotchas at that level.
In this context, I've never had to deal with anything at the level of the type of Ethernet card. That's kind of the point: platforms like k8s abstract away from that.
It's not news that if you just give all developers at a company write access to the production databases, owner permissions on all resources, etc. that velocity can be increased. But at what cost?
The reason we don't do that in most cases is that "move fast and break things" only makes sense for trivial, non-critical applications that don't have any real importance, like Facebook.
There are thousands of small and medium businesses, though. They have maybe one true CRM, plus a dozen spreadsheets/files floating around that would benefit from becoming proper apps. People delete spreadsheets all the time!
Sure, don't give an LLM agent write access to the carefully modeled CRM that took months or years to build.
But turning a spreadsheet into an app in a few days, by giving the LLM proper read/write capabilities for velocity? I think the case is there. Right tool for the right job.
I think the argument mostly applies to companies where trivialities like proper auth were given up to the maximum possible extent. I'm sure even some bigger ones are only gnashing their teeth over implementing the security measures required by law, without seeing much point in them.
The whole point about theory, though, is that simple rules can define complex phenomena. I don’t think anything you wrote fundamentally rules out the idea that we could find a theory of deep learning.
Calling it “a product of human engineering” is misleading. Deep learning exploits principles we don’t fully understand; we didn’t engineer those principles. It’s not fundamentally different from particle physics or biology, both of which are similarly consequences of rules that we didn’t invent and can’t control.
I suspect all Minsky did was reinforce what many people were already thinking. I experimented with neural nets in the late 80s and they seemed super interesting, but also very limited. My sense at the time was that the general thinking was, they might be useful if you could approach the number of neurons and connections in the human brain, but that seemed like a very far off, effectively impossible goal at the time.