A prompt injection solution that seems to benchmark better than any other approach out there, without relying on hard-coded filters or a lightweight gatekeeper LLM that adds latency.
I understand your reasoning, but in practice I don't think this is true. It would be true if companies thought with a coherent set of incentives. Instead, individual incentives are at play here.
If a company is paying for a recruiter, it usually means:
- It isn't highly cash constrained
- It values the time of its ICs, managers, and HR more than the fee
- The valuation of the role is not cost-based but value-based
- Only at the penny-pinching startup stage is the recruiter fee a real factor in a multi-year investment that should be yielding a high return. Beyond that, the bias evaporates and the real drivers are individual incentives and available budgets.
I see Massachusetts as sort of the non-insane liberal counterpoint to California.
Things work here, and nobody seems to be playing the "oops, my unintended side effects and clueless regulations messed things up horribly" game. Or, if they do, it is at something like 1/10th the level.
We didn't spam warning labels everywhere. We don't have weird propositions causing runaway housing prices. There aren't bar codes on our 3D printers, or cookie-banner requirements on every website. Well, OK, we do, but that nonsense all came in from other places.
We did pass laws to lower PFAS/PFOA. That seems reasonable. Government can work.
MA legislature is too busy enriching themselves with back room dealing to f the state up too much.
I wish I were joking. Have they been audited yet? Pretty sure that was a ballot measure that passed by a huge margin years back, and last I checked they were stalling...
> We don't have weird propositions causing runaway housing prices.
Most of those are a reaction rather than the cause. People want to move to California, which creates a different set of problems for California vs. Massachusetts.
I mean, sure, but all those things I named don't seem to be scale-induced? They seem to all stem from clueless regulation, and avoiding that is as simple as not signing silly laws. I'm missing where scale plays into the items I mentioned.
I do not believe I am a pattern of linear algebra. I believe like the majority of humanity historically that I have a soul, a spiritual and non-physical reality, my personhood comes from my soul, and that as such, AI is fundamentally incapable of consciousness.
I also believe, as a result, it will be great fun watching researchers burn the next 30 years trying to understand what is missing. We’re going to find out very soon if the soul is real, when for all our progress we can’t create one.
Only those completely embedded in materialism need fear a conscious AI.
> I believe like the majority of humanity historically that I have a soul
It seems that your position is that the frequency of a belief across human history determines truth?
For large swaths of recorded history, earth was considered the center of the solar system. Given your reasoning, I should expect that is a belief you hold?
Is it possible that popularity of an idea is not a good measure for factuality?
Interesting that you label someone with a belief different than yours as delusional and whose views on the matter should not be respected (I’m assuming that’s what you meant by “feelings”).
> I believe like the majority of humanity historically that
Historically, lots of humans believed in lots of things that turned out not to be true. Believing something doesn’t make it true, as I’m sure you are aware, given your “those people are delusional” comment.
For what it’s worth, I’m not suggesting LLMs are or aren’t conscious. What I know is that the hard problem of consciousness is still very much not resolved, and when I asked the parent question my hope was that those that strongly believe LLMs are not conscious would educate me on the topic by presenting the basis for their reasoning.
I push back on the framing that this is just "a different belief." Every metaphysical framework except strict materialism rules out AI consciousness. Dualism, idealism, most forms of panpsychism, every major religious tradition. Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.
When someone tells me linear algebra might have feelings, I don't think "delusional" is unfair. I think it's the natural response to a claim that only works if you've already accepted the one framework that can't account for the very thing it's trying to explain.
> Every metaphysical framework except strict materialism rules out AI consciousness
As I understand it, this is a very broad and ultimately false claim. Panpsychism is definitely compatible with the idea of AI consciousness, as are functionalism, neutral monism, and others. Even some forms of idealism make AI consciousness metaphysically possible, since reality is fundamentally mental and the biological/artificial distinction is not ontologically basic (whether AI systems instantiate genuine centers of experience depends on the specific theory of subject formation within that idealist framework).
Not yet! I use cmd+w for closing the window and cmd+q to quit, I try to keep focus on one file at a time. If enough folks ask for it, I'll add that in :).
What did you use to record the video on the home page, if you don't mind me asking? I need to do something similar. One tip I've seen is to record at a higher resolution than you need, then scale down. The demo is good, but looks a little grainy at points, FYI.
Of course! I looked into many tools to do that properly. This one was done with Screen Studio. There's also a nice open-source alternative, Cap, though it has fewer features.
Not everyone is paying for LLMs, even now. So I think it is perfectly reasonable to assume good intentions, here.
Someone spent their own tokens to ponder your code and thought they'd share the result. For anyone else looking, like me, that means I can see it will probably come up relatively clean without spending my own tokens or installing it, and now that I can see that, I'm more likely to try it.
Sorcery - open source app and protocol that, together, let you share source code links that open in each user's favorite editor, right on the linked line.
Supports VS Code, Neovim, IntelliJ/JetBrains Family, Zed, etc.
About to do the first beta release later this week.
This was very true of the dotcom bubble. The entire "web" was new, and the promise was everything you use it for today.
Pets.com was a laughing stock for years as an example of dotcom excess, and now we have chewy.com, successfully running the same model.
Webvan.com was a similar example of "excess," and now we have Instacart and others.
I looked up Webvan just now; the postmortem seems relevant:
"Webvan failed due to a combination of overspending on infrastructure, rapid and unproven expansion, and an unsustainable business model that prioritized growth over profitability."
The problem with the dotcom era was that we needed a cultural shift. I had my first internet date during the dotcom bubble, and I remember we would lie to people about how we met because the idea sounded so insane at the time to basically everyone.
In 1999 it seemed kind of crazy to even use your real name online let alone put your credit card into the web browser.
Put your credit card into the internet browser then a stranger brings you items in their van? Completely insane culturally in 1999. It would have sounded like the start of an Unsolved Mysteries episode to the average person in 1999. There was no market for that in 1999.
The lesson I take from dotcom is we had this massive bubble and burst over technology that already existed, worked flawlessly and largely just needed time for the culture to adapt to it.
The main difference this time is we are pricing in technology that doesn't actually exist.
I can't think of another bubble that was based on something that doesn't exist. The closest analogy I can think of is the railroad bubble but with the trains not actually existing outside of some vague theoretical idea that we don't actually know how to build. A bubble in laying down rail because of how big it will be when we figure out how to build the trains.
The only way you would get a bubble that stupid would be to have 50-100 years of art, stories and movies priming the entire population on the inevitability of the train.
I don't get it. Nobody blinked twice about getting into a car with a total stranger before Uber either — taxis have been around for well over a hundred years. It's not exactly a huge cultural change, just more efficient and convenient.
I understand training is still costly, but it's not unimaginable for it to turn profitable as well, if you believe they'll generate trillions in value by eliminating millions of jobs.
If you eliminate ONE job that pays, say, $100K, then in theory at most $100K goes to AI revenue instead. In practice it's a lot less; nobody is going to move everything to AI if it's just a 10% saving.
So, to get a trillion in value, you'd have to eliminate many tens or even hundreds of millions of jobs.
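To make that concrete, here's a rough back-of-the-envelope sketch (my illustrative numbers, not the commenter's): dividing a $1T revenue target by the salary saved per job, discounted by the fraction of that saving the AI vendor actually captures.

```python
# Back-of-the-envelope: jobs that must be eliminated for $1T in AI revenue,
# assuming each job pays $100K and the vendor captures only a fraction of
# the salary saved. The capture fractions below are illustrative guesses.
TARGET_REVENUE = 1_000_000_000_000  # $1 trillion
SALARY = 100_000                    # $100K per eliminated job

for capture in (1.0, 0.5, 0.2, 0.1):
    jobs = TARGET_REVENUE / (SALARY * capture)
    print(f"capture {capture:.0%}: {jobs / 1e6:,.0f} million jobs")
```

Even with full capture of every salary dollar, that's 10 million jobs; at a more realistic 10-20% capture, it lands in the 50-100 million range, which is where the "tens or even hundreds of millions" figure comes from.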
I don't believe that has been the case, or even the claim, at all. At best they have recognized some limited use cases in certain models where API tokens have generated a gross profit.