Hacker News | ericb's comments

A prompt injection solution that seems to benchmark better than any other approach out there, while not using hard-coded filters or a lightweight LLM which adds latency.


Link? Or a description of your approach? Sounds interesting!


> biased to hiring a slightly worse applicant

I understand your reasoning, but in practice, I don't think this is true. It would be true if companies thought with a coherent set of incentives. Instead, individual incentives are at play here.

If a company is paying for a recruiter, it usually means:

- It isn't highly cash constrained
- It values the time of its ICs, managers, and HR more than the fee
- Valuation for the role is not cost-based, but value-based

Only at the penny-pinching startup stage is the recruiter fee a real factor in a multi-year investment that should be yielding a high return. Beyond that, the bias evaporates, and what remains are individual incentives and available budgets.


I see Massachusetts as sort of the non-insane liberal counterpoint to California.

Things work here, and nobody seems to be passing the "oops, my unintended side effects and clueless regulations messed things up horribly" kind of laws. Or, if they do, it is at something like 1/10th the level.

We didn't start warning-label spam everywhere. We don't have weird propositions causing runaway housing prices. There aren't bar codes on our 3D printers, or cookie-banner requirements on every website. Well, OK, we do, but that nonsense all came in from other places.

We did pass laws to lower PFAS/PFOAS. That seems reasonable. Government can work.


The MA legislature is too busy enriching themselves with back-room dealing to f the state up too much.

I wish I were joking. Have they been audited yet? Pretty sure that was a ballot measure that passed by a huge margin years back, and last I checked they were stalling...


> We don't have weird propositions causing runaway housing prices.

Most of those are a reaction rather than the cause. People want to move to California, which creates a different set of problems for California than for Massachusetts.


I like MA, but you realize the challenges are vastly different, right?

The sheer size, economic volume and cultural diversity of CA presents a pretty unique set of issues.


I mean, sure, but the things I named don't seem to be scale-induced? They all seem to stem from clueless regulation, which is as simple as not signing silly laws? I'm missing where scale plays into the items I mentioned.


What if "you" are a pattern of linear algebra at the core?


I do not believe I am a pattern of linear algebra. I believe like the majority of humanity historically that I have a soul, a spiritual and non-physical reality, my personhood comes from my soul, and that as such, AI is fundamentally incapable of consciousness.

I also believe, as a result, it will be great fun watching researchers burn the next 30 years trying to understand what is missing. We’re going to find out very soon if the soul is real, when for all our progress we can’t create one.

Only those completely embedded in materialism need fear a conscious AI.


> I believe like the majority of humanity historically that I have a soul

It seems that your position is that the frequency of a belief across human history determines truth?

For large swaths of recorded history, earth was considered the center of the solar system. Given your reasoning, I should expect that is a belief you hold?

Is it possible that popularity of an idea is not a good measure for factuality?


Interesting that you label someone with a belief different than yours as delusional and whose views on the matter should not be respected (I’m assuming that’s what you meant by “feelings”).

> I believe like the majority of humanity historically that

Historically, lots of humans believed in lots of things that turned out not to be true. Believing something doesn’t make it true, as I’m sure you are aware, given your “those people are delusional” comment.

For what it’s worth, I’m not suggesting LLMs are or aren’t conscious. What I know is that the hard problem of consciousness is still very much not resolved, and when I asked the parent question my hope was that those that strongly believe LLMs are not conscious would educate me on the topic by presenting the basis for their reasoning.


I push back on the framing that this is just "a different belief." Every metaphysical framework except strict materialism rules out AI consciousness. Dualism, idealism, most forms of panpsychism, every major religious tradition. Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

When someone tells me linear algebra might have feelings, I don't think "delusional" is unfair. I think it's the natural response to a claim that only works if you've already accepted the one framework that can't account for the very thing it's trying to explain.


> Every metaphysical framework except strict materialism rules out AI consciousness

As I understand it, this is a very broad and ultimately false claim. Panpsychism is definitely compatible with the idea of AI consciousness, as are functionalism, neutral monism, and others. Even some forms of idealism make AI consciousness metaphysically possible, since reality is fundamentally mental and the biological/artificial distinction is not ontologically basic (whether AI systems instantiate genuine centers of experience depends on the specific theory of subject formation within that idealist framework).


> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

Being an outlier doesn't make it wrong.

> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.

It's a pattern. The same way letters arise out of pixels on your screen.

From the screen's perspective, there are no letters, only pixels. It doesn't mean there is a "pixel soul."


I'd [redacted] myself then, probably.


Nice! Can it open multiple files at a time?


Not yet! I use cmd+w for closing the window and cmd+q to quit, I try to keep focus on one file at a time. If enough folks ask for it, I'll add that in :).


What did you use to record the video on the home page, if you don't mind me asking? I need to do something similar. One tip I've seen is to record at a higher resolution than you need, then scale down. The demo is good, but looks a little grainy at points, FYI.


Of course! I looked into many tools to be able to do that properly. This one was done with Screen Studio. There's also a nice open-source alternative: Cap, though it has a bit fewer features.


I'm not the OP.

Not everyone is paying for LLMs, even now. So I think it is perfectly reasonable to assume good intentions, here.

Someone spent their own tokens to ponder your code and thought they'd share the result. For anyone else looking, like me, I can now see that this is probably going to come up relatively clean without spending my own tokens or installing it, which makes me more likely to try it.


Turns out it was bad intentions, I respect the optimism though. And thanks for taking a look!


Sorcery - open source app and protocol that, together, let you share source code links that open in each user's favorite editor, right on the linked line.

Supports VS Code, Neovim, IntelliJ/JetBrains Family, Zed, etc.

About to do the first beta release later this week.

The protocol is "srcuri" (pronounced, "Sorcery")

This site is: https://srcuri.com/

Source code: https://github.com/browserup/sorcery-desktop


I took a look--it seems like you can pass a path on the command-line to open to. Can you pass a line number, also?


No, but that's a good idea, I'll add that


Also--cool editor!


Done! You can now pass file, file:line, or file:line:column in the CLI.
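Parsing that argument shape is a nice little exercise. Here's a minimal sketch in Python (a hypothetical `parse_target` helper, not the actual Sorcery CLI code), assuming a trailing `:<digits>` is always a position component rather than part of the path:

```python
import re

def parse_target(spec: str):
    """Split an editor target like "src/app.py:42:7" into
    (path, line, column); line and column default to None.

    Because the path is matched lazily and the position parts must
    be pure digits, paths that themselves contain colons (e.g. a
    Windows drive letter) still parse correctly.
    """
    match = re.match(r"^(.*?)(?::(\d+))?(?::(\d+))?$", spec)
    path, line, col = match.groups()
    return (path,
            int(line) if line else None,
            int(col) if col else None)
```

The same `file:line:column` convention is what most editors accept on their own command lines (`vim +42 file`, `code --goto file:42:7`), so normalizing to this triple makes dispatching to each editor straightforward.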


> the tech is real and has great promise.

This was very true of the dotcom bubble. The entire "web" was new, and the promise was everything you use it for today.

Pets.com was a laughing stock for years as an example of dotcom excess, and now we have chewy.com, successfully running the same model.

Webvan.com was a similar example of "excess," and now we have Instacart and others.

I looked up webvan just now--the postmortem seems relevant:

"Webvan failed due to a combination of overspending on infrastructure, rapid and unproven expansion, and an unsustainable business model that prioritized growth over profitability."


This to me is the whole bubble.

The problem with the dotcom era was that we needed a cultural shift. I had my first internet date during the dotcom bubble, and I remember we would lie to people about how we met because the idea sounded so insane at the time to basically everyone. In 1999 it seemed kind of crazy to even use your real name online, let alone put your credit card into the web browser.

Put your credit card into the internet browser then a stranger brings you items in their van? Completely insane culturally in 1999. It would have sounded like the start of an Unsolved Mysteries episode to the average person in 1999. There was no market for that in 1999.

The lesson I take from dotcom is we had this massive bubble and burst over technology that already existed, worked flawlessly and largely just needed time for the culture to adapt to it.

The main difference this time is we are pricing in technology that doesn't actually exist.

I can't think of another bubble that was based on something that doesn't exist. The closest analogy I can think of is the railroad bubble but with the trains not actually existing outside of some vague theoretical idea that we don't actually know how to build. A bubble in laying down rail because of how big it will be when we figure out how to build the trains.

The only way you would get a bubble that stupid would be to have 50-100 years of art, stories and movies priming the entire population on the inevitability of the train.


Uber might be the wildest cultural shift of the last 25 years.

Nobody blinks twice nowadays at getting into a car with a total stranger.


I don't get it. Nobody blinked twice about getting into a car with a total stranger before Uber either — taxis have been around for well over a hundred years. It's not exactly a huge cultural change, just more efficient and convenient.


Isn't OpenAI already profitable on inference?

I understand training is still costly, but it's not unimaginable for it to turn profitable as well if you believe they'll generate trillions in value by eliminating millions of jobs.


If you eliminate ONE job, and let's say the job pays $100K, in theory at most $100K goes to AI revenue instead. In practice it's a lot less; nobody is going to move everything to AI if it's only a 10% saving.

So, to get a trillion in value, you'd have to eliminate many tens or even hundreds of millions of jobs.
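The arithmetic behind that estimate can be made explicit (assumed figures: a $100K salary per eliminated job, with only a fraction of the saving actually captured as AI revenue):

```python
salary = 100_000              # assumed pay of one eliminated job, in dollars
target = 1_000_000_000_000    # one trillion dollars of AI revenue

# Upper bound: AI captures the entire salary of every job it replaces.
full_capture = target / salary            # 10 million jobs

# More realistic: only part of the saving becomes AI revenue.
for capture_rate in (0.5, 0.25, 0.10):
    jobs = target / (salary * capture_rate)
    print(f"{capture_rate:.0%} capture -> {jobs / 1e6:.0f} million jobs")
```

At a 10% capture rate the count reaches 100 million jobs, which is where the "many tens or even hundreds of millions" range comes from.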


Yeah, I think high tens of millions of jobs would be eliminated. Most employees are seat warmers anyway.


No, inference is actually pointing to them being economically unviable.

https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e...


> Isn't OpenAI already profitable on inference?

I don't believe this has ever been the case, or even the claim. At best they have recognized some limited use cases in certain models where API tokens have generated a gross profit.


They won’t generate trillions because there are several companies all competing and will undercut each other to win users.


> Isn't OpenAI already profitable on inference?

Probably not, but the numbers they've released are too opaque to tell.

