I'll take a shot. I think what the OP was alluding to is a modern movement towards "metaphysics" or manifestation.
Think about the world you see and live in. Someone created the monitor you're reading my comment on. Someone created the keyboard you're working on. Someone created the machines to manufacture these things. And so on.
It started with a concept, an idea. It didn't just appear. We, as humans, have the ability to collapse (hint hint, quantum physics) the thoughts we have from the ethereal into the physical world. This applies to everything, down to our decisions: should I eat McDonald's today or have fish and salad?
We are, in a sense, wizards in this world. We create what we focus our mind on. Where we direct our focus and our intent, we can see the desired outcome. If your desire is to make a billion dollars, no one is stopping you. You are the only obstacle.
What I'm saying won't resonate with a lot of people, but in my experience, this message isn't for everyone. I'm a software engineer who has learned to appreciate the spiritual world as much as I appreciate the science. The two can live in harmony (as they used to -- read about Tesla's and Newton's metaphysical works; they were manifestors as well).
This was a little all over the place, but it's meant to be a sampler platter of metaphysical ideas.
> We, as humans, have the ability to collapse (hint hint, quantum physics) from the ethereal to the physical world the thoughts we have.
Do thoughts exist in an ethereal world, or are they just arrangements of chemicals and charges in the brain? I've never seen "ether," and nobody's ever found a structure in the human body that interfaces with it. There are no structures causally implicated in quantum wave function collapse, either—the microtubule hypothesis is quite pseudoscientific, I'm afraid. "Do I have McDonalds today, or fish and salad" is a decision made at the cellular level, not the subatomic.
This feels like a very disenchanted worldview, but the missing mystery you're reaching for is phenomenology, not idealistic metaphysics. The evanescent world of thought encoded within the chemicals and charges of our brain has its own self-referential structure which pays dividends to direct experiential analysis, which this article does engage in.
Incidentally, metaphysics is a very broad branch of philosophy which encompasses both materialist and idealist conceptions of the world. You're talking specifically about manifestation/"the law of attraction," which was originally associated with the New Thought religious movement, although it's percolated out into broader pop culture through books like The Secret.
Appreciate your perspective, friend. Correct me if I'm wrong or mischaracterizing, but it feels like you're looking for something concrete or absolute in what I'm saying. In my experience, the only thing that feels "absolute" is that nothing is absolute.
The words I'm using are the best I currently have to describe ideas that have always existed. It's not like a new messiah or philosopher came about with this novelty. It's something innate to all who possess the creative mind. And this is maybe the root of what I'm talking about (I'm still a student of all of this): every human possesses the ability to create.
Is it chemical? Is it God? Is it Tinkerbell's magical dandruff sprinkling into my head? Maybe it's both chemical and God. Maybe all of the above. How it happens is still up for debate, sure. But let me segue for a moment.
If you follow the progress of AI (I'm assuming you must), there is an ongoing debate of AGI/Superintelligence. OpenAI, Google, et al are promising their abilities to invent new medicine or invent some new art form. They will be novelty generators. I feel quite skeptical of this.
Right now, LLMs are incapable of novelty -- i.e., they can only compose existing ideas; they cannot invent a new genre of music or a new style of art. If something appears new, it's only because that's what they were taught, and it's really just remixing. And sure, there's an argument to be made that remixing is a form of creativity. However, the model is not the decider of what is creative or not. The human on the other end prompting it makes that decision. THAT is an act of creativity.
Again, there are arguments to be made that if all it takes is an observer and a set of criteria, then the AI agent we designed to generate and select images for some marketing campaign must be sentient, right?
Maybe. Maybe not. As far as I know, these models do not have an internal motivation. They don't spend time replying to other people on forums with their perspective for.. who knows what reason. And if they do, it's because they have a programmed directive to do so.
The human is the one with an internal universe that spans the colorful spectrum of experiences referred to as "qualia". Our experiences shape us and the world that we know. Our decisions are based on these experiences. Of course, I'm not deluded; the reality of the world we live in has constraints: hunger, loneliness, desire, etc. We needed primal instincts to survive.
But once those needs are met, who are you now? Just a series of chemical reactions? Repeating that survival loop? This is where the ethereal comes in.
> I've never seen "ether," and nobody's ever found a structure in the human body that interfaces with it.
Many humans have been interfacing with the "ether" for thousands of years. You interface with it when you practice creativity. Many musicians talk of how sometimes a song just appears to them. I'm sure you'll find ways of explaining this away, but in my opinion, there's a deeper mechanism that we're unaware of or aren't ready to know yet.
> ideas that have always existed. It's not like a new messiah or philosopher came about with this novelty
Self-help books about manifestation tend to nebulously describe the "law of attraction" as a principle that has always existed and which great people throughout history have understood, but the movement associated with it is a modern phenomenon. The Wikipedia page "Law of attraction (New Thought)" [1] is a good starting point, if you're curious.
> And if they do, it's because they have a programmed directive to do so.
Are we not programmed? Our brains were developed through evolution, not engineering, but we still eat when we are hungry. That's a directive that was embedded in us during the process of our development. Why should creativity have a supernatural component when the source of our behaviour, evolution, is anything but?
> The human is the one with an internal universe that span the colorful spectrum of experiences that is referred to as "qualia".
We certainly feel as if they are colourful, but we would, wouldn't we? They have to be, to fulfil their evolutionary purposes. Fear compels us to run and hide. If it didn't feel overwhelming and powerful, it wouldn't work. And if it didn't feel unique, then it would be redundant. Imagine if lust felt like fear: we would either flee reproduction or embrace danger.
Qualia are the abstractions of our senses. They feel present and vivid because they are the fabric of experience, but that doesn't imply anything beyond the physical. If we created an intelligent robot and programmed it to be compelled to flee when it detected danger, how do you think it would describe the experience of detecting danger and feeling its mind transform into a mode that compelled it to flee, that made staying still seem unbearable? Powerful, ineffable, invigorating, unpleasant? It would probably sound something like a person describing fear.
> Many musicians talk of how sometimes a song just appears to them.
Sure, but that's not magic. It's just an idea moving from the unconscious parts of the brain to the conscious. Why shouldn't the "deeper mechanism" simply be the parts of our minds that we are not consciously aware of?
I understand the skepticism comes from not having experienced AI -- what has been your exposure?
There are a lot of people like myself who have been following every product and research release, watching and using the tools that are coming out at a rapid pace.
At a certain point you feel the trajectory, and the promise of AGI and ASI does not seem that far away. The conversation about AI isn't hype; it's a genuine concern that people are not taking seriously. This tech is bigger than the average mainstream news outlet is making it out to be.
I built an entire MMO this past weekend with a friend using AI. If you are still one of those people going "it's not there yet" you are sleepy. And I get it, it's a lot. But I would strongly encourage you to see what's out there before scoffing.
I've been using Warp for the past few weeks and it's been incredibly impressive over other agentic coding services/platforms. Curious how Qodo stacks up.
When I tried warp I was convinced that was where the industry was going (agents as terminal replacement), but it felt a bit too heavy to me so I haven’t been using it lately. Still think all things will converge on terminal and browser replacement.
Depends on your perspective right? Are you trying to be the number one top of the thing you're doing? Respectable goals, but it takes A LOT of sacrifice to get there. Is it worth it? For some yes, for others no.
I _was_ of the competitive mindset years ago, but I'm finding life is a lot more pleasant if I don't force things and go with what is working and leave behind what doesn't. What I leave behind could include working a good job that pays well to pursue other things life has to offer.
CEOs know (or don't care) that they won't reach superintelligence. The reason they are where they are is that they are good at saying what they need to get the next round of funding.
CEOs know that superintelligence is the only way their unbelievable investments will pay off, and they probably also know that they won't (if they don't know, they certainly don't care). But what's important is convincing investors that a mature industry (IT) is still in an exponential growth period, and they are absolutely willing to hype anything that can do this, up until the point it can't. Thus blockchain and all its applications, now generative AI.
Yes. As well as other hype-men and useful idiots. People are extremely naive when it comes to vested interest talking points. There’s a very natural reason why these guys want to keep talking about AGI and ASI: ”soon” is the magic word that makes investors feel fomo and make rash decisions.
During peak crypto madness vagueposting was an extremely effective market manipulation tool. I know people who made a lot of money on unconfirmed rumors in hours but of course it was just zero-sum gambling - the ”early adopters” made their money at the expense of the latecomers. No value was generated.
People don’t even need to be convinced that AGI/ASI is near; a ”but what if there’s a chance?” is enough. It’s the same psychological trick used to sell lottery tickets.
With all the talk of "LLMs aren't there yet" and the latest buzzword "context engineering", I thought now would be as good a time as any to show this tool I've been working on.
Problem:
Bad specs and laziness can get in the way of both the AI coding agent and the developer being in sync on what to build.
Solution:
SpecLinter aims to manage your codebase's context by understanding your architecture and producing testable features by leveraging Gherkin.
IMO, the most powerful piece of this whole AI development workflow is describing what you want using Gherkin's Behavior-Driven Development (BDD) syntax.
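For readers unfamiliar with Gherkin, here's a minimal sketch of the kind of spec this workflow centers on. The feature and scenario below are hypothetical examples of standard Gherkin, not taken from SpecLinter itself:

```gherkin
Feature: Password reset
  As a registered user, I want to reset my password
  so that I can regain access to my account.

  Scenario: Requesting a reset link with a valid email
    Given a registered user with the email "user@example.com"
    When they request a password reset for "user@example.com"
    Then a reset link is sent to "user@example.com"
    And the link expires after 24 hours
```

Because each scenario is a concrete Given/When/Then example of behavior, both the coding agent and the developer can check the generated code against the same testable description of what to build.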
Please give it a shot and let me know if SpecLinter helps bridge any of the gaps in AI development for you. Open to feedback!