I'm a programmer. I periodically need to make a tiny tweak in a file that's been created by a real artist, or I want to edit a photo I took, or whatever.
It's insane to spend $1500, or even $500 (the CorelDraw buy-it-outright price) for hobby and occasional-use software like that.
And yeah, I use other things like Affinity Photo, which is Good Enough for many of my purposes, but it's just annoying to not be able to use the same software as my artists--unless they flatten the image before giving it to me, it's a crapshoot whether I can import it in anything but the exact version of Photoshop they were using.
It feels like extortion: I have to pay the artist to make the tiniest changes because I can't edit the original file, or I have to pay Adobe an outrageous sum to do it myself. Lose-lose.
Fully understood: this carefully engineered vendor lock-in is the cherry on top. It's in all CAD software for no good reason and forces you to follow the herd. Open standards should be imposed by state actors...
If you're paying artists to make art in PS, are you not doing it for something you make money off of? Or are you just really deep in the hobby that you're nearing professional level?
Photoshop was never $1500 either. CS6 was $700. The design standard CS6 suite was $1300.
Maybe hunt for artists that use the reasonably priced Clip Studio Paint instead? It's pretty popular among manga artists and the like anyway.
> I have to pay the artist to make the tiniest changes because I can't edit the original file
Hire the artist and ask them for the files exported into a format you can open. If they refuse, hire somebody else.
I do agree with the sibling that open standards should be set by state actors. But they should only make them available, not mandate their use by private actors.
> ask them for the files exported into a format you can open
They already do that. That's not the problem.
If the original Photoshop file has 200 layers, and 60 of those layers have effects that use advanced Photoshop-only features, then no other art program can open the source material. Period.
At best you can get approximations of the original Photoshop render if you open the image in another program. But generally what you get is garbage if it's not a recent version of Photoshop.
The point of getting the Photoshop original with the layers is that I might be able to make a tweak to one of the layers and have it re-render a result that is better for what I need. Something that is difficult or impossible if I just have a JPEG.
And asking the artist to do the work in a program that doesn't have all of those features is roughly equivalent to asking a software engineer to use Mac/Windows/Linux (pick one they don't know) and to write all of the code in Visual Basic/Perl/PHP/JavaScript/C/C++/COBOL (pick one they don't know). Yes, technically anything is possible in any environment, but it might take 10x as long and be 100x as painful--with a result that may not be as good due to the tools not being as good.
Artists are professionals with an acquired skill set. You can't ask them to work using unfamiliar tools and expect them to be happy or productive.
> And asking the artist to do the work in a program that doesn't have all of those features is roughly equivalent to asking a software engineer to use Mac/Windows/Linux (pick one they don't know) and to write all of the code in Visual Basic/Perl/PHP/JavaScript/C/C++/COBOL (pick one they don't know).
You mean the thing that every single company does for their work for hire?
When a developer doesn't know, they go after another developer. (And they should restrict the number of constraints to what is really important, but almost no company does that.)
No company I've worked for in the past decade has told me what kind of computer I should work on. Even the W2 gigs have allowed me my choice of Mac/Linux/Windows. I work for tech-savvy companies, though. I'm sure there are tech-naive companies that force everyone to work on Mac or whatever.
And companies that want programmers who write, say, Delphi or Visual Basic, are going to be getting crap developers, and would be better off porting their software to something more modern. I did some work on a Delphi project to help out a friend, and no, I wouldn't go to work for a company to work on Delphi full-time. They couldn't possibly pay me enough.
But that's my point: Just like they would get crap developers, I would get crap artists. Or extremely expensive artists. Not interested. It would literally be cheaper to pay Adobe the extortion they ask than to try to work with non-Adobe artists.
- Paint.net is free and covers most of what I'd need to do
- GIMP is free. Cumbersome, but if I need to do any batch operations that's when I bring out a full suite.
If I only need to do a quick edit for some hobby thing, I'm not hurting for options.
>but it's just annoying to not be able to use the same software as my artists
So you are a professional? If you have artists at your beck and call and there's no foreboding deadline, I don't know why you wouldn't ask the artist to make the edit.
There's definitely a debate to be had about proprietary file formats (I work in games, so I completely understand that with its 3D equivalent, the FBX format... thankfully there are very slow moves to cast it away), but I'm not sure I have a good solution. I don't necessarily think a company should be forced to open-source/spec its own tooling.
> I don't know why you wouldn't ask the artist to make the edit.
Have you ... worked with artists? To get them to produce technically precise artwork?
The point would be that sometimes it takes 4-5 turnarounds with an artist to get something exactly right. Something that I, as a non-artist but skilled app user, can do in less time it takes to explain what I need to the artist a single time. So it's about saving my time and not having to pay for hours of artist time for something I can do in 10 minutes.
What I'd like to see is tiered licenses. They're being greedy and I refuse to patronize them. That's what it comes down to. I'm not saying they should be forced to do anything. Just that I don't like what they're doing, and therefore end up having to work around their software rather than using it.
I have a license for the last one they offered for a fixed cost; bought it for a steep discount when the new licenses were the Next Big Thing. But they won't get any more of my money until they offer the software at a reasonable price tier.
>Have you ... worked with artists? To get them to produce technically precise artwork?
Yes. But I work in games, so maybe I was expecting professional artists working on complex assets and not a grab bag from Fiverr for some UI art. Anything "simple" probably takes them 2-5 minutes and maybe a few turnarounds, while it might take me an hour of edits for much worse quality.
>What I'd like to see is tiered licenses. They're being greedy and I refuse to patronize them.
I agree completely. But I know there's no such thing as a smooth migration, especially when working as a team.
It's sad, but they have a lock on the market for a reason and that moral stance won't be without some growing pains or compromises. I'm sure we both know trying to get an artist to migrate tools is much harder than a programmer.
Well, I think you could say I've worked in games too. [1]
In fact, it's in games that the artists, especially when working with 3d, had the hardest time getting the precise kinds of changes that I would need.
But even in 2d, if they, say, created a sprite, but then left a few pixels non-100%-transparent in the corners of the image, I could ask them to go find those pixels and erase them...or I could do it myself.
And if they don't get them completely erased, then there will still be artifacts on the screen and the texture atlas packing will be screwed up.
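That kind of cleanup (zeroing out nearly invisible stray pixels) is mechanical enough to script. A minimal sketch in Python, operating on a plain grid of RGBA tuples rather than a real image file (a real tool would load the PNG with an imaging library; the threshold here is an arbitrary choice):

```python
def clean_stray_alpha(pixels, threshold=16):
    """Zero out any pixel whose alpha is nonzero but below `threshold`.
    `pixels` is a grid (list of rows) of (r, g, b, a) tuples; near-invisible
    leftover pixels become fully transparent. Returns how many were fixed."""
    fixed = 0
    for row in pixels:
        for x, (r, g, b, a) in enumerate(row):
            if 0 < a < threshold:
                row[x] = (0, 0, 0, 0)
                fixed += 1
    return fixed

# A 2x2 sprite with one almost-transparent stray pixel (alpha = 3).
sprite = [[(255, 0, 0, 255), (0, 0, 0, 3)],
          [(0, 255, 0, 255), (0, 0, 255, 255)]]
print(clean_stray_alpha(sprite))  # → 1; the stray pixel is now (0, 0, 0, 0)
```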
Yeah. I've been doing this for a long time.
And no, I don't have much hope of getting artists to migrate. I'm just tilting at windmills.
CorelDraw is great, but for years they were also subscription-only. In the last six months or so they finally started offering a single-price license again--at a prohibitive level.
I bought the previous single-price version years ago, and it's so stale that I prefer to use Inkscape, despite the more limited feature set, and I've been using the Affinity suite as a more professional replacement.
Now it looks like they let you buy it again, but at $550, I'm still giving them the finger. Their upgrade price used to be ~$200; I would pay that once every 3-4 years or so, and consider that a reasonable expense to get a good product and have it available when I did need it. But for $550, I'd need to be planning on keeping it for something like a decade to get a similar value--and it's too much to justify buying at my limited usage level.
All of these subscription services should get over themselves and allow you to rent them for occasional usage for a reasonable amount of money. If I could give them $20 for intermittent (time-limited? operation-limited?) use, with no "auto-renewal", I might do that every time I actually needed the product.
But no, they need to be greedy and demand that you pay for a year of usage in advance (or by using deceptive practices like Adobe above).
I've used Paint Shop Pro, and I really don't like it. I can use Corel PhotoPaint and Affinity Photo, and they're fine, but PSP makes me crazy when I try to use it. I'd almost rather use Gimp.
Fair enough. I've never paid full price for any Corel product. They're frequently on Humble Bundle where you get a bunch of them on the order of like $30 total. It looks like right now there's even a sale going on: https://www.humblebundle.com/software/corel-productivity-cre...
My CorelDraw license is for 2020, so not super up to date, but I've generally liked it. I've not tried the Essentials package.
I'm stuck with CorelDraw X8 which dates to 2016. If they were selling a buy-it-once license in 2020, I wasn't aware of it. I swear they had switched to subscription-only by then? But maybe it happened that year and I missed the last opportunity to buy a permanent license.
Last time I looked at Essentials, it looked to me like they had hamstrung it too much. I don't remember the specific restrictions they put on it, but I didn't want what they were selling. Might be worth another look with the Humble Bundle though.
LLMs are good at tasks that don't require actual understanding of the topic.
They can come up with excellent (or excellent-looking-but-wrong) answers to any question that their training corpus covers. In a gross oversimplification, the "reasoning" they do is really just parroting a weighted average (with randomness injected) of the matching training data.
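To make that oversimplification concrete: at each step a language model scores every token in its vocabulary and samples from the resulting distribution. A toy sketch of that "weighted average with randomness injected," with a made-up vocabulary and made-up scores (this is not any real model's API):

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from a softmax over scores.
    Higher temperature = more randomness; lower = closer to argmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: higher-probability tokens are picked more often.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["cat", "dog", "the", "ran"]   # made-up vocabulary
logits = [2.0, 1.5, 0.1, -1.0]         # made-up model scores for the next token
print(vocab[sample_next_token(logits)])
```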
What they're doing doesn't really match any definition of "understanding." An LLM (and any current AI) doesn't "understand" anything; it's effectively no more than a really big, really complicated spreadsheet. And no matter how complicated a spreadsheet gets, it's never going to understand anything.
Not until we find the secret to actual learning. And increasingly it looks like actual learning probably relies on some of the quantum phenomena that are known to be present in the brain.
We may not even have the science yet to understand how the brain learns. But I have become convinced that we're not going to find a way for digital-logic-based computers to bridge that gap.
This is also why image generating models struggle to correctly draw highly variable objects like limbs and digits.
They’ll be able to produce infinite good looking cardboard boxes, because those are simple enough to be represented reasonably well with averages of training data. Limbs and digits on the other hand have nearly limitless different configurations and as such require an actual understanding (along with basic principles such as foreshortening and kinetics) to be able to draw well without human guidance.
I would just add that I think I have encountered situations where knowing the weighted-average answer from the training data, for topics I didn't previously understand, created better initial conditions for MY learning of the topic than not knowing it.
The problem to me is we are holding LLMs to a standard of usefulness from science fiction and not reality.
A new, giant set of encyclopedias has enormous utility but we wouldn't hold it against the encyclopedias that they aren't doing the thinking for us or 100% omniscient.
Please show me where the training data exists in the model to perform this lookup operation you’re supposing. If it’s that easy I’m sure you could reimplement it with a simple vector database.
Your last two paragraphs are just dualism in disguise.
I'm far from being an expert on AI models, but it seems you lack the basic understanding of how these models work. They transform data EXACTLY like spreadsheets do. You can implement those models in Excel, assuming there's no row or column limit (or that it's high enough) - of course it will be much slower than the real implementations, but OP is right - LLMs are basically spreadsheets.
The question is, wouldn't a brain qualify as a spreadsheet? Do we know it can't be implemented as one? Well, maybe not. I'm not an expert on spreadsheets either, but I think spreadsheets don't allow circular references, and the brain does: you can have feedback loops in the brain. So even if the brain doesn't have the something-still-not-understood that OP suggests, it is still more powerful than AI.
BTW, here is one explanation of why AI fails at some tasks: ask an AI whether two words rhyme and it will be quite reliable at that. But ask it to give you word pairs that rhyme, and it will fail, because it won't run an internal loop trying words and checking whether they rhyme. If some AI actually succeeds at rhyming, it does so either because such word pairs were in its training data from the get-go or because it's implemented to make multiple passes or something similar...
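The "internal loop" being described (generate candidates, check each one) is trivial to write as ordinary code. A crude sketch that uses shared spelling suffixes as a stand-in for real phonetics, which actual rhyme detection would require:

```python
def rhymes(a, b, suffix_len=2):
    """Crude rhyme test: same last `suffix_len` letters.
    A real check would compare phonemes, not spelling."""
    a, b = a.lower(), b.lower()
    return a != b and a[-suffix_len:] == b[-suffix_len:]

def rhyming_pairs(words):
    """The generate-and-check loop: try every candidate pair,
    keep only the ones that pass the check."""
    return [(w1, w2)
            for i, w1 in enumerate(words)
            for w2 in words[i + 1:]
            if rhymes(w1, w2)]

print(rhyming_pairs(["cat", "hat", "dog", "fog", "tree"]))
# → [('cat', 'hat'), ('dog', 'fog')]
```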
You can implement Doom in a spreadsheet too; so what? That wasn't the point OP or I was making. If you bother to read the sentence before OP talks about spreadsheets, they are making the conjecture that LLMs are lookup tables operating on the corpus they were trained on. That is the aspect of spreadsheets they were comparing them to, not the fact that spreadsheets can be used to implement anything that any other programming language can. Might as well say they are basically just arrays with some functions in between; yeah, no shit.
Which LLMs can’t produce rhyming pairs? Both the current ChatGPT 3.5 and 4 seem to be able to generate as many as I ask for. Was this a failure mode at some point?
> Which LLMs can’t produce rhyming pairs? Both the current ChatGPT 3.5 and 4 seem to be able to generate as many as I ask for
Only in English. If they understood language and rhymes, they would do it in every other language they know. They can't in my language, even though they can speak it fluently. They just fail. And they fail in so many other areas. I use LLMs daily for work and other things, and if you use them long enough you will see that they are statistical machines, not intelligent entities.
People are confusing the limited computational model of a transformer with the "Chinese room argument", which leads to unproductive simultaneous debates of computational theory and philosophy.
I'm not confusing anything. I'm familiar with the Chinese Room Argument and I know how LLMs work.
What I'm saying is arguably philosophically related, in that I'm saying the LLM's model is analogous to the "response book" in the room. It doesn't matter how big the book is; if the book never changes, then no learning can happen. If no learning can happen, then understanding, a process that necessarily involves active reflection on a topic, cannot exist.
You simply can't say a book "understands" anything. To understand is to contemplate and mentally model a topic to the point where you can simulate it, at least at a high level. It's dynamic.
An LLM is static. It can simulate a dynamic response by having multiple stages that dig through insanely large books of instructions that cross-reference each other and that involve calculations and bookmarks and such to come up with a result--but the books never change as part of the conversation.
Transformer is not a simple vector database doing simple lookup operation. It's doing lookup operation on a pattern, not a word. It learns patterns from the dataset. If your pattern is not there it will hallucinate or give you the wrong answer like GPT4 and Opus gave me hundreds of times already.
> LLMs can form new memories dynamically. Just pop some new data into the context.
No, that's an illusion.
The LLM itself is static. The recurrent connections form a sort of temporary memory that doesn't affect the learned behavior of the network at all.
I don't get why people who don't understand what's happening keep arguing that AIs are some sci-fi interpretation of AI. They're not. At least not yet.
It isn't temporary if you keep it permanently in context (or in a RAG store) and pass it into every model call, which is how long-term memory is being implemented both in research and in practice. And yes it obviously does affect the learned behavior. The distinction you're making between training and context is arbitrary.
Endless ink has been spilled on the most banal and useless things. Deconstructing ice cream and physical beauty from a Marxist-feminist race-conscious postmodern perspective.
Every single discussion of ‘AGI’ has endless comments exactly like this. Whatever criticism is made of an attempt to produce a reasoning machine, there’s always inevitably someone who says ‘but that’s just what our brains do, duhhh… stop trying to feel special’.
It’s boring, and it’s also completely content-free. This particular instance doesn’t even make sense: how can it be exactly the same, yet more sophisticated?
The problem is that we currently lack good definitions for crucial words such as "understanding" and we don't know how brains work, so that nobody can objectively tell whether a spreadsheet "understands" anything better than our brains. That makes these kinds of discussions quite unproductive.
I can’t define ‘understanding’ but I can certainly identify a lack of it when I see it. And LLM chatbots absolutely do not show signs of understanding. They do fine at reproducing and remixing things they’ve ‘seen’ millions of times before, but try asking them technical questions that involve logical deduction or an actual ability to do on-the-spot ‘thinking’ about new ideas. They fail miserably. ChatGPT is a smooth-talking swindler.
I suspect those who can’t see this either
(a) are software engineers amazed that a chatbot can write code, despite it having been trained on an unimaginably massive (morally ambiguously procured) dataset that probably already contains something close to the boilerplate you want anyway
(b) don’t have the sufficient level of technical knowledge to ask probing enough questions to betray the weaknesses. That is, anything you might ask is either so open-ended that almost anything coherent will look like a valid answer (this is most questions you could ask, outside of seriously technical fields) or has already been asked countless times before and is explicitly part of the training data.
Your understanding of how LLMs work isn’t at all accurate. There’s a valid debate to be had here, but it requires that both sides have a basic understanding of the subject matter.
How is it not accurate? I haven't said anything about the internal workings of an LLM, just what it is able to produce (which is based on observation).
I have more than a basic understanding of the subject matter (neural networks; specifically transformers, etc.). It’s actually not a hugely technical field.
By the way, it appears that you are in category (a).
As the comment I replied to very correctly said, we don’t know how the brain produces cognition. So you certainly cannot discard the hypothesis that it works through “parroting” a weighted average of training data just as LLMs are alleged to do.
Considering that LLMs with a much smaller number of neurons than the brain are in many cases producing human-level output, there is some evidence, if circumstantial, that our brains may be doing something similar.
Copyright covers "derivative works." Verbatim is absolutely not a requirement for infringement.
If you take a copyrighted image and modify it, even to the point where it's unrecognizable, if the image is being used in the same way (i.e., isn't a "transformative use"), then it's still a derivative work.
Yes, you are likely to get away with it if nobody notices. But that doesn't mean what you're doing is considered fair use, just that you won't get sued.
Thing is, every piece of text generated by ChatGPT is incrementally using every character of training data. So legally speaking, everything it produces is arguably a derivative work of ALL of the training data.
Generative AI isn't even a legal gray area; under current law, there's no blanket exception for "how much" of a copyrighted work is used. At best there's a fair use _guideline_ that lists, as one of four criteria, the amount and nature of the copyrighted work used. But really it's the entirety of millions of copyrighted works being used to generate the models, and those works _can_ be reproduced verbatim in many cases, proving that the works are encoded into the model.
Generative AI is only permitted because there's big money behind it along with associated lobbyists. And there are many in-flight lawsuits trying to shut down both GPT and various art-generating AIs.
Maybe they'll change the law. Maybe courts will side with the AI companies. But until then, it seems obvious to me that anyone arguing that generative AI based on models built with copyrighted works is completely legal is using motivated reasoning.
I understand OpenAI is a US company, but this is a US-centric view. This is especially since TFA is about a Brazillian operation.
> under current law, there's no blanket exception for "how much" of a copyrighted work is used
Under fair dealing laws, there are. [1] Though, as always, if commercial fan art is legal, then so should something that uses only a couple bytes of information per work, bar overfits.
> But until then, it seems obvious to me that anyone arguing that generative AI based on models built with copyrighted works is completely legal is using motivated reasoning.
It is completely legal in the EU, Japan, South Korea and Singapore. [2]
Your link re: Fair Dealing guidelines does NOT make it 100% legal. For one, the ENTIRE works are encoded into the model--not a part of them. For another, those are just guidelines, not explicit exceptions, just like Fair Use in the US. It's all very hand-wavy, even more so in the UK, apparently, so there's no way you can list those guidelines and say that anything is clearly allowed.
Your second link means it's legal for them to CREATE THE MODEL. This is true in the US as well: The model is a clearly transformative use of the data.
But as soon as the model produces works in the same use category as the original work (code -> model -> code, for instance, or image -> model -> image), it is no longer transformative.
If you understand the law and the technology, it's clearly generating derivative works.
Entire works are "encoded" in the model in the same way that, if I cut up a document into individual words and put them in a bag with a bunch of other documents, I could, with enough time to waste, spend ages "recreating" the document from individual words. The bag of cut-out words is NOT a copyright violation, though.
> How could they prevent the framework become over bloated with semi baked plug-in?
...not sure how they plan to, but how they COULD do it is by making it easy enough to access native resources directly from the scripting language (like NativeScript), or by making it so easy to write native code (Kotlin/Swift are listed as first-class options) that you just write any platform-specific API access code in the appropriate native language.
It doesn't give you the Electron "write once run everywhere" experience, since you need to write some of the code per-platform, but many apps are 95% UI and only 5% platform-specific functionality. So by abstracting the UI by having it be HTML/CSS/JavaScript, you're getting a "write once run everywhere UI" and the minority of the code that needs to differ is all you have to maintain per-platform.
If writing a plug-in is a high bar, then you get tons of semi-baked plug-ins as the (seemingly) only way to access native features. If instead you can drop in native code easily and quickly, then you can focus on app development and cut out the middleware. ;)
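The split described above (shared UI code over a thin per-platform layer) can be sketched as an interface that each platform implements natively. The names below are illustrative, not any real framework's API:

```python
from abc import ABC, abstractmethod

class NativeBridge(ABC):
    """The ~5% that differs per platform; each OS ships its own implementation."""
    @abstractmethod
    def show_notification(self, title: str, body: str) -> str: ...

class LinuxBridge(NativeBridge):
    def show_notification(self, title: str, body: str) -> str:
        # Stand-in for a real native call (e.g. shelling out to notify-send).
        return f"[notify-send] {title}: {body}"

def app_ui(bridge: NativeBridge) -> str:
    """The ~95% that's shared: written once against the bridge interface."""
    return bridge.show_notification("Build", "done")

print(app_ui(LinuxBridge()))  # → [notify-send] Build: done
```

Supporting a new platform then means implementing the bridge, not rewriting the app.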
This reminds me of the "Parable of the Two Programmers." [1] A story about what happens to a brilliant developer given an identical task to a mediocre developer.
I had an idea once but when I tried to explain it people didn't understand.
I revisited an earlier thought: communication is a two-person job, where one person makes no effort to understand while the other explains things poorly. It always manages to never work out.
Periodically I thought about the puzzle and was eventually able to explain it such that people thought it was brilliant, though much too complex to execute.
I thought about it some more, years went by and I eventually managed to make it easy to understand. The response: "If it was that simple someone else would have thought of it." I still find it hilarious decades later.
It pops to mind often when I rewrite some code and it goes from almost unreadable to something simple and elegant. Ah, this must be how someone else would have done it!
“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”
― Abraham Lincoln
I have started to follow this 'lately' (for a decade) and it has worked miracles. As for the anxious managers/clients, I keep them updated on the design/documentation/thought process, mentioning the risks of the path-not-taken, and that maintains their peace of mind. But this depends heavily on the client and the managers.
I can't seem to find it in a google search, maybe I'm just recalling entirely the wrong terms.
In the early computing era there was a competition. Something like take some input and produce an output. One programmer made a large program in (IIRC) Fortran with complex specifications documentation etc. The other used shell pipes, sort, and a small handful or two of other programs in a pipeline to accomplish the same task in like 10 developer min.
"""“And who better understands the Unix-nature?” Master Foo asked. “Is it he who writes the ten thousand lines, or he who, perceiving the emptiness of the task, gains merit by not coding?”"""
This is the competition I was thinking of. I must have read it in a dead-image PDF version some other time on HN. This paper isn't the one I recall but the solution is exactly the sort I vaguely recalled.
I'm trying to copy-in the program as it might have existed, with some obvious updates to work in today's shells ...
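For reference, McIlroy's published solution was a six-command pipeline that still runs essentially unchanged in today's shells. Here it is wrapped in a small function for convenience (the wrapper is my addition):

```shell
# wf: print the $1 most frequent words of stdin, one per line with counts
# (McIlroy's pipeline, lightly modernized).
wf() {
  tr -cs A-Za-z '\n' |  # squeeze every run of non-letters into a newline: one word per line
  tr A-Z a-z |          # fold to lowercase
  sort |                # group identical words together
  uniq -c |             # count each word
  sort -rn |            # most frequent first
  sed "${1}q"           # stop after the top $1 lines
}

printf 'the quick the lazy the dog\n' | wf 1   # prints the most common word ("the") with its count
```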
[For those who may not follow all the links: Bentley asked Knuth to write a program in Pascal (WEB) to illustrate literate programming—i.e. explaining a long complicated program—and so Knuth wrote a beautiful program with a custom data structure (hash-packed tries). Bentley then asked McIlroy to review the program. In the second half of the review, McIlroy (the inventor of Unix pipes) questioned the problem itself (the idea of writing a program from scratch), and used the opportunity to evangelize Unix and Unix pipes (at the time not widely known or available).]
I was both of those developers at different times, at least metaphorically.
I drank from the OO koolaid at one point. I was really into building things up using OOD and creating extensible, flexible code to accomplish everything.
And when I showed some code I'd written to my brother, he (rightly) scoffed and said that should have been 2-3 lines of shell script.
And I was enlightened. ;)
Like, I seriously rebuilt my programming philosophy practically from the ground up after that one comment. It's cool having a really smart brother, even if he's younger than me. :)
This is unrelated to the excellent story, but it's annoying that the repost has the following "correction":
> The manager of Charles has by now [become] tired of seeing him goof off.
"The manager has tired of Charles" is as correct as "the manager has become tired of Charles". To tire is a verb. The square bracket correction is unnecessary and arguably makes the sentence worse.
Without more backup I can only describe that as being fiction. Righteous fiction, where the good guy gets downtrodden and the bad guy wins to fuel the reader's resentment.
Sometimes I'm appreciated, and managers actually realize what they have when I create something for them. Frequently I accomplish borderline miracles and a manager will look at me and say, "OK, what about this other thing?"
My first job out of college, I was working for a company run by a guy who said to me, "Programmers are a dime a dozen."
He also said to me, after I quit, after his client refused to give him any more work unless he guaranteed that I was the lead developer on it, "I can't believe you quit." I simply shrugged and thought, "Maybe you shouldn't have treated me like crap, including not even matching the other offer I got."
I've also made quite a lot of money "Rescuing Small Companies From Code Disasters. (TM)" ;) Yes, that's my catch phrase. So I've seen the messes that teams often create.
The "incompetent" team code description in the story is practically prescient. I've seen the results of exactly that kind of management and team a dozen times. Things that, given the same project description, I could have created in 1/100 the code and with much more overall flexibility. I've literally thrown out entire projects like that and replaced them with the much smaller, tighter, and faster code that does more than the original project.
So all I can say is: Find better teams to work with if you think this is fiction. This resonates with me because it contains industry Truth.
To me it is a story about managers clueless about the work. You can make all the effort in the world to imagine doing something, but the taste of the soup is in the eating. I do very simple physical grunt work for a living; there it is much more obvious that such guessing is impossible. It's truly hilarious.
They probably deserve more praise when they do guess correctly but would anyone really know when it happens?
Yes: Programmers who start at twelve are often the 10x programmers who can really program faster than the average developer by a lot.
No: It's not because they have 10 more years of experience. Read "The Mythical Man Month." That's the book that popularized the concept that some developers were 5-25x faster than others. One of the takeaways was that the speed of a developer was not correlated with experience. At all.
That said, the kind of person who can learn programming at 12 might just be the kind of person who is really good at programming.
I started learning programming concepts at 11-12. I'm not the best programmer I know, but when I started out in the industry at 22 I was working with developers with 10+ years of (real) experience on me...and I was able to come in and improve on their code to an extreme degree. I was completing my projects faster than other senior developers. With less than two years of experience in the industry I was promoted to "senior" developer and put on a project as lead (and sole) developer and my project was the only one to be completed on time, and with no defects. (This is video game industry, so it wasn't exactly a super-simple project; at the time this meant games written 100% in assembly language with all kinds of memory and performance constraints, and a single bug meant Nintendo would reject the image and make you fix the problem. We got our cartridge approved the first time through.)
Some programmers are just faster and more intuitive with programming than others. This shouldn't be a surprise. Some writers are better and faster than others. Some artists are better and faster than others. Some architects are better and faster than others. Some product designers are better and faster than others. It's not all about the number of hours of practice in any of these cases; yes, the best in a field often practices an insane amount. But the very top in each field, despite having similar numbers of hours of practice and experience, can vary in skill by an insane amount. Even some of the best in each field are vastly different in speed: You can have an artist who takes years to paint a single painting, and another who does several per week, but of similar ultimate quality. Humans have different aptitudes. This shouldn't even be controversial.
I do wonder if the "learned programming at 12" has anything to do with it: Most people will only ever be able to speak a language as fluently as a native speaker if they learn it before they're about 13-14 years old. After that the brain (again, for most people; this isn't universal) apparently becomes less flexible. In MRI studies they can actually detect differences between the parts of the brain used to learn a foreign language as an adult vs. as a tween or early teen. So there's a chance that early exposure to the right concepts actually reshapes the brain. But that's just conjecture mixed with my intuition of the situation: When I observe "normal" developers program, it really feels like I'm a native speaker and they're trying to convert between an alien way of thinking about a problem into a foreign language they're not that familiar with.
AND...there may not be a need to explicitly PROGRAM before you're 15 to be good at it as an adult. There are video games that exercise similar brain regions that could substitute for actual programming experience. AND I may be 100% wrong. Would be good for someone to fund some studies.
That childhood native-fluency analogy is insightful! Your experience matches mine.
I started programming at age 7 and it's true that the way code forms in my head feels similar to the way words form when I'm writing or speaking in English. In the same way that I don't stop and consciously figure out whether to use the past or present tense while I'm talking, I usually don't consciously think about, say, what kind of looping construct I'm about to use; it's just the natural-feeling way to express the idea I'm trying to convey. The idea itself is kind of already in the form of mental code in the same way that my thoughts are kind of already in English if I'm speaking.
But... maybe that's how it is for everyone, even people who learned later? I only know how it is in my own head.
I totally get the same sense that I'm just "communicating" using code. I just write out the code that expresses the concepts I have in my head.
And at least some people clearly don't. I was talking to one guy who said that even for a simple for-each loop it was way faster for him to "Google the code he needs and modify it" than to write it. This boggled me. I couldn't imagine being able to Google and parse results and find the one I wanted and copy and paste it and modify it being faster than just writing the code.
Even famous developers brag about their inability to code. DHH (creator of Ruby on Rails) has a tweet where he brags that he couldn't code a bubble sort without Googling it. A nested loop with a single compare and swap...and he's "proud" of the fact that he needs to Google it?
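For what it's worth, the bubble sort in question really is just that nested loop with a single compare-and-swap; a minimal Python sketch (function name mine):

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):           # after pass i, the last i+1 slots are sorted
        for j in range(n - 1 - i):   # walk the unsorted prefix
            if items[j] > items[j + 1]:
                # the single compare-and-swap
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```

That's the whole algorithm: `bubble_sort([3, 1, 2])` gives `[1, 2, 3]`.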
The association with video games in your last paragraph makes a lot of sense to me. This is how I feel solving problems.
I always thought that people who start at 12 and keep at it are good because they really love it. I see people who struggle a lot with learning, and it's because they hate it but are doing it for other reasons.
> Most people will only ever be able to speak a language as fluently as a native speaker if they learn it before they're about 13-14 years old.
Very few people both have a ton of exposure to a language and actually study the grammar and stuff as adults.
If you don't learn the grammar, you'll still speak brokenly even after living in the country for 20 years.
A lot of people at an average company don't write anything hard at their jobs, have never read any textbooks, and spend loads of time in meetings.
> Very few people both have a ton of exposure to a language and actually study the grammar and stuff as adults.
Very few people actually learn to speak a language as a native speaker by "studying the grammar."
I remember people trying to learn what was and what wasn't a run-on sentence in junior high school, and being shocked that they had a hard time telling the difference.
And studying a language explicitly doesn't shift it into the same brain regions that a native speaker uses.
And that's my point. I didn't really "study" programming explicitly so much as understand it intuitively. When exposed to a new concept, I just immediately internalize it; I don't need to use it a bunch of times and intentionally practice it. I just need to see it, and it's obvious and becomes part of my tool-set.
Honestly, most everything listed on the page as an advantage of Zig, is a disadvantage from my point of view.
I'm sure Zig has its use cases. For what I write, I not only don't care if there's a hidden function call or hidden error handling; I see those as 100% necessary for a modern language.
Needing to handle errors inline is a huge mess for anything nontrivial. It distracts from the logic that's important at that point in the code. Being able to override an accessor to do something instead of being a raw access is incredibly useful; a tiny change and rebuild is all that's required to track information that you would otherwise need to rewrite an entire app to support.
If you're writing extremely low level code and libraries, especially embedded, then fine, minimizing hidden behavior is important. Being able to operate without a standard library is also important in that case. Outside of that niche, though, there are few places I'd call those "features" of Zig an advantage.
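To make the accessor point concrete, here's a hedged sketch (Python standing in for any language with overridable accessors; all names are hypothetical): a raw attribute becomes a property that also records reads, and no call site anywhere in the app has to change.

```python
class Config:
    def __init__(self, timeout):
        self._timeout = timeout
        self.reads = 0  # instrumentation added later; callers are unchanged

    @property
    def timeout(self):
        # What used to be a raw attribute access is now a function call that
        # also tracks usage. Every existing `cfg.timeout` picks this up after
        # a rebuild -- no rewrite of the rest of the app required.
        self.reads += 1
        return self._timeout
```

This is exactly the "hidden function call" Zig forbids: the caller writes what looks like a plain field read, and behavior can be layered in behind it.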
Yeah, Zig just doesn't offer enough advantages over "I'll just use C++ as a better C, with RAII, containers, a safe string class, template functions, very simple classes/powerful structs (no inheritance), and a threading standard library."
Another way to say "decide they're against the merger" is "evaluate the situation and make a timely ruling that they oppose the merger as illegal."
Which is exactly what they were supposed to do. Adobe and Figma tried to argue with the regulators or find a compromise, but couldn't come up with a solution that satisfied all parties.
If you try to extract subtle implications from the phrasing of a commenter on HN, you're likely to jump to the wrong conclusion.
But that's the point: it's not timely. It's a massive millstone for Figma, who now have to figure out what to do with their pixel-perfect UI design collaboration tool in a world with an up-and-coming Adobe Firefly.
The output of LLMs is ... rarely well-designed. Well-documented (with often incorrect documentation), well-formatted for sure, but profoundly not well-designed, unless you're asking for something so small that the design is trivial.
Even with GPT-4, if you ask it for anything interesting, it often produces code that not only won't work, but that couldn't possibly work without a major rewrite.
Not sure what you've been requesting if it's always been good output. Even when asking GPT-4 for docs I've had it hallucinate imaginary APIs and parameters more often than not.
Maybe the questions I ask are not as common? Given my experiences, though, I wouldn't recommend it to anyone for fear it gave them profoundly bad advice.
I've come to the conclusion that GPT produces code at the level of a new graduate, at best. When actually getting it to solve something more or less on its own, it did OK on simple tasks and failed as soon as requirements became a bit more nuanced or specific. It's also not very good at thinking outside the box: its solutions are all very clearly tied to its training data, meaning it struggles with anything that strays too far into the abstract or unfamiliar.
However, it's been great as my rubber duck, and great as a tool for helping me write, e.g., complex SQL queries: never without me being a key part of the loop, but as a tool to help me fill in gaps in my own skills or understanding. That is, it amplified my abilities. It was also pretty good at creating interesting metaphors for existing concepts, explaining terminology, and even explaining bits of code I gave it.
My experience as well. Heavy GPT-4 use (for a variety of things). Great for boilerplate, great for retrieving well-known examples from documentation, saves a fair amount of time typing and googling, but often completely wrong (majorly and subtly) and anything non-trivial I have to do myself.
Great tool! Saves a ton of time! Not a dev replacement (yet)
Now, I am definitely doing much simpler things than you folks are, I'd wager, but I have found that with a bit of back and forth you can get pretty good results that work with only a bit of revision. I have found that reminding it of the purpose or goals of whatever it is you're working on at the moment tends to make the output a bit more consistent.
> with a bit of back and forth you can get pretty good results that work with only a bit of revision
The problem for me is that the "back and forth" and "a bit of revision" steps very often end up taking more time than writing the code myself would have.
That's because you actually know what you're doing, haha.
In all seriousness I am not a software engineer and GPT has enabled me to build things in a couple weeks that would have taken me months of effort to create otherwise.
I am sure an actual software engineer could have made those same tools in a day or two, but it's still incredible for my use case.