I spent the first years of my career writing a "no code" signal processing platform. It had its ups and downs. We did have a lot of non-coder users, and it made prototyping simple-to-moderately-complex pipelines quick. However, relatively soon, for anything actually useful we had to write more components, some of which were not really reusable, because it's non-trivial to hide the complexity. At some point most workflows ended up in a "Python script executor", because in the end code is always more expressive.
Also, no version control. Or, in our case, since we used XML for output: poor version control. Horror stories about Excel also confirm this: it is hard to write correct, complex programs in visual environments.
I run a no-code agency, and my conclusions match yours: no-code tools are great for getting up and running, and for involving less technical people in process development. But sooner or later you'll need to do something outside the boundaries of the no-code platform's feature set, at which point extending it becomes a huge mess, because you are then forced to write code and hook it up to the no-code process in very strange and awkward ways.
Like you say, lack of source control is the biggest pain point.
Agreed. Reminds me of GWT (Google Web Toolkit). The marketing message sounded very good: let your Java developers write Java, and GWT compiles it into JavaScript and runs it on the web page.
In reality, it only works for "hello world" with two buttons. Anything non-trivial requires developers who are experts in Java, in JavaScript, and in the GWT compiler/translator that turns the Java into JavaScript. Debugging machine-generated code was hell.
Eh, I built one GWT app by myself in 2011-12, maybe 40K lines of GWT code. First web app I'd ever worked on - all my existing experience was desktop (C#, C++, Java, Python). It worked as advertised, and let me build something useful. Another team has a GWT app that we are just now finally working on migrating. Anecdotally, I've seen user group discussions of very large enterprise GWT apps. So, the technology does actually work as advertised.
Having said that, GWT stagnated while the modern JS ecosystem evolved. GWT's debugger plugin died, and JS build and debug tooling kept getting better. React is a fundamentally superior approach to building UIs. And, while I'm biased, the React+Redux+TS combo is a solid toolset for building apps, and the ability to actually develop and debug code is way better than what GWT ever provided.
> Also, no version control. Or, in our case, since we used XML for output: poor version control. Horror stories about Excel also confirm this: it is hard to write correct, complex programs in visual environments.
Just because some visual systems don’t have version control doesn’t mean they can’t exist. It isn’t like text-based diffing just poofed into existence. Someone had to build them, and even then they aren’t all created equal. For example, I use P4Merge to diff and merge from Git instead of using the poor UX of the built-in terminal tools.
I would argue that visual programming languages are much more amenable to diffing and merging than text, since they usually already represent a graph.
And yet, semantic diff and merge is a thing exactly because text is not universal when it comes to programming. Text is universal when you want to diff text but not when you want to diff programs.
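To make that concrete: if a visual program is stored as a graph rather than as flattened text, a structural diff falls out of simple set operations. The node/edge representation below is a made-up format for illustration, not any real tool's on-disk schema.

```python
# Hypothetical sketch: diffing two visual programs represented as graphs.
# Each graph is {"nodes": {id: attrs}, "edges": set of (src, dst)} -
# an invented format, purely for illustration.

def diff_graph(old, new):
    """Return added/removed/changed nodes and added/removed edges."""
    shared = old["nodes"].keys() & new["nodes"].keys()
    return {
        "nodes_added": new["nodes"].keys() - old["nodes"].keys(),
        "nodes_removed": old["nodes"].keys() - new["nodes"].keys(),
        "nodes_changed": {n for n in shared
                          if old["nodes"][n] != new["nodes"][n]},
        "edges_added": new["edges"] - old["edges"],
        "edges_removed": old["edges"] - new["edges"],
    }

old = {"nodes": {"a": {"op": "load"}, "b": {"op": "filter", "cutoff": 10}},
       "edges": {("a", "b")}}
new = {"nodes": {"a": {"op": "load"}, "b": {"op": "filter", "cutoff": 20},
                 "c": {"op": "plot"}},
       "edges": {("a", "b"), ("b", "c")}}

d = diff_graph(old, new)
print(d["nodes_added"])    # {'c'}
print(d["nodes_changed"])  # {'b'}
print(d["edges_added"])    # {('b', 'c')}
```

A real tool would also need graph layout and rendering to present this usefully, which is where most of the hard UX work lives.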
Speaking as someone who's been working in the space of collaborative editors for a decade or so:
There are universal-ish formats for data (e.g. XML, JSON). There are also sets of pretty standard operations for modifying data - for example, "insert", "move", "remove", etc. The same basic operations show up again and again for a reason - in ShareDB, Automerge, Yjs, etc. And you can use those operations to implement most applications.
I don't think semantic diff is ever what you want. Ideally you want your editor to capture the user's intent directly through the semantics of their actions. (Signal is lost reconstructing that in a diffing tool). But I bet visual programming could be expressed pretty well in a standard language of semantic changes. And then version control is something you could build on top of that in a pretty straightforward, and reusable way. (It'd be an awful lot of work - but I doubt there's unknown unknowns lurking out there.)
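As a toy illustration of that idea, here is what a log of intent-level operations on a list document might look like. The operation names mirror the generic insert/move/remove vocabulary above; the shapes are invented for this sketch and are not ShareDB's or Automerge's actual API.

```python
# Sketch of an operation-based change log, loosely in the spirit of
# ShareDB/Automerge/Yjs. Operation shapes are illustrative only.

def apply_op(doc, op):
    """doc is a list; each op records intent rather than a before/after state."""
    kind = op["op"]
    if kind == "insert":
        doc.insert(op["index"], op["value"])
    elif kind == "remove":
        del doc[op["index"]]
    elif kind == "move":
        doc.insert(op["to"], doc.pop(op["from"]))
    return doc

log = [
    {"op": "insert", "index": 0, "value": "a"},
    {"op": "insert", "index": 1, "value": "b"},
    {"op": "insert", "index": 2, "value": "c"},
    {"op": "move", "from": 2, "to": 0},
]

doc = []
for op in log:
    apply_op(doc, op)
print(doc)  # ['c', 'a', 'b']
```

Version control on top of this is then a matter of storing and replaying the log, rather than reconstructing changes from snapshots.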
> Ideally you want your editor to capture the user's intent directly through the semantics of their actions. (Signal is lost reconstructing that in a diffing tool).
I'm trying to understand this part. Would you give an example of semantic diff losing intent signal?
An example: say you have a counter value. There are two types of operations users can make - either reset the counter to some value, or increment it. The counter was 20 and now it's 25. Did the user set the counter to 25, or did they increment it 5 times? When there's only one editor, it doesn't matter - the result is 25 either way. But if two users both edited the value at the same time (the other user changed 20 to 30), now we have two different results based on the users' intent. Either the new value should be 35 (20+5+10), or we should pick one of the results (25 or 30) and just converge to that new value.
You can't tell what the user's intent was by simply diffing the old and new contents. The right approach is to capture the user's intent directly from the software that they use to edit the value, and then preserve that intent through the synchronisation system.
This problem is also easy to reproduce with edits on lists / strings which contain repeated elements.
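The counter example can be sketched in a few lines. This is purely illustrative: the two operation kinds carry the intent, and replaying them shows why increments commute between concurrent editors while a "set" depends on replay order - something a state diff (20 → 25) can never tell you.

```python
# Sketch of the counter example: operations carry intent, and replaying
# them shows why a raw state diff is ambiguous.

def merge(base, ops_a, ops_b):
    """Replay user A's ops, then user B's, on the shared base value."""
    value = base
    for kind, n in ops_a + ops_b:
        if kind == "set":
            value = n
        elif kind == "inc":
            value += n
    return value

# Increments commute: the order of concurrent edits doesn't matter.
print(merge(20, [("inc", 5)], [("inc", 10)]))   # 35
print(merge(20, [("inc", 10)], [("inc", 5)]))   # 35

# A "set" does not commute: the merged result depends on replay order,
# even though A's edit looks identical (20 -> 25) in a state diff.
print(merge(20, [("set", 25)], [("inc", 10)]))  # 35
print(merge(20, [("inc", 10)], [("set", 25)]))  # 25
```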
This kind of works but users don't always express their intent in their edit actions, either because no available edit action can fully capture the intent or because it was simpler to achieve the desired state in a different way.
Changing 999999999 to 1000000001, do you increment by two or re-type the whole number?
It is difficult to make users think in (invisible) state changes.
Small commits are all about trying to preserve an explanation of change intent - but they're not ideal, because you can end up writing a lot of incidental code just to keep them actually working if they get merged to master.
Whereas looking at the sum of a big commit, you just get a mess which doesn't tell you much of anything unless it's limited solely to inserting discrete blocks.
Whereas ideally what you really want to know is "there's 37 actions replacing the use of variable Y with a call to function X being passed Y" and the types are the same in all cases.
Sure, semantic diff is nice, but it is purely optional. One can diff programs as text and get pretty far -- after all, that's what practically every version control system does today.
The current products do have a bit of semantic knowledge, in the form of syntax highlights and function navigation, but those do not require full language understanding, fail gracefully if they are wrong, and often can be implemented with just a bunch of regexes.
This then makes writing a new text-based tool simpler (even if the semantics are wrong, it is still usable). It also makes writing a new text-based programming language simpler (even though existing tools don't know my language's semantics, they can still work with the text, including diffs).
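For what it's worth, diffing programs as plain text needs nothing beyond a standard library. For example, Python's difflib produces the familiar unified format from two lists of lines:

```python
# Diffing two versions of a function as plain text with Python's
# standard difflib, much like a version control system would.
import difflib

old = ["def area(r):", "    return 3.14 * r * r"]
new = ["import math", "", "def area(r):", "    return math.pi * r ** 2"]

for line in difflib.unified_diff(old, new, fromfile="a.py", tofile="b.py",
                                 lineterm=""):
    print(line)
```

The diff knows nothing about Python semantics, yet the result is still readable, mergeable, and tool-agnostic - which is exactly the point being made above.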
That sounds like vaporware to me. I might be interested in adopting a visual programming system that had good version control. But so far they don't actually exist.
Graph annotations matter a lot for diffing, I think. This is actually a good space to explore, as it is a hard problem and would require some serious UX research on how to handle it.
I feel that the one-dimensionality of text makes merging and reasoning about diffs simple, because they are easy to represent side by side. To diff something like an image, or a graph, you probably want to see them on top of each other, however that makes it harder to see the whole.
> To diff something like an image, or a graph, you probably want to see them on top of each other
I look at revisions of Keynote slides this way (side by side) via Apple's "Show Revision History", a feature I'm frankly much too reliant on. The only difference is that with code, editors actually highlight the changes between snapshots. Ideally, you'd want to add that.
For Excel there is Microsoft Spreadsheet Compare, which shows the files side-by-side, highlighting differences in data/formulas/properties, and also shows the differences as a list. It can be quite clever, e.g. figuring out that columns or rows have been added or deleted (much like a diff on a text file).
Going to guess GNURadio, based on calling it a Python script executor that outputs XML. If that's true, I'd say that it really starts to break down when the desired logic departs from the "apply sequential steps to a stream of contiguous data" paradigm.
I worked for a startup that built a no-code platform. The experience you had was the same experience that the majority of our customers had. It's a general problem with low code. We tried to make the software easily extendable, so that developers could refactor away the complexity by easily adding new tools to the design surface. The approach worked quite well for internal projects, but was at odds with the marketing approach to the product, so it was never pushed to clients. Clients were sold a utopia of business analysts dropping some components onto a designer and hey presto. Low code can work, but in the same way as traditional dev, where code should be refactored into components that are easy to reason about. The problem is that this is hard to sell. There is a very nice balance to be had in combining low code with traditional dev; it's just very hard to find that balance. It's far easier to write good code in a traditional language than in a low-code environment. If you add to that the expectations created by low-code demos, it's mostly a recipe for disaster.
Difficulty in version control and testing is one of the reasons I don't think I could ever work in a no-code environment. It's an issue that I'm sure many people don't care much about but it's also a big reason I doubt no-code will ever overtake custom code solutions.
I had a similar experience to yours. I worked on a neuroscience/BCI-focused signal processing platform, and we also came up with the idea of a "Python script executor". We used YAML, though, to store the pipelines. It seems like a very recurring problem.
The software I worked on has striking similarities to OpenViBE. I am surprised we didn't find the software you worked on when we were looking around. There are also other programs you might find similar - NeuroPype & Open Ephys.
Our first milestone was a speller similar to yours; however, we worked with MEAs rather than EEG. Small world!
I have been building "no code" tools since 2004. I am also the author of Codeflow, a not-yet-open-sourced visual programming platform. I wish "no code" distinguished between frontend no code (Webflow, Wix, etc.), backend no code (Codeflow, MuleSoft, etc.) and full-stack no code (outflow, Mendix, etc.). First of all, "no code" has been used for decades in Fortune 500 companies, and many of their major backend services run on yesteryear's no code. In fact, it wouldn't be wrong to state that all big financial companies use no code as part of their major backend stack (it goes by different names, such as middleware/SOA/integration/ETL), but these are all visual, drag-and-drop backend/business-logic builders that would have required complex coding without them.
In my decades of experience there are of course many ups to no code, but as with any paradigm, the downs are highlighted the most, especially when you want to disrupt the text-based programming world that has existed for almost a century now.
The major complaints about visual programming environment (backend) are:
1. Code is more expressive for experienced devs
It's true that code is much more expressive than generic graphical programming platforms for writing algorithms. But how many of us write algorithms on a daily basis? What most of us do is glue various reusable pieces together to build business logic. And for gluing, graphical programming is actually superior - you can discover components easily, dependent fields can be expressed easily, etc. You still need to use code in a properly designed no-code system, but it's only about 5% of the project. Now, coming back to code being more expressive: if you look at algorithmic visual programming platforms designed for expressive coding, like DRAKON (https://en.wikipedia.org/wiki/DRAKON), you can see that they are equally good or even better than coding.
2. No or degraded version control experience.
This is true: visual programs are still written to file as XML or JSON, and there is a disconnect between committing that and seeing the actual programs as graphs. This is a difficult problem to solve.
3. Incompatible or poor tooling
This is related to the above point. Because visual programming tools ship with a custom-made IDE, the common tooling and CLI options available for generic text-based programs can be lacking.
4. Hitting a wall
This is one of the biggest drawbacks I have heard from devs so far. To be honest, which programming platform has not made you hit a wall? In Node.js you hit the multi-threading wall; in Java you hit the thread-safety/performance (high memory usage) wall; in Python you hit the performance wall. How do you solve it? Well, in Python you solve it using C or ASM libraries and expose them to Python. But you seldom hear Python devs crying that they reverted to C because Python is slow or lacking, because you learn to use the right tool for the right purpose. Similarly, a well-designed visual tool will let you do a few things extremely well. But just as no single programming platform can do everything extremely well, the same goes for no-code or visual tools - e.g. see DRAKON for writing expressive or algorithmic code.
5. Performance not on par.
Most visual programming tools tend not to focus on performance, as the priority is different. However, this can be solved by compiling the graphs to native code.
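As a toy sketch of that idea: a dataflow graph can be lowered to straight-line source code instead of being interpreted node by node. The node types here ("const", "add", "mul") and the graph shape are invented for the example; a real compiler would target native code rather than generated Python.

```python
# Toy illustration of compiling a dataflow graph down to code instead of
# interpreting it. Node types and graph format are made up for the sketch.

def compile_graph(nodes):
    """nodes: topologically ordered list of (name, op, inputs); emits source."""
    lines = []
    for name, op, inputs in nodes:
        if op == "const":
            lines.append(f"{name} = {inputs[0]}")
        elif op == "add":
            lines.append(f"{name} = {inputs[0]} + {inputs[1]}")
        elif op == "mul":
            lines.append(f"{name} = {inputs[0]} * {inputs[1]}")
    return "\n".join(lines)

graph = [
    ("x", "const", [3]),
    ("y", "const", [4]),
    ("s", "add", ["x", "y"]),
    ("out", "mul", ["s", "s"]),
]

src = compile_graph(graph)   # "x = 3\ny = 4\ns = x + y\nout = s * s"
scope = {}
exec(src, scope)
print(scope["out"])  # 49
```

Once the graph is lowered to ordinary code, all the usual compiler optimizations apply, which is how the interpretation overhead goes away.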
When we built Codeflow, we tried to solve 4 and 5, and in our opinion we succeeded to a good extent. 2 and 3 require more effort from the vendor and the community (which is hard because every visual tool is unique), and 1 is possible but requires more research.
One way to think about visual programming is this: imagine a world where graphical vector image editing tools evolved without the mouse - you literally have to code everything, right down to the pixel you need to edit, the selection, everything. Smart designers (or rather, coders) became experts in it, and they can produce some great-quality images. The tooling is also great, as the tools have existed for decades, with hundreds of libraries and an ecosystem surrounding them. Now you come up with a new paradigm where you can use the mouse, making it easy for novice designers. There are two benefits you tout: more artists, as opposed to coders, can now create images, and it's much easier to build the images. The coders, on the other hand, are not in favour of the new tool, both consciously and subconsciously. Consciously, because the new tool, while better in many aspects, is nowhere near the previous precise tool in expressing the image, and in tooling as well (version control, transformations, plugins, etc.). Subconsciously, because somewhere they feel their decades of experience are no longer relevant. This is controversial, but I have felt it when interacting with senior devs. (Similar to how many don't want to move away from vi/m to VS Code, for example.)
On Mac, Apple apps and other "Mac-native" programs save revisions automatically in the background. At any point, the user can select "Show Revision History" and scroll through a timeline of changes. I'm not sure if this works with Excel, but it does with iWork/Numbers, and I use it all the time.
Now, Apple's implementation is just a simple timeline—there's no equivalent of version tagging, or git blame. But all of that could be done within a graphical environment similar to Apple's. It wouldn't be as advanced as Git, but I bet it could do what 95% of people actually use Git for.
You can also commit Excel files to a git repo. That lets you get back to old revisions and look at the commit messages. But you're not going to get a diff or git blame, which is what the parent comment really meant.
> But you're not going to get a diff or git blame, which is what the parent comment really meant.
You could though. Microsoft (or anyone else creating a "no code" platform) could absolutely build that into a graphical timeline view. Select these two snapshots, and show which cells changed.
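A minimal version of that snapshot comparison is not much code, if you imagine each snapshot as a mapping from cell references to stored values (a big simplification of a real workbook format, which also carries formats, merged ranges, and so on):

```python
# Hypothetical sketch of "pick two snapshots, show which cells changed".
# Each snapshot is a dict of cell ref -> value; real workbook formats
# are far richer than this.

def diff_cells(old, new):
    """Return {cell: (old_value, new_value)} for every changed cell."""
    changed = {}
    for ref in sorted(old.keys() | new.keys()):
        if old.get(ref) != new.get(ref):
            changed[ref] = (old.get(ref), new.get(ref))
    return changed

snap1 = {"A1": "Revenue", "B1": 100, "B2": "=B1*0.2"}
snap2 = {"A1": "Revenue", "B1": 120, "B2": "=B1*0.25", "B3": "=B1+B2"}

for ref, (before, after) in diff_cells(snap1, snap2).items():
    print(f"{ref}: {before!r} -> {after!r}")
```

The hard part is not computing the change set but presenting it well - detecting inserted rows/columns, showing formula versus value changes, and so on.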
It turns out to be an extremely non-trivial problem.
The hidden benefit of (text-based) code environments is that problem is well understood and well solved for information represented as two strings of lines.
Non-text code paradigms fight an uphill battle for utility because they each have to reinvent that wheel, and many of the solutions are not portable to similar problems. I've used the diffing tools in LabVIEW, and they definitely work, but I wouldn't want to have to rely on them for a large-scale project.
This exists, at least for Word and likely for Excel as well
It is also horrible, hard to understand, and not useful for most tasks you would use source control for.
But is that inherent in any GUI-driven solution, or is it because Microsoft has never prioritized it as a first-class feature? I'd argue it's the latter.
It's certainly not trivial (as Git wasn't trivial), but conceptually, version control strikes me as one of the easier things to translate to a GUI-driven environment. And it's something that should be translated, because version control is a major problem even in domains which have nothing to do with code, e.g. graphic design.
It is not clear to me at all how to build good version control for GUI apps.
Let’s start with diff - If this is a text doc, diff seems easy. Show old and new text next to each other, somehow highlight new and old version. Immediate problem: what if you choose to highlight “new” with purple + underline, but user is already using purple + underline as a part of their text. Do you choose a different color? Or just hope it is “clear from context”?
Now for spreadsheets: a formula has changed; how do you show this? You can expand the cell to show formulas, but those can be huge, so your document can become unreadable. Or maybe just highlight the cell and let the user click on it to see the formula difference - but then you can no longer see the changes at a glance.
What about things with no text representation, like conditional cell format? Are you going to come up with text representation just for display purposes, or are you going to design special text formatting dialog which shows “new” and “old” values?
There are hundreds of questions about GUI diffs which simply do not exist in texts.
I found your response very interesting. It actually makes me think this is doable because "hundreds of questions" doesn't sound too bad. Many projects answer hundreds of questions.
I think these are design issues that you could solve with a bit of work. As an example:
> Immediate problem: what if you choose to highlight “new” with purple + underline, but user is already using purple + underline as a part of their text. Do you choose a different color? Or just hope it is “clear from context”?
Well, the first option that comes to my mind is dimming the full display, except for those rectangular regions which contain changes. I don't know if that's actually the design which makes the most sense, but a designer could mock it up as one of many ideas to see which one makes sense. (I work at a graphic design studio, so I'm fairly familiar with this process.)
TortoiseGit had Excel diff available out of the box the last time I used it. [1]
The official tool would be Spreadsheet Compare [2], which looks like it does a better job than TortoiseGit's Excel diff. But it seems to be only available for specific versions of MS Office, according to the website.
The best of both worlds would be to write a version of diff-xls.js that used Spreadsheet Compare.
My experience with visual programming is that, as the required complexity of the tasks grows even just minimally, it frustrates skilled users without really empowering unskilled ones.
That's because the unskilled stays unskilled for a reason: they can't or won't dedicate the amount of focus necessary to learn a technology to any meaningful degree. They will do the minimum they are (very explicitly) trained to do, and that's it. Basic scripting is not hard; if you cannot (or will not) do it, chances are you cannot (or will not) formalize even slightly-complex tasks in a visual environment either.
And of course, visual environments are a productivity killer for the skilled, who then feel frustrated and eventually drop back into textual modes. This is self-evident.
Sadly, it's a marketing dream, so it keeps popping up in all companies in my niche. "If only we build a no-code graphical UI, accountants will become programmers!" - no they will not. If they were really interested in automating tasks and modelling problems, they would have already learnt VBA, which is very simple - smart accountants do it already. People capable of doing advanced math are capable of programming if they really want. Accountants who won't learn VBA will not meaningfully engage with visual paradigms either, because they are simply not interested in these activities. What really needs to happen is to empower smart accountants to be productive without involving IT. That doesn't need visual programming, it just needs a powerful set of VBA apis unshackled from "enterprise" permission systems.
> That's because the unskilled stays unskilled for a reason: they can't or won't dedicate the amount of focus necessary to learn a technology to any meaningful degree.
The entire history of computers and software betrays this argument.
Every time we transitioned from a blank, text-command driven UI, to a visual one, we dramatically increased the percentage of the general population able to use something.
Think the transition from MS-DOS to Windows, UNIX to OSX, etc. By representing concepts visually, humans find them easier to understand.
In fact, I’d argue, most software innovation today is just making visual UI’s better. Think the single doc mode (photoshop) to artboard mode (sketch, figma). Single purpose (excel, word) to multipurpose (notion). Etc.
Notion is blowing up with 13 year old kids on Tiktok because it’s an easier UI to do spreadsheets and create documents and websites with. Which are fundamentally valuable things. In software, what is valuable, will be made easier over time.
Most B2B Saas tools are just crud apps. Not only is there no reason coding these apps has to stay complicated, but there’s economic incentive in making it easier.
Unfortunately, many people have a vested interest in keeping things complicated. Like people who get paid to create CRUD apps.
As the saying goes, it is difficult to get a man to understand something when his salary depends on his not understanding it.
I agree with some of your points but not all of them, and some examples are downright wrong ("UNIX to OSX" has nothing to do with visualization and everything to do with reliability). We are talking about visual programming, not generic interaction.
One thing that your examples tend to have in common is that they deal with describing states (documents), not modelling processes. Maybe that has something to do with it. Describing actions ends up effectively requiring flowcharts, which is what visual programming typically ends up being, and that gets painful and slow tremendously quickly. That's why visual programming tools do not endure: because once you get skilled enough, you will find them annoyingly constrained.
I can just point out that my niche (accounting) is arguably one of the oldest in computing. Visual programming to address accounting challenges has been tried over, and over, and over, and it failed. every. single. time. It's a massive target that everyone hopes to hit, because the jackpot is massive, but I would argue that it will never happen. Excel keeps winning because it doesn't even try to play that game: if you want to do anything beyond instant calculation, it gives you VBA and off you go. That's what people really want: if they are smart enough to want advanced process modelling, they are smart enough to learn how to write an "if" block.
> it is difficult to get a man to understand something when his salary depends on his not understanding it.
Believe me, there is also a large industry of programmers invested in visual paradigms; and their salary depends directly on not understanding the lessons of the past.
You make a good point re: states vs processes being more difficult. But on the other hand, I work with a team of marketers who set up fairly complex processes for dealing with inbound leads with no code marketing tools, so I don’t think it’s as far off as you think.
Just because your particular niche has certain complexities that make it more resistant to visual representation with current methods, doesn’t mean we throw out the whole no code movement.
After 20+ years of various companies trying to create a good no-code tool for html/css (eg dreamweaver, frontpage, etc), Webflow finally solved it IMO. So it can take a long time.
But I see no reason why this won’t travel further down the software stack eventually. And we shouldn’t be afraid of it like it’s a threat to us.
To be clear, Excel is a prominent and notable example of a visual programming tool. It uses a functional programming paradigm across a two-dimensional data structure and has obviously been remarkably successful in its domain.
I understand your point, and I also see where toyg comes from. Let me try to phrase it another way:
Some tasks are inherently complex. Take data modelling as an example. Properly designing a database requires formal training. There's no way around it, and when you mask database design with a UI, you get the worst of both worlds. The complexity is there, unskilled users won't be able to tackle it, and the UI gets in the way of professionals with formal training.
Naturally, there is a continuum of tasks to be done, and some can be solved with a poorly designed solution. If a user is empowered to generate a subpar solution, it is still better than no solution at all. That's the secret behind Excel being the second-best tool for any data manipulation task (for each task there is a better tool, but Excel gets the job done in all of them).
The error here is selling visual programming tools as the next evolution step, which will phase out text UIs. It's not. It's a different tool, for a different use case, to be used by a different cohort of users.
My point isn’t that no code tools remove complexity. My point is that they make complexity easier to learn and deal with.
For example, Webflow, which is a no code tool for front end development of static sites, doesn’t eliminate any of the complexity of HTML and CSS.
It’s not easy or dumbed down at all. Most users need to watch tons of video tutorials to get started (if they don’t already know html/css).
But what the UI does do, is make it easier to learn those complexities.
It’s not offering a simplified set of dumbed down options. It’s offering a visual way to deal with all the complexities of HTML/CSS.
Personally, I’ve transitioned to building all marketing sites on webflow, not because it removes complexity (I want complexity!) but because dealing with the complexity is faster and easier with the tool.
> It’s offering a visual way to deal with all the complexities of HTML/CSS.
Uhm, a visual HTML designer, a field where we've not had any tool before (/sarcasm). But this is different! And you're totally not going to drop it once you've learnt the complexity and discovered all limitations. But hold on...
> to building all marketing sites
Ah yes. So the really complex stuff you'll still do with text. Got it.
> There’s other no code tools for things like mobile apps and CRUD.
You're joking, right? There are loads. They all suck to various degrees, which is why they are not popular.
> Ultimately, I think there’s a lot of fear
Mine is not fear as much as frustration for having to waste cycles on stuff that will not endure.
Take Power Automate (aka MS Flow): it's very powerful, but not because it's a graphical environment; it's because we get an environment with a lot of APIs available without having to do anything. If MS gave me a blank editor with all those APIs preloaded, I would be 1000000x more productive than I am fighting with this goddamn half-broken flowcharty thingie. Meanwhile, nobody else on my team (all non-devs) wants to even consider looking at it. They'd rather brush up on VBA if they really need to do complex stuff. And it's a shame, because the API wiring is amazing, and when things eventually run it's magical.
Repl.it with all those APIs preloaded would be so much more popular than PA, which I fully expect will eventually die a slow death like Yahoo! Pipes and friends.
> and when you mask database design with a UI, you get the worst of both worlds
Visual tools don't inherently mask complexity. They can do, and many do, which is useful for some use cases. And they can also make things visible that CLI or text based tools generally don't. In that regard, they can support developers to complete higher complexity tasks.
I'm one of those people who gets paid writing B2B CRUD apps, so I certainly have economic incentives to keep CRUD in the hands of professionals. However, back in academia I spent a few years working on two different no-code platforms whose target users were PhD researchers, ie smart people with a professional interest in learning how to leverage no-code platforms.
Both projects followed the exact same trajectory: users ultimately rejected the no-code platforms and sent all of the programming work back to professional developers. Now the professional developers were hamstrung by layers of abstraction designed for non-developers and both parties were miserable.
Excel is the shining example of no-code enabling non-professionals to create powerful tools on their own, but it may be the exception and not the rule. Ultimately the challenges of programming are not the code, but thinking abstractly about data, and that's a real skill that developers have trained and non-developers have not, and I don't think it is solvable with tooling.
Good question, I'd have to say the pay off per effort simply was not there for them. It didn't help that they had the option of simply dictating the work back to us lower status grad students.
So one might say the issues were more largely organizational. But these weren't lazy people, they were always looking at new tools and techniques. And they were struggling to get any payoff with the tools we had. Sadly the focus of the project was not visual programming, so we were not focusing on the users issues.
>Every time we transitioned from a blank, text-command driven UI, to a visual one, we dramatically increased the percentage of the general population able to use something.
Is there an upper bound to the benefits of visual information in comparison to textual information when handling complex tasks so that when a task reaches a certain level of complexity it becomes easier to represent it in text as opposed to visually? The history of human communication would strongly suggest that there is.
Well, one could imagine a purely visual phone book, which contains a photograph of every person in town, as compared to a text-based phone book, which has every person in town listed by name.
The photo-based system would no doubt work great if the town had only a dozen people (all of whom know each other by sight), and probably work acceptably well for up to a few dozen. It's going to suck for a town with a thousand people, though, and be utterly unusable for a city of a hundred thousand.
A recipe takes a minute to read, and then you can easily look up the steps as you need them. Following along with a YouTube video will be way more frustrating and require you to jump between parts of the video, replay parts you missed, etc. I don't think anyone would prefer the YouTube video unless they were complete beginners who don't understand cooking basics.
You learn how to code in a couple of weeks, learning all the non-coding knowledge required to actually write useful programs is what takes years.
For example, you can't write most useful programs without knowing what a pointer/reference is. Doesn't matter if you use visual programming or textual code.
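The distinction matters even in languages that hide pointers behind references. A minimal TypeScript sketch (function names invented for illustration) of the value-vs-reference behavior the comment is pointing at:

```typescript
// Primitives are copied into a function; objects are shared references.
// Both functions below are hypothetical, purely for illustration.
function bumpValue(n: number): number {
  n = n + 1; // mutates only the local copy
  return n;
}

function bumpField(counter: { count: number }): void {
  counter.count += 1; // mutates the object the caller also sees
}

let x = 1;
bumpValue(x);
// x is still 1: the number was passed by value.

const c = { count: 1 };
bumpField(c);
// c.count is now 2: both names point at the same object.
```

A visual tool can hide the syntax, but a user who doesn't grasp this sharing-vs-copying distinction will hit the same bugs either way.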
>The density of information delivered via video (visually) in addition to the word messaging (via audio) is roughly 10-100X that of an all text recipe.
I think you need to show your work on that calculation - because I'm worried it's going to be one of those 'a video has so many bytes of information per frame etc. etc.' claims at which point I would have to wonder why I care what color Julia Child's blouse is when preparing an omelet.
I think this overlooks how much faster it is to develop for command-line interfaces, as well as how widespread command-line software is.
What are the numbers considered when making these assessments? With visual software we got way more adoption, but this doesn't mean the amount of software necessary increased. I'm pretty sure the majority of the software out there is made of command-line tools. The amount of software a developer uses is very large and entirely made of small command-line tools; there are entire operating systems made purely for command-line usage (any server distribution, really).
Finally, there is another giant elephant being overlooked. Software is written in text because, in the hands of a skilled professional, text can be VERY visual. I don't have to prove this; there is an extensive body of poetry to confirm it.
Effectively, a piece of software written by a skilled professional will greatly benefit every reader, and readers of software far outnumber writers of software even in the software development world.
Your example about HTML and CSS is also a very special one. Effectively, HTML and CSS are tools to build UIs, not "generic software". There is a lot of software out there that doesn't involve humans at all. I mean, cron is on the majority of the servers out there, and it has no UI.
That sounds like the trite old university professor who tells the four-fifths of their students who didn't get an A: "You lack the passion, and maybe you are not made for this."
@murukesh_s's post wrote the retort to that. Pointers and Windows had a substantial impact on shaping the reality we live in now.
Putting all the responsibility on the users' shoulders and their "intelligence in math" is way too ego-focused IMO.
Read it again. What I wrote is that accountants are smart. They already know how to deal with numbers and variables, and control flow is trivial to learn in something like VBA. If they don't do it, it's because they don't want to, and no amount of forceful simplicity will change that. In fact, visual programming often frustrates them more than it empowers them.
murukesh_s is obviously invested in this, so he's hardly an impartial observer, whereas I don't really care one way or the other and am just relaying the experience of somebody helping users with these tools.
When I started my last job, I thought I was going to be writing a lot of Python and C. It turned out that the position had a lot more React and TypeScript than I expected, and at first I was annoyed and afraid. I wasn't a frontend developer---or worse, a designer---but I didn't have much of a choice, so I dug in and learned the stack.
At first I resisted every change. What good is VS Code when I have Vim? Why would I learn TypeScript when vanilla JS has "worked" for me for so long? What's a Webpack config?
Once I began using the tools that my coworkers recommended, I started treading water and even swimming with purpose in the ocean of Web UI technology. I still have a lot to learn, but I probably would have kept on avoiding this area if my situation hadn't forced me into it. Letting my guard down and following the trends in my group helped a lot in this case.
The best lessons I learned during that period are that learning can't kill me and using good tools doesn't make you a bad engineer.
I'm glad that you had such a positive experience! After years of web development I'm just burnt out by the tooling.
Layers upon layers, just make debugging so unnecessarily hard.
The tooling is brittle and buggy.
I've seen typescript compiler bugs, webpack segfaults, and whatnot.
I've started to ban typescript and jsx from all future projects, and it's better, but still a nightmare.
I'm on the fence, but painfully, with both feet on the ground. However, it wouldn't take a stiff wind to knock me back onto the plain ol' JS side.
I’m currently “rewriting” a vue.js app for the sole reason that we’ve just lost a senior dev who was the only one who could stomach the thing. We’ve taken on two juniors in his place and there is absolutely no way they would be able to dig into this thing.
The process has been quite enjoyable and we're just about at feature parity at 1/10 the LOC. And the juniors are quite keen on picking up TypeScript and lots of other useful things along the way.
Had they just been dumped into the vue pool, things would have turned out much differently.
I await the day a few months from now when they “discover” this new thing called vue and want to rewrite the entire thing!
As a mainly frontend developer, I agree. I've spent more time configuring tooling than writing code in this new project I'm starting. I don't want to write plain JS, but the top frameworks have strayed so far from basic JS that it's getting a bit ridiculous.
Svelte appears to get rid of some of the boilerplate and verbosity stuff you find in other frameworks, though it's still a pretty magic framework. Looking forward to trying out SvelteKit.
I've seen this take a lot lately, and frankly most of these claims are outright lies. Your post may be one of them.
The top three modern frameworks all use CLI tools that do the config for you. Most are astonishingly simple to use. There are rare times when you have to venture into custom webpack configuration, but they are few and far between.
You say the top frameworks have strayed far from basic JS, but that is not true. Most of them are 90% vanilla JS, with the exception of Angular. Hell, even React components are simple JSX transforms: a component transforms into React.createElement(<name>, <props>, <markup>).
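For illustration, a toy sketch of the shape of that transform — not React's real implementation, just an invented createElement that shows what the compiler output boils down to:

```typescript
// Toy model of what a JSX transform emits. This is NOT React itself;
// the types and the createElement body are invented for this sketch.
type Props = Record<string, unknown> | null;

interface ToyElement {
  type: string;
  props: Props;
  children: (ToyElement | string)[];
}

function createElement(
  type: string,
  props: Props,
  ...children: (ToyElement | string)[]
): ToyElement {
  return { type, props, children };
}

// <button class="primary">Save</button> compiles to roughly:
const el = createElement("button", { class: "primary" }, "Save");
```

The point being: once you see the transform as a plain function call, JSX stops looking like a departure from vanilla JS.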
Idk what projects you are working on that don't require any configuration.
I just started a new Vue 3 project with typescript, Vite, VuePrime and some other dependencies, and there were a lot of undocumented steps and annoying issues to get everything working. JS-shims, beta version browser extensions, colliding ESLint rules, buggy dependencies that get compiled into broken JavaScript, etc.
How do you mean? If there's a repeated pattern that is handwritten all over the place without people realizing it (or it not making sense within the local context to build something more general), centralizing that pattern certainly destroys complexity, does it not? The pattern existed before and after, but the centralization means you have only one instance of it.
That wasn't complexity, it was repetition or verbosity. It's tedious to manage rather than complex.
In making it DRY you have introduced a dependency for all of the usages of that snippet of code, and made it harder to have individual uses deviate if they need to.
Of course that might be exactly what you want! It's just good to be aware that code reuse is adding complexity by way of adding a new system to manage.
A little function, no big deal. But if you find yourself writing models and extending classes just to save yourself a couple of repetitions you may have jumped off the deep end!
You now have to integrate that pattern/module, and learn/work with the tooling to integrate the pattern/module, which adds new complexity and constraints on top of the now-centralized hidden complexity.
On the whole, it’s worth it - building on the shoulders of giants lets us achieve great things with what used to require out-of-reach amount of resources.
But as a result our work has shifted towards more integration and tooling (everything from node/npm to cloud services, orchestration, containers, and ML/AI) and total complexity keeps marching on upwards.
The problem is that Typescript seems amazing when you're first starting out, and is especially appealing to devs coming from strongly typed languages - but, it's a productivity drag almost immediately (for seasoned JS devs), and as the software gets more complex you either get more and more type spaghetti or devs who spend days figuring out just how they're going to make that one type elegant.
All this to maybe catch one or two bugs, since the boogeyman of accidental type abuse rarely makes an appearance.
Some of the sacrifices made to turn it into a superset of vanilla JS come back to bite it as well. I think banning it from projects is a very wise move, but it's the kind of wisdom that's counter-intuitive and requires more of a business sense of things.
I think you're significantly underestimating the bugs that could be trivially caught with types. For example, Airbnb stated a while ago that 38% of their bugs could have been prevented with TypeScript[1]. Types aren't the solution to all bugs — this is why we have strong test cases as well — but they bring a lot of benefit, especially if done with diligence, not just to prevent bugs but also to aid in understanding the system as a whole (seeing the types of an object can make it much easier to understand what data is being passed around).
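A small sketch of the kind of bug in question (the function name is invented): plain JS happily coerces strings in the arithmetic below, while tsc rejects the bad call before it ships:

```typescript
// A classic JS bug class: a string sneaks into arithmetic and "+"
// silently concatenates instead of adding. The annotation makes the
// compiler reject the bad call.
function total(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

const ok = total([10, 20, 30]); // 60

// total(["10", "20", "30"]);
// ^ compile error under TypeScript; in untyped JS the reduce would
//   concatenate and return the string "0102030".
```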
Postmortems like this are highly suspicious. There are many, many questions such as whether these bugs could've been caught simply via linting, and whether whoever did this research engaged in p-hacking.
I could not find a single paper - the only reference is a slide in a presentation that may well have been pulled out of someone's ass.
Anyway, I do concede that typing has its value, but in a large project, the cons ironically outweigh the pros (for TS specifically). I'm sure other JS typing systems are actually better at that, especially the ones that aren't trying to be JS supersets.
I don’t know about your experience but TypeScript has saved my team an enormous amount of time and resulted in the near complete elimination of showstopping bugs on deploy at my company. We’ve had maybe one fire in two years of rapid growth and a large part of that is thanks to TypeScript.
I hate to be the guy who says "back in my day"... but back in the '90s there was a no-code/low-code tool by Borland (anyone remember them?) called Delphi. It was awesome: I used it on a few front-end projects, and its active data-source tools made laying out complex UIs so painless.
Of course this was back before Rest APIs and the internet for most part. But it kicked some serious ass.
Speaking as a once professional Delphi programmer, I can assure you that Delphi programs have lots of textual code in them. The IDE generates empty functions for event handlers, but you have to write textual code to do the actual handling.
You could theoretically create a database editor without any code. Drop a database connection component, connect a datasource to it (which defines table name), create a "scrollable box" and drop a text edit for every field, drop a few buttons and give them special functions like "save" and "cancel", and you should have an approximation of Microsoft Access form, but compileable to standalone binary.
I don't think anyone ever made projects purely in GUI though, there is always a need for some custom code to be written.
Unless I have stumbled across something different it appears as if this is still being developed. I have no idea if what I have linked is even remotely close to what you remember it being.
This was my first exposure to development when I got my first PC. Delphi was amazing for creating user interfaces, as those were the days when 99% of Windows apps looked the same (except Winamp). I even managed to create some useful programs for myself. Shame I dropped it and moved on to 3D modeling for a bit, and only returned to development more than a decade later. Who knows what would have happened if I hadn't dropped it then...
Visual BASIC is even older than Delphi and was the benchmark for low/no code app development. We called them RAD tools, "rapid application development". If you really wanna go back, hypercard on Macintosh was kinda the original that inspired them all (and even inspired the web we have today).
There's also Clarion, whose platform is intended to generate code for you; you add your own custom programming and business logic if needed, for a basic B2B CRUD desktop app.
NetBeans, sorry. Clearly, I have not worked with these tools in ages either. Basically, NetBeans allows you to draw GUI apps, and the IDE will insert a bunch of boilerplate Java code that will create those GUI interfaces -- similar to Delphi.
Pretty much all of the examples you used of codeless tooling (drag-'n'-drop programming, visual state machines and graph editors, database FK diagrams, etc.) still require an undergrad-level CS education, or considerable experience, to understand and use correctly. To that extent they're just time-saving tools that use a graphical UI to hand-wave away tedium in software development (not to denigrate them: time-saving tools and eliminating tedium are important!). But I argue it's a disservice to everyone to let the uninitiated use these easy-to-use tools for production systems without the necessary formal introduction (of course, they're still a fantastic didactic aid).
A great example of GUI winning over code or the command-line is a UI to allow people to arbitrarily reorder items: a drag-the-grab-handle UI is a hundred-fold better than manually entering new sort-orders for each item in a command-line array or other UI. This is, at least, the very kinds of things that visual tooling should be used for by everyone from Scratch-lang kids to SV elites.
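The code behind such a drag-handle UI can stay equally simple — the whole point is that the user never touches sort indexes. A hypothetical helper, sketched in TypeScript:

```typescript
// The operation a drag-the-grab-handle UI performs under the hood:
// move one item, let everything else shift. The helper name is
// invented for this sketch.
function reorder<T>(items: readonly T[], from: number, to: number): T[] {
  const next = items.slice();          // don't mutate the caller's array
  const [moved] = next.splice(from, 1); // remove the dragged item
  next.splice(to, 0, moved);            // insert it at the drop position
  return next;
}

reorder(["a", "b", "c", "d"], 0, 2); // ["b", "c", "a", "d"]
```

Compare that one call against asking a user to renumber every item's sort order by hand.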
Another example you gave is "visual interface editing" - but while you acknowledge the problems that FrontPage and Dreamweaver had decades ago, WYSIWYG is still dead, having retreated back to its origins in print-media design. The old VB6-era "drag controls from the toolbox onto the form surface" is dead because one cannot easily express any kind of non-trivial layout logic using that approach: things like CSS flexbox, CSS grid (with auto-placement and auto-flow) and @media queries absolutely require one to grok CSS's visual formatting model, which in turn implies someone will already be more expressive writing CSS by hand than using a mouse to choose from some predefined options. (I appreciate that Xcode's Interface Builder and constraints model is very nice and shows how it can be done right, but explaining how the constraints system works and how it should be used ends up with me having to redo the work of the designer on my team. Ugh.)
Another issue with visual tooling is that unless the tooling vendor happens to control the spec it's generating code for, the tooling will invariably and inevitably fall behind and have limited value for more advanced users (e.g. TopStyle) - or, if the tooling is meant to abstract away multiple different backends, then it is doomed to support only a common denominator and won't be taken seriously by too many people (e.g. MS Word's Save-for-Web). We've seen both of these happen with visual tooling for the web, especially with Chrome's rapid, frequent releases and the ever-changing spec for CSS. Web design is now even more inaccessible than it was in the late 1990s (and is no longer the butt of jokes: turns out the CEO's 12-year-old nephew can't apply CSS selector specificity rules - which, I note, is another angle of CSS that visual tooling just can't accommodate).
...so my point is, in light of what I said above, how much consideration should we really give to "no code" tools in each category (e.g. database/E-R, drag'n'drop block programming, GUI designers, etc)?
I've had a lot of frustrations with "APIs are eating the world" over the last two years.
1. "No code" interacts badly with "infrastructure as code." Instead of a client library, you get a web admin console. If you want your configurations to be reproducible, durable, and auditable, you now have to build a client yourself or use a third-party client. Case in point: Snowflake directing users to a buggy third-party Terraform provider built by some random organization that's always several versions behind.
2. "No code" is much harder to test. You have to deploy changes to see them. You don't have the tools to mock them out. There is no local testing. At best, you can have a test environment. But to do end-to-end testing within it, you need to rebuild basically a mirror image of prod, since you can't mock any API boundaries nearly as easily anymore. As a result, a lot of "testing" ends up being done by pushing to prod and trying to roll back when needed.
3. "No code" often means surprise upgrades and feature changes that are out of your control. Some vendors (e.g., Databricks) mercifully display a toast in the GUI prompting you to update. Many just post some release notes somewhere and call it a day.
So, using Unreal Engine you can literally make an entire 3D game with full next-gen graphics without writing a line of code.
Now, shipping it to console will most definitely require some C++ and so would Steam I think.
But "Blueprints" are a visual programming language that seriously kicks ass. I believe the end product even compiles down to C++ classes so you don't lose much performance in release mode.
The only place where it makes no sense is algorithms like A* pathfinding, which are far more suited to being written in code than as a gazillion nodes linked to each other in a screen of spaghetti.
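For comparison, here is roughly what A* looks like as text — a minimal, unoptimized TypeScript sketch on a 2D grid (linear-scan priority queue, Manhattan heuristic), not Unreal's implementation:

```typescript
// Minimal A* on a 2D grid. 0 = walkable, 1 = wall.
// Returns the shortest path length in steps, or -1 if unreachable.
type Cell = [number, number];

function aStar(grid: number[][], start: Cell, goal: Cell): number {
  const h = ([r, c]: Cell) => Math.abs(r - goal[0]) + Math.abs(c - goal[1]);
  const key = ([r, c]: Cell) => `${r},${c}`;
  const open: { cell: Cell; g: number; f: number }[] = [
    { cell: start, g: 0, f: h(start) },
  ];
  const gScore = new Map<string, number>([[key(start), 0]]);

  while (open.length > 0) {
    // Pop the node with the lowest f = g + h (linear scan keeps it short).
    open.sort((a, b) => a.f - b.f);
    const { cell, g } = open.shift()!;
    if (cell[0] === goal[0] && cell[1] === goal[1]) return g;

    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next: Cell = [cell[0] + dr, cell[1] + dc];
      const [r, c] = next;
      if (r < 0 || c < 0 || r >= grid.length || c >= grid[0].length) continue;
      if (grid[r][c] === 1) continue;
      const tentative = g + 1;
      if (tentative < (gScore.get(key(next)) ?? Infinity)) {
        gScore.set(key(next), tentative);
        open.push({ cell: next, g: tentative, f: tentative + h(next) });
      }
    }
  }
  return -1; // goal unreachable
}
```

Thirty-odd lines of text; as Blueprint nodes, each comparison, loop, and map lookup becomes its own wired-up box.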
Just yesterday marked the 1-year anniversary of Dreams, an entirely visual development tool for the PS4.
The concept is similar to the UE4's blueprints, but even more visual and with a much more "general" language approach.
(it also combines 3D modeling and music creation in the same editor).
While I think code is faster and more direct, it also requires a quite specific type of person to "enjoy" it. Most people don't.
Humans are mostly visual creatures, so that's a great way to approach and create a piece of software, without having to learn an entirely alien dictionary and grammar just to tinker around.
I built a small game with blueprints and it was truly amazing.
I don't use them anymore because I am just so embedded in thinking of things in code, it became tedious to constantly convert code thoughts into blueprints. But I was definitely impressed!
I would probably use it for projects that are mostly art and only a little bit of logic. A walking simulator style game for example. But if I am building out bigger systems, I feel more comfortable in code.
I do work in the Salesforce ecosystem, and they really push "citizen developers" and "administrators can write apps with low-code tools" and they push their low-code tool, Flow, as much as they can.
I hate the clicky-draggy-droppy so much that I'm learning Racket to make a DSL that will write the XML that Flow code becomes when you download it locally.
I definitely feel that. The number of weird database issues that come up with flows (SOQL limits that non-developers don't think of) is mind numbing.
My takeaway is that you can't mix custom (Apex) code with flows without a lot of effort. At that point, giving people a database and an Excel license seems the more honest route.
When I started with Salesforce 6 years ago, I looked at Flow, thought 'screw this', and went on to learn Apex instead. Even after all these years I've barely touched it.
Mainly a matter of "all the cool kids are using Racket for DSLs", more seriously the web sites, documentation, YouTube vids, Q&A's etc. say that one of Racket's specialties is writing DSLs and languages. I've used Lisps in the past and I can see how they can be used to turn their internal representation, S-expressions, into XML.
In my case it doesn't need to be performant. I'm turning one file format - my language, starting with S-expressions, then a more palatable syntax - into the XML that will be uploaded to Salesforce.
Isn’t this all on the same spectrum as all programming? Every programmer is using someone else’s code at all times. Even if you are writing in assembly, you are still relying on a compiler to turn your written words into machine code.
It's sort of a spectrum yes... (or, really, there are infinite spectrums like the one you are imagining,) But there are also discontinuities on all of those spectrums.
The ergonomics of a declarative API are not continuous with the ergonomics of a procedural one. There's just a gap there which, when you cross it, you lose a bunch of things, and get a bunch of other things for free.
There's also a huge discontinuity when moving from code to point-n-click, which is what the article is about. Many of the affordances of code cannot be replicated in a GUI, with ANY amount of effort. And vice versa.
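One way to see that gap, sketched in TypeScript with invented names: the same validation expressed declaratively (as data a GUI tool could render and edit) and procedurally (as code a tool cannot introspect):

```typescript
// Hypothetical example: both names and the Rule schema are invented.
type Rule = { field: string; minLength: number };

// Declarative: the rules are plain data. A point-n-click tool could
// list, edit, and serialize these without understanding any code.
const rules: Rule[] = [
  { field: "username", minLength: 3 },
  { field: "password", minLength: 8 },
];

function checkDeclarative(form: Record<string, string>): string[] {
  return rules
    .filter((r) => (form[r.field] ?? "").length < r.minLength)
    .map((r) => r.field);
}

// Procedural: strictly more expressive (cross-field logic is trivial
// to add), but opaque to any tool that only understands the schema.
function checkProcedural(form: Record<string, string>): string[] {
  const bad: string[] = [];
  if ((form.username ?? "").length < 3) bad.push("username");
  if ((form.password ?? "").length < 8) bad.push("password");
  if (form.password === form.username) bad.push("password"); // cross-field
  return bad;
}
```

Crossing from the first form to the second is the discontinuity: you gain expressiveness and lose inspectability in one jump, not gradually.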
Still not sure what distinguishes "no code" from pretty much every other software. Isn't Excel or Blogger "no code" tools? Why are these newish GUI based applications in this "new" category?
Blogger doesn't use code, but there's no code-based blogging equivalent that it replaces. No-code tools are specifically meant to fill the role of a programming language in application development; e.g., instead of writing your web application in Python, you use a GUI no-code tool to build it.
What you want is "low code". Or less rhymey but more accurate: "minimal code".
Historically visual programming tools did not create this. In fact they generated very bloated code. So visual programming tools got a bad rap.
But there's no inherent reason why they need to generate bloated code. And if well done they can indeed generate minimal code (and program synthesis advancements will help a lot here).
We worked on this from ~2010 at Nudge and we eventually solved it with the discovery of Tree Notation.
I don't get it. The descriptions are pretty wild and make little sense to me:
> Tree Notation is like a 2-dimensional binary and could be used at the lowest levels of computing.
But in practice, we have:
- Tree Notation, both a data structure and a textual data format. The data structure has only one datatype: a string value plus a list of children. The textual format uses significant whitespace both at the beginning of the line and between the values. It seems to really hate punctuation for some reason; there are no quoting mechanisms or even quotes.
- Tree Language, a parser/lexer combo which generates an AST in Tree Notation format. It allows embedding Javascript in the generated output.
- A bunch of examples made using the parser above -- there is some plotting tool for web data, an HTML generator, and a few toy DSLs?
Overall, I don't see anything particularly unusual about it. I see no advantage of Tree Notation data structure compared to usual string/int/list/dict core types most languages have. I see no advantage of Tree Notation textual format compared to others -- while it may be OK for very simple cases, I have a feeling that the lack of quoting and inability to wrap lines would get annoying fast. The Tree Language grammar does not seem very elegant either, it certainly seems harder to read than BNF variants.
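For what it's worth, the core format as described here is simple enough that a toy parser fits in a few lines. A TypeScript sketch based only on the description above (not the actual jtree implementation):

```typescript
// Minimal indentation-tree parser: every node is a line's text plus a
// list of children, with nesting given by leading spaces. A sketch of
// the format as described, not the official library.
interface TNode {
  line: string;
  children: TNode[];
}

function parseTree(src: string): TNode[] {
  const roots: TNode[] = [];
  const stack: { depth: number; node: TNode }[] = [];
  for (const raw of src.split("\n")) {
    if (raw.trim() === "") continue;
    const depth = raw.length - raw.trimStart().length;
    const node: TNode = { line: raw.trim(), children: [] };
    // Pop back to this line's parent (the nearest shallower line).
    while (stack.length > 0 && stack[stack.length - 1].depth >= depth) {
      stack.pop();
    }
    if (stack.length === 0) roots.push(node);
    else stack[stack.length - 1].node.children.push(node);
    stack.push({ depth, node });
  }
  return roots;
}
```

The ease of writing this parser is real; whether it amounts to more than "S-expressions with significant whitespace" is the question being debated here.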
Your observations about the specifics are fairly good.
The big idea is about dimensionality. All of the TIOBE languages are 1-dimensional: parsed as a linear sequence of tokens, transformed to an AST, compiled.
I'm claiming that these are chasmically inferior to 2- and 3-dimensional languages. Tree Notation is one very useful implementation of this bigger idea. In Tree Notation the shape of your source code is immutable during parsing. The AST and CST have the same shape. More abstract nodes stack up the z-axis like you would stack legos. Tree programs are isomorphic to geometric trees.
Human brains parse text in parallel, in 2+ dimensions, and rely on geometry for meaning (as opposed to position in a linear sequence). Computers can too. And my bet is that they will: https://longbets.org/793/
It looks like a toy, but this will change everything.
> Parsed as a linear sequence of tokens, transformed to an AST, compiled.
So are the Tree Notation programs. They are parsed as a linear sequence of tokens -- when I look at https://jtree.treenotation.org/designer/ , the inputs are a 1-D sequence of characters separated by spaces and newlines. Those are parsed one at a time, and make a tree structure.
If the "2 dimensionality" refers to the fact that you have lines (Y dimension) and each line is split into words (X dimension), then this is not very new either -- this is how TCL and Unix shells view the world.
> I'm claiming that these are chasmically inferior to 2 and 3 dimensional languages.
I see no evidence of it. What can Tree Notation do that lisp/scheme can't?
> In Tree Notation the shape of your source code is immutable during parsing.
Are you talking about how characters on the screen correspond to nodes of your AST? If yes, then this seems extremely limiting. Each screen line is not that long -- probably 120 characters is a reasonable limit. What if you want a longer payload? What if the output of your program has nodes with longer payloads which no longer fit on the screen?
Or is the idea that you take a complex tree language, like s-expr, and _restrict_ it so it is representable as punctuation-free text? I guess this could have educational value, but I fail to see how such restrictions make the language "chasmically superior".
> The AST and CST have the same shape.
Like Lisp programs?
> Tree programs are isomorphic to geometric trees.
So are Lisp programs.
> Human brains parse text in parallel, in 2+-dimensions, and rely on geometry for meaning
Are you talking about character recognition or text parsing? The former is in parallel, but the latter is not. You can test it yourself trivially -- press the Zoom button in your browser. The font size changes, all the words move around, but you can still parse the text just fine.
> Computers can to. And my bet is that they will.
They already do! All the image recognition tasks (including character recognition) look at all the letters in parallel. You might have heard about this technology, it is called "neural networks". Moreover, they made special chips which work on a huge _2-dimensional_ matrix of numbers all at the same time!
> but this will change everything. Or I could be wrong.
I am not sure what is there to change, given that every part has been successfully used for many tens of years.
Today's machines impose a way of doing things. But in the future there will be a new type of machine. In the interim we can harvest a lot of benefits from Tree Languages by working within the constraints created by today's register model of computing.
> on the screen
The vast majority of research and experimentation with Tree Notation does not happen on the screen at all. Experiments are done in higher dimensions in different mediums.
> So are Lisp programs.
No they're not. There is a proof out there somewhere from 2017 proving why they are not.
The 2-D layout is not an "afterthought". It is not a "beautified" version of Lisp. The 2-D layout is an essential ingredient of Tree Notation. You can change the rules of Lisp/SExp to achieve the same 2-D layout, and what you end up with is Tree Notation! So while Tree Notation can be thought of as one way to write S-expressions, it's a very special way that has a 2-D geometric isomorphism, which Lisp does not have. And that goes on to make a huge difference.
> Today's machines impose a way of doing things. But in the future there will be a new type of machine. [...] working within the constraints created by today's register model of computing.
Which "way of doing things" would that be, and how exactly is this constraining us? Sure, most physical CPUs do use registers, but a lot of languages define their own machine model which are not register-based. We have stack-based machines, vector-based machines, matrix-based machines, the functional languagues with tree evaluation models, declarative programming, and lots more.
> No they're not. There is a proof out there somewhere from 2017 proving why they are not.
faq> It is largely accurate to say Tree Notation is S-Expressions without parenthesis. But this makes them very different! Tree Notation gives you fewer chances to make errors, easier program concatenation and ad hoc parser writing, easier program synthesis, easier visual programming, easier code analysis, and more.
This looks like a restricted subset of lisp's s-expressions. And the explanation only talks about incremental improvements over s-expr, not radically new capabilities.
Is there a better document I have missed?
> it's a very special way that has a 2-D geometric isomorphism ... that goes on to make a huge difference.
Are we talking about geometric isomorphism in the mathematical sense, as in graph embedding on a plane? If yes, I am very interested to see what kind of useful results one can get out of it. While graph embedding is useful for certain problems, it seems generally inapplicable to common computing tasks.
Or even better, is there a sample code which demonstrates the advantages of isomorphism? The biggest sample code I found was "Grammar"[0] and it seems to be sadly one-dimensional...
This. Imagine you didn't have to transform a program into a thousand permutations to execute on 1D registers. Imagine just loading the program into 2D/3D registers and having it compute the result in a single cycle.
AFAIK no one has ever built these higher-dimensional registers before (I've done some searching, asked Vinod once, and a number of other CPU folks, but no one gets it), but I'm very confident (10%) they will work and be better than quantum. I don't have the money to fire at a rocket pace, so this will take time. I've pitched it to DARPA, and do expect them at some point to take it up.
> I searched for this proof
Sorry, I thought I made a LaTeX version of this somewhere. I found some old notes that I uploaded:
> Or even better, is there a sample code which demonstrates the advantages of isomorphism?
One of the more recent ones that starts to hint at the benefits of a 2-D syntax and the isomorphism is this video: ("If Spreadsheets and Programming Languages had a baby")
> Imagine just loading the program into 2D/3D registers and having it compute the result in a single cycle.
You mean like the SIMD extensions in Intel CPUs, which can operate on 64 bytes at a time, potentially doing 64 operations in a single cycle?
Or FPGAs, which can have an arbitrary wide registers -- I have seen a design where entire video scanline, 600+ points, is processed in one cycle?
Or video cards, which have multiple parallel executors -- a high-end NVIDIA card can process 64x64 pixels in parallel, with each pixel getting its own little execution core.
Or maybe the famous Cray-1 machine [0], which had vector instructions, such as a single instruction "a(1..1000000) = addv b(1..1000000), c(1..1000000)"?
Or maybe a Connection Machine? It sounds as close to a 3-D machine as you can make it -- it even had a 3-D shape with bits arranged in a cube [1]:
> The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes.
You say that "no one has ever built these higher dimensional registers" -- but as you can see, there are plenty of examples one can come up with. It is entirely possible that your idea is different from all of these, but it is not clear how. I think your website would benefit greatly from a comparison with those other computing devices, as well as more details on what this "tree architecture" looks like -- how many ALUs, memory blocks, instruction decoders, etc. you want to have, and how they are interconnected.
It doesn’t. I think the term no/low code is just often used, including in the OP, to speak about the visual subset of the problem. I agree with you that higher-level text abstractions are low code as well.
If they are going to write non-trivial code, they had better be developers.
Corollary: all trivial code expands, little by little, into a non-trivial beast. Years later, developers dealing with the mess are going to ask why things are the way they are, and they are going to be told "this was started by people with little coding experience, it was supposed to be a throwaway minor thing..."
What you want is "low code". Or less rhymey but more accurate: "minimal code".
Kind of. What I really want is something that can round-trip to a textual representation and is heavily visual, with the possibility of adding custom actions. I guess a BizTalk Orchestration built from the start to be a programming tool. I often think that in my retirement I'll take a crack at doing Excel as a graph of tables and components.
Haha, I checked out Tree Notation in more detail just now, and I actually absolutely agree with one of its underlying hypotheses: that whitespace is essential for languages.
You take it to the extreme by focusing on one particularly easy-to-parse use of whitespace, one that directly corresponds to trees whose inner nodes can hold data too.
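To make that concrete, here is a minimal sketch (my own illustration, not the actual Tree Notation parser) of how such an indentation-only syntax can be parsed into a tree with a simple stack:

```python
# Hypothetical sketch: indentation depth alone determines tree structure.
# Each node is a (label, children) pair; "root" is a synthetic top node.
def parse_indent_tree(text):
    root = ("root", [])
    stack = [(-1, root)]  # (depth, node) pairs
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip(" "))
        node = (line.strip(), [])
        # Pop back to this line's parent, then attach.
        while stack and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((depth, node))
    return root

tree = parse_indent_tree("a\n b\n c\nd")
# -> ("root", [("a", [("b", []), ("c", [])]), ("d", [])])
```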
My work is very related; I was also inspired by the fact that whitespace is essential. I have developed a grammar semantics that is more expressive than both context-free grammars and parsing expression grammars, and that unifies lexing and parsing by seeing them as two sides of the same coin (namely, terminal symbols and nonterminal symbols). It is basically a generalisation of Earley parsing, and some of the puzzle pieces fell into place for me a few months ago while I was writing a grammar for a Markdown-like language. One problem is that it possibly inherits the performance problems of Earley parsing, but I am thinking this can be fixed by having the top level(s) in a simple tree notation like yours that can be parsed in parallel.
You can also write grammars that accept any input, so you can check for errors just by checking whether certain grammar symbols corresponding to errors are present in the parse tree; the parsing itself will never fail. But I don't have an automatic way of checking whether a particular grammar has that property.
I also agree with you in that trees are really important :-)
I am working on a tool and libraries for working with my grammars, and I've decided that all output will be standardised as trees in a simple manner, by classifying grammar symbols along two axes: a) flat vs. nested, and b) auxiliary vs. visible. Based on that division, parse trees are created automatically. This makes things much simpler for now, and (I hope) allows for more composable and modular grammar design.
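As a hypothetical sketch of that two-axis classification (the names and representation here are my own illustration, not the commenter's actual tool): auxiliary symbols are dropped from the output tree, flat symbols become leaves, and nested visible symbols become inner nodes:

```python
# Illustrative only: classify grammar symbols along two axes and
# build the output tree from that classification.
FLAT, NESTED = "flat", "nested"
AUX, VISIBLE = "aux", "visible"

classification = {
    "ws":   (FLAT, AUX),       # whitespace: dropped from output
    "word": (FLAT, VISIBLE),   # token: kept as a leaf
    "list": (NESTED, VISIBLE), # structure: becomes an inner node
}

def build_tree(symbol, payload):
    shape, visibility = classification[symbol]
    if visibility == AUX:
        return None                      # auxiliary: omitted entirely
    if shape == FLAT:
        return (symbol, payload)         # flat: leaf holding its text
    kids = [t for t in (build_tree(s, p) for s, p in payload) if t]
    return (symbol, kids)                # nested: inner node

tree = build_tree("list", [("word", "hello"), ("ws", " "), ("word", "world")])
# -> ("list", [("word", "hello"), ("word", "world")])
```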
I feel like I'm effectively writing a "no code" tool at the moment; it's a GUI whose user inputs will eventually be output as a configuration file that is fed into our core system (mobile network signal processing, in the broadest sense of the term).
In theory, end-users and operators could work in the configuration files directly themselves (it's a combination of XML templates and files, and .ini-like config files). The GUI is there to prevent footguns and add convenience.
Anyway, it's not very visual -- it's a lot of forms and pop-up segments with more forms -- but still, one could argue it's a no-code environment. I should look into the area.
Mind you, I'm working on a rebuild and at the current rate it'll take me years to even start thinking of doing anything more advanced than replacing the existing forms.
[personal rant] I'm not sure I want to spend the next years at this company either; they're doing well enough financially, but it's a slow burner. They're hiring, but the wages are stagnant and, dare I say, below market rate. I'll have to review in 6 months or at the end of the year.
But I can see the lure for the masses, driven by the startup subculture. The truth is that if you can master something like Bubble, the transition to actual coding would be seamless and often very liberating.
I could only stand Bubble for a day or two before it became too restrictive, and yet overly complex, for doing very basic stuff. Bubble suffers from the same issues as other no-code solutions but also has a subpar UI.
Yeah, I agree -- there's a lot of bloat as well. That's why I said the transition of Bubble tinkerers to actual coding with any high-level language and framework would be very liberating. Even a beginner could do a heck of a lot in something like RoR or Django, without even mastering the respective languages.
There is a place for code, no code, and everything in between. Code is more expressive. No code is generally faster (within the limited domain for which it is intended).
The big mistake comes when no code tools are overhyped, as happens so often. No code tools are only ever going to be applicable to relatively narrow domains. Trying to make them completely general, like a general coding language, is invariably a bad idea.
My https://www.easydatatransform.com/ tool is primarily a visual drag and drop tool for transforming data (join, filter, pivot etc). But one of the 50 transforms is a Javascript transform, so you can drop down and do some coding if you have a really obscure data transformation. That seems like the best of both worlds to me.
Sure, and Oracle Forms, Designer and APEX, HyperCard, VB GUIs, Django admin forms, Rails default forms, etc. It's all code eventually, and I saw some horrendous stuff in Oracle Forms in the 90s.
SSR for frontend libraries was the cue for the current cycle, to me. The description was "...and then some smart people figured out you could render the code on the server first!" So here we are, rendering UI components with server-side languages again. But this time, it's Javascript!
One thing I never saw coming was that frontend technology would eventually take backend developers jobs, and that backend developers would be replaced by a mix of frontend devs and human AWS configurators.