Hacker News | alsiola's comments

Used Astro for a pro bono project. Found it fantastic: it provides solutions for the hard parts and gets out of the way for the easy parts. The documentation is well written, but I find I don't need it much because it mostly works how I would expect.


Been here seven years. Next is hot garbage and you couldn't pay me enough to work with it.


> the opposite of being under appreciated

> despite there being alternatives that are better in almost every way.

This right here is the under appreciation. The new way to signal to others on forums that you are a really really great dev seems to be to bring up how much better some bizarro templating engine that abuses a niche JS language feature is.


React has fundamental problems that lead to both:

- horrible performance characteristics

- needless complexity

These are not tradeoffs, these are bugs. We don't gain anything from them.

That's why React introduced a compiler: the first problem, performance, is a big deal. But it's not a problem with your code, it's a React problem. Other tools simply do not have that bug, which is why the exact same React code can be compiled and run much faster.


You haven't described those "fundamental problems" that you call bugs, but I think these are irrelevant for me from a ClojureScript point of view. As an example, immutable data structures mean that equality comparisons are cheap and I can often avoid re-computing and re-rendering huge parts of the tree.
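To make the "equality comparisons are cheap" point concrete, here's a minimal sketch (plain JS, illustrative data, not ClojureScript or React internals): immutable updates preserve the references of untouched subtrees, so a renderer can skip them with a single `===` check.

```javascript
// Immutable update: copy only the path that changed, reuse the rest.
const state = { items: [1, 2, 3], user: { name: "alice" } };
const next = { ...state, items: [...state.items, 4] }; // `user` untouched

// Cheap identity checks tell us which subtrees can skip re-rendering.
console.log(next.user === state.user);   // true  -> unchanged, skip
console.log(next.items === state.items); // false -> changed, re-render
```

This is the same trick `React.memo` and shallow-compare rely on; persistent data structures just make it the default.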

More importantly, I don't have a React performance problem. I don't really need "much faster".


> More importantly, I don't have a React performance problem. I don't really need "much faster".

Sure, but ultimately you're using a library with performance bugs that lead to orders of magnitude more rendering than necessary.

If you don't mind the buggy software, that's fine. It's still buggy.


I'm curious what makes a template language bizarro, and why JSX is or is not bizarro?


JSX is just sugar around JavaScript, and interops nicely with it. I'm okay with that. The more I write JSX, the better I become at the programming language I'm using. Concepts and patterns in JS can be adopted in my components.

If I learn Vue's templating language, then I'm spending my time learning a system with no wider applicability, a much narrower tooling space, that doesn't utilise my previous or future experience from JS. That's not a good calculus for me.
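To make the "sugar around JavaScript" point concrete, here's a self-contained sketch of the classic JSX transform (modern React uses an automatic `jsx()` runtime, but the idea is identical; the stub `createElement` and names here are illustrative):

```javascript
// A stub standing in for React.createElement, to show that JSX
// desugars to a plain function call - ordinary JS rules apply throughout.
const createElement = (type, props, ...children) => ({ type, props, children });

const handleSubmit = (event) => event.preventDefault();

// <form onSubmit={handleSubmit}>...</form> desugars to roughly:
const element = createElement("form", { onSubmit: handleSubmit }, "...");

console.log(element.type);                  // "form"
console.log(typeof element.props.onSubmit); // "function"
```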


I don't understand how JSX is syntax sugar in a way that Vue templates aren't. Neither is valid JavaScript, but they both compile to it.


A concrete example, then: a common need is preventing a form's default submission behaviour.

Vanilla

  <script>
    const form = document.getElementById("form");
    form.addEventListener("submit", event => event.preventDefault())
  </script>
  <form id="form">...</form>
React

  <form onSubmit={event => event.preventDefault()}>...</form>
Vue

  <form @submit.prevent="onSubmit">...</form>
React's API has guided the developer to learn about events. If they move outside the React ecosystem, they have transferable knowledge. As someone unfamiliar with React but used to the DOM, you're surely comfortable here. Yes, the syntax isn't identical to how you might use this in vanilla JS, but it's clearly the same concept. It's just been made a little nicer to use - the sugar.

Vue's API has reinvented the wheel. There's one place this syntax is useful and one place alone - Vue itself. It hasn't used my existing knowledge, or pushed me to become more familiar with the platform upon which I'm building. That's not sugar, that's a new language.

I've probably got the vanilla example wrong - when you don't do it frequently it's not the most ergonomic thing in the world. React takes that API, doesn't deviate far from it, and makes it easier to use. Sugar.


Fun example! Strange conclusion. React actually uses a synthetic event system that is subtly different from the native one in all kinds of little ways. In reading the docs it’s hard to even get an overview of what’s different. Bubbling is a bit different, onChange works like the input event for some reason, various props and methods have been added. This is not the case for Vue! It just uses standard events.

The .prevent modifier in Vue is completely optional; you can call .preventDefault() yourself. Note that React also uses a kind of modifier, but only for capturing events (onClickCapture etc). It does not have any way that I know of to add a passive event listener, for some reason.

Vue is the one that actually offers syntax sugar, and does so much more consistently, with the semantics identical to the browser. React changes the semantics for unclear, historical reasons, and then adds half-baked syntax sugar on top.


I'm not claiming React is perfect by any means, and like any popular, relatively long-standing project it is bound by some unwise historical decisions. It just seems to be currently in vogue to take a pop at it. If you want to extol the virtues of Vue/Svelte/whatever then great, but React is still IMO a great option and deserves some defense.


This seems like a very hostile and uninformed take on the alternative tools.

Have you tried building anything with Vue or Svelte recently?

Can you provide some concrete issues you ran into beyond them being “bizarro”?


I consider this example fundamentally broken, in a non-obvious way that reflects, in my opinion, a poor API choice.

  <script>
    let numbers = $state([1, 2, 3, 4]);

    function addNumber() {
      numbers.push(numbers.length + 1);
    }

    const sum = numbers.reduce((x, y) => x + y, 0);
  </script>

  <p>{numbers.join(' + ')} = {sum}</p>

  <button onclick={addNumber}>
   Add a number
  </button>
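For readers puzzling over the non-obvious part: as written, `sum` is computed once during setup and never again, so the displayed total goes stale as numbers are added. Assuming Svelte 5 runes, a sketch of the reactive version wraps the computation in `$derived`:

```svelte
<script>
  let numbers = $state([1, 2, 3, 4]);

  function addNumber() {
    numbers.push(numbers.length + 1);
  }

  // $derived re-runs this whenever `numbers` changes,
  // unlike the one-shot `const sum = ...` above.
  let sum = $derived(numbers.reduce((x, y) => x + y, 0));
</script>
```

Whether an API where the broken version looks this plausible is a reasonable trade-off is exactly the design question being raised.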


> Besides, in the UK, SMS spam is almost non-existent in my experience.

This is not my experience. Perhaps 3-5 years ago was the peak of SMS spam, but I still regularly (1/week minimum) get one of the various "package delivery" | "tax refund" | "diesel emissions" scam/spam texts.


Step 1: Stop wasting money subsidising their bullshit.

Step 2: Take a slice of advice from the revolutionary French.


The Crown Estate is where the sovereign grant comes from, and it's weird. It's "owned" by the current monarch, but only in their role as head of state (the Crown). Realistically it belongs to the country and to the public, and were we to abolish monarchy, would not become the private property of the Windsor family.

So while they don't rely on tax money (outside of the huge amount of tax money spent on their police protection, transport in military aircraft, etc. etc.), they do rely on money which rightfully belongs to the people of the nation.


> were we to abolish monarchy, would not become the private property of the Windsor family.

Today, I expect that's what would happen. But I bet the Windsor family could have kept a lot if they had given up monarchy a while back. Which puts the "rightfully belongs to the people" claim in question, if you believe a king can ever legitimately own anything.


Came here with that exact quote in my clipboard. I have a medical background prior to software, and one of the key learnings there is that wherever there is scope for a human to make a mistake, that mistake will (eventually) be made. It's therefore crucial that wherever possible the capability of making that mistake is eliminated. I see automated deployment as falling into a similar category of utility.


Here's a short list of things the current citizens of England didn't do:

- Genocide against the people who lived in America

- Bring India to its knees, from one of the richest to one of the most impoverished nations in the world

- Addict China to opium

- Cause the Irish potato famine

Furthermore, the narrative that Scotland is subservient to England, and not equally complicit in the historical crimes of the *union of England and Scotland* is misleading. The 1707 Acts of Union [1] were introduced by both nations.

[1] https://en.wikipedia.org/wiki/Acts_of_Union_1707


If:

- Your father became rich as an inventor, creating wonderful gadgets which improved the world

- My father was a warlord

- My father murdered your father, and stole $1M from him;

- invested it; and as a result

- my family now has $1B

- your family is deeply in poverty

You're poor, uneducated, and hungry. I'm at Harvard, have a trust fund, and move in connected political circles. I haven't done anything wrong myself, but I did inherit blood money.

What should happen?

It's not obvious.

Great-great-grandfather?

On the Scotland point, all I can say is that it's more complicated than you present. I did NOT present a narrative of either Scotland as subservient to England, and my exact phrasing was that it has a "mixed history with England." I stand by that. I don't think "equally complicit" is any more accurate than "subservient." The last Scottish independence vote was a 45/55 split.

As for "current citizens," you can look up more recent colonial issues, like the Mau Mau Rebellion.


On point 7 (Testing an unclear hypothesis), while agreeing with the overall point, I strongly disagree with the examples.

> Bad Hypothesis: Changing the color of the "Proceed to checkout" button will increase purchases.

This is succinct and makes very clear what the variable/measure will be.

> Good hypothesis: User research showed that users are unsure of how to proceed to the checkout page. Changing the button's color will lead to more users noticing it and thus more people will proceed to the checkout page. This will then lead to more purchases.

> User research showed that users are unsure of how to proceed to the checkout page.

Not a hypothesis, but a problem statement. Cut the fluff.

> Changing the button's color will lead to more users noticing it and thus more people will proceed to the checkout page.

This is now two hypotheses.

> This will then lead to more purchases.

Sorry I meant three hypotheses.


* Turns out, folks are seeing the "buy" button just fine. They just aren't smitten with the product. Making "buy" more attention-grabbing gets them to the decision point sooner, so they close the window.

* Turns out, folks see the "buy". Many don't understand why they would want it. Some of those are converted after noticing and reading an explanatory blurb in the lower right. A more prominent "buy" button distracts from that, leading to more "no".

* For some reason, a flashing puke-green "buy" button is less noticeable, as evidenced by users closing the window at a much higher rate.

Including untestable reasoning in a chain of hypotheses leads to false confirmation of your clever hunches.


The biggest issue with those three hypotheses is that one of them, noticing the button, almost certainly isn't being tested. But how the test goes will inform how people think about that hypothesis.


Rate of traffic on the checkout page, divided by overall traffic.

We see a lot of ghosts in A/B testing because we are loosey goosey about our denominators. Mathematicians apparently hate it when we do that.


That doesn't test noticing the button, that tests clicking the button. If the color changes it is possible that fewer people notice it but are more likely to click in a way that increases total traffic. Or more people notice it but are less likely to click in a way that reduces traffic.
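A toy decomposition makes this precise (all numbers are made up): the measured rate only pins down the product of the notice rate and the click-rate-given-notice, so very different "noticing" stories can produce identical traffic.

```javascript
// Checkout traffic only reveals P(notice) * P(click | notice),
// not either factor alone.
const checkoutRate = (pNotice, pClickGivenNotice) => pNotice * pClickGivenNotice;

const variantA = checkoutRate(0.80, 0.25); // widely noticed, rarely clicked
const variantB = checkoutRate(0.25, 0.80); // rarely noticed, widely clicked

console.log(variantA, variantB); // identical rates, opposite "noticing" stories
```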


Good observation that the noticing doesn’t get tested.

Would there be any benefit from knowing the notice rate though? After all, the intended outcome is increased sales by clicking.


This is what I was driving at in my original comment - the intermediary steps are not of interest (from the POV of the hypothesis/overall experiment), so why mention them at all?


Probably not, but then that hypothesis should not be part of the experiment.


It is surely helpful to have a "mechanism of action" so that you're not just blindly AB testing and falling victim to coincidences like in https://xkcd.com/882/ .

Not sure if people do this, but with a mechanism of action in place you can state a prior belief and turn your AB testing results into actual posteriors instead of frequentist metrics like p-values which are kind of useless.
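A minimal sketch of that Bayesian step, using the Beta-Binomial conjugate pair with illustrative counts: state a prior belief about the conversion rate, then update it with the observed A/B data to get a posterior rather than just a p-value.

```javascript
// Beta prior + binomial likelihood -> Beta posterior (conjugate update).
// All numbers here are illustrative.
const betaPosterior = (priorA, priorB, conversions, visitors) =>
  [priorA + conversions, priorB + (visitors - conversions)];

// Weakly informative prior centred near a 10% conversion rate.
const [a, b] = betaPosterior(2, 18, 120, 1000);
const posteriorMean = a / (a + b);

console.log(a, b, posteriorMean.toFixed(4)); // 122 898 0.1196
```

Repeating this per variant gives full posterior distributions you can compare directly (e.g. P(B beats A)), which is the "actual posteriors" point above.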


That xkcd comic highlights the problem with observational (as opposed to controlled) studies. TFA is about A/B testing, i.e. controlled studies. It’s the fact that you (the investigator) are controlling the treatment assignment that allows you to draw causal conclusions. What you happen to believe about the mechanism of action doesn’t matter, at least as far as the outcome of this particular experiment is concerned. Of course, your conjectured mechanism of action is likely to matter for what you decide to investigate next.

Also, frequentism / Bayesianism is orthogonal to causal / correlational interpretations.


I think what kevinwang is getting at is that if you A/B test a static version A against enough versions of B, you will eventually get statistically significant results by chance.

Having a control doesn't mean you can't fall victim to this.


You control statistical power and the error rate, and choose to accept a % of false results.


AB tests are still vulnerable to p-hacking-esque things (though usually unintentional). Run enough of them and your p value is gonna come up by chance sometimes.

Observational ones are particularly prone because you can slice and dice the world into near-infinite observation combinations, but people often do that with AB tests too. Shotgun approach, test a bunch of approaches until something works, but if you'd run each of those tests for different significance levels, or for twice as long, or half as long, you could very well see the "working" one fail and a "failing" one work.
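The "run enough of them" failure mode can be simulated directly: under the null hypothesis a p-value is uniform on [0, 1], so a fixed significance threshold fires at a predictable background rate regardless of any real effect.

```javascript
// A batch of independent null tests at alpha = 0.05 yields
// "significant" results by chance alone -- about 1 per 20 tests.
const ALPHA = 0.05;
const N_TESTS = 20;
const N_RUNS = 10000;

let hits = 0;
for (let run = 0; run < N_RUNS; run++) {
  for (let t = 0; t < N_TESTS; t++) {
    if (Math.random() < ALPHA) hits++; // a false positive under the null
  }
}

console.log(hits / N_RUNS); // roughly 1.0 false positive per 20-test batch
```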


The xkcd comic seems more about the multiple comparisons problem (https://en.wikipedia.org/wiki/Multiple_comparisons_problem), which could arise in both an observational or controlled setting.


I don't think these examples are bad. From a clarity standpoint, where you have multiple people looking at your experiments, the first one is quite bad and the second one is much more informative.

Requiring a user problem, proposed solution, and expected outcome for any test is also good discipline.

Maybe it's just getting into pedantry over the word "hypothesis," and you would expect the other information elsewhere in the test plan?


the problem is the hand-wavy "user research"

if you have done that properly, why ab testing? if you did it improperly, why bother?

ab testing starts from a hypothesis, because ab testing is done to inform a bayesian analysis to identify causes.

if one already knows the reason is 'button not visible enough', ab testing is almost pointless.

not entirely pointless, because you can still do ab testing to validate that the change is in the right direction, but investing developer time in production-quality code and risking the business just to validate something one already knows seems crazy compared to just asking a focus group.

when you are unsure about the answer, that's when investing in ab testing for discovery makes the most sense.


> ab testing is almost pointless

Except you can never be certain that the changes made were impactful in the direction you're hoping unless you measure it. Otherwise it's just wishful thinking.


I didn't say anything to the contrary; the quotation loses all the context.

but if you want to verify a hypothesis and control for confounding factors, the ab test needs to be part of a bayesian analysis, and if you're doing that, why also pay for the prior research?

by going down the path of user research > production-quality release > validation of the hypothesis, you are basically paying for research twice and paying for development once, regardless of whether the testing is successful or not.

it's more efficient to either use bayesian hypotheses + ab testing for research (so pay for development once per hypothesis, collect evidence, and steer in the direction the evidence points) or use user research over a set of POCs (pay for research once per hypothesis, develop in the direction that research points).

if your research needs validation, you paid for research you might not need. if you start research already knowing the prior (the user doesn't see the button), you're not actually doing research, you're just gold-plating a hunch; then why pay for research, just skip to the testing phase. if you want to learn from the users, you do ab testing, but again, not against a hunch, against a set of hypotheses, so you can eliminate confounding factors and narrow the confidence interval.


Having a clearly stated hypothesis and supplying appropriate context separately isn't pedantry. It is semantics, but words result in actions that matter.


As kevinwang has pointed out in slightly different terms: the hypothesis that seems woolly to you seems sharply pointed to others (and vice versa), because explanationless hypotheses ("changing the colour of the button will help") are easily variable (as are the colours of the xkcd jelly beans), while hypotheses that are tied strongly to an explanation are not. You can test an explanationless hypothesis, but that doesn't get you very far, at least in understanding.

As usual here I'm channeling David Deutsch's language and ideas on this, I think mostly from The Beginning of Infinity, which he delightfully and memorably explains using a different context here: https://vid.puffyan.us/watch?v=folTvNDL08A (the yt link if you're impatient: https://youtu.be/watch?v=folTvNDL08A - the part I'm talking about starts at about 9:36, but it's a very tight talk and you should start from the beginning).

Incidentally, TED head Chris Anderson said one of these Deutsch talks - not sure if this or the earlier one - was his all-time favourite.

plagiarist:

> That doesn't test noticing the button, that tests clicking the button. If the color changes it is possible that fewer people notice it but are more likely to click in a way that increases total traffic.

"Critical rationalists" would first of all say: it does test noticing the button, but tests are a shot at refuting the theory, here by showing no effect. But also, and less commonly understood: even if there is no change in your A/B - an apparently successful refutation of the "people will click more because they'll notice the colour" theory - experimental tests are also fallible, just like everything else.


Will watch the TED talk, thanks for sharing. I come at this from a medical/epidemiological background prior to building software, and no doubt this shapes my view on the language we use around experimentation, so it is interesting to hear different reasoning.


Good to see an open mind! I think most critical rationalists would say that epidemiology is a den of weakly explanatory theories.

Even though I agree, I'm not sure that's 100% epidemiology's fault by any means: it's just a very difficult subject, at least without measurement technology, computational power, and probably (machine or human) learning and theory-building that even now we don't have. But, there must be opportunities here for people making better theories.


Writing an article about developer mistakes is easier than redesigning your rubbish API though.

