Hacker News

> When you can't compile because of one unused variable that was commented out . . .

You smile in delighted satisfaction, knowing that the compiler will protect you and your team against this all too common form of bit rot — a protection that, until now, had to be enforced merely by convention and review.



> a protection that, until now, had to be enforced merely by convention and review.

It is enforced by linters and almost all IDEs. Compare how the following golang code drops errors silently, and is not caught by the compiler:

    foo, err := getFoo()
    err = processFoo(foo)
    if err != nil { ... }


Wait... what are you expecting here?


The first error is not “handled” but is overwritten.


A compilation error, the first `err` is never used.


> You smile in delighted satisfaction, knowing that the compiler will protect you and your team against this all too common form of bit rot

The one which is purely aesthetic, has no actual relevance, and can trivially be checked for if you want to?

> a protection that, until now, had to be enforced merely by convention and review.

`-Werror -Wunused` is not a convention in any sense of the term.


Not true. In gcc you have -Wunused-... if you want to be warned.


I don't want to be warned, I want to be stopped, and, more importantly, I want everyone else to be stopped, too, unconditionally. -Wxxx doesn't get me there.


For a while I compiled with -Werror, but I found it very annoying. I often want to make a quick test by commenting out a function call or adding an early return, which triggers a dead-code warning and prevents me from continuing.

It might be a good idea to add it to your build bot though; that way you can enforce it for committed code. That's a good compromise IMO.


You use linters to catch those issues (and more). Any quality project will have linters, CI/CD etc. set up. And it's not like golang obviates the need for linters; if anything, it needs them more than other languages to deal with several issues, including (ironically) accidentally ignored errors.


Is ignoring errors really a problem in practice? I’ve been using this language since before 1.0 and I don’t think I’ve ever been burned by this. I’m sure it happens once in a while (albeit at very low frequency), but people in this thread act like you can’t build software in Go because the compiler doesn’t catch every ignored error.


It gets in your way when you're debugging or writing quick scripts.


Yes, it does: -Werror=unused-variable


Presumably by "unconditional" they mean not conditioned on you using a flag.


There's another switch that treats all warnings as errors and therefore bails out.


You could enforce -Werror for all debug builds, but that might be too draconian.


isn't that what go does?

after all there's no way to ignore unused variables with a compiler option


And that's part of the reason why some people will never use Go or other languages that impose purely stylistic restrictions on you.


There’s a huge difference between baking a cake for your kids and running a cake factory.

Enforcing that unused variable rule (and, in some sense, the standardized code layout), for me, sort of assumes all code that gets compiled will be deployed at scale in a mission-critical context.

A language that also wants to cater to hobbyists should, IMO, be a bit more liberal in what it accepts, and provide flags to enforce additional rules for those who know they need them.

(I also think go could go further in enforcing quality. Code that does `in, err := os.Open(path)`, where `in` doesn't escape, should call `defer in.Close()`. The compiler could enforce that.)


Go is primarily a language for programming in the large. It's appropriate for it to make design decisions that reflect that priority.


> Go is primarily a language for programming in the large

Other than the claims they put up on their blogs and presentations, what hard evidence do we have that this is actually the case ? Java and C# are much more suited for "programming in the large" than golang.


I'm speaking about the language's principles of design, not its market efficacy. For this, there is no higher standard of evidence than the claims of the designers.

> Java and C# are much more suited for "programming in the large" than golang.

I don't agree. One point would be Go's faster compile times.


The standard of evidence is what I see when I use the language at work. And so far, I don't see much that validates the golang designers' claims to be honest.

> One point would be Go's faster compile times

I'm looking for more substantive points, as frankly, this doesn't cut it. I just posted this in a different comment on this thread:

> Not much faster than C# or Java in practice, especially when using a build system that caches your build so you don't have to rebuild everything every time. The difference actually gets smaller the larger the program is. Furthermore, it's (unit/functional/integration) tests that usually take the longest time anyway, which you have to execute before submitting your diffs (and caching helps there as well).

I worked at an employer that heavily uses golang, and I'll tell you that build times for actual large scale projects (pulling many dependencies) were not much better (if at all) than Java's.


> The standard of evidence is what I see when I use the language at work. And so far, I don't see much that validates the golang designers' claims to be honest.

Either we're talking past each other or you're being deliberately obtuse.

The claim is that Go was designed to be a language in service to software engineering, for programming in the large. The evidence for this claim is that the designers claim it, that's totally sufficient. The fact that your experience of the language doesn't sync with this claim is irrelevant, because it has no bearing on the design of the thing. If you want to claim that the designers have failed at this goal, that's a totally separate argument.


> The evidence for this claim is that the designers claim it, that's totally sufficient

My argument is exactly that, their claim is unsubstantiated, and people's experiences in practice contradict their claims. Not to mention that some of their design decisions are objectively bad, and go against their claim in the first place: https://github.com/golang/go/issues/16474


If I say I designed X with principle Y in mind, that claim cannot be contradicted or considered unsubstantiated by other people's experience of using X. It's a statement of intent, not of consequence.

> Not to mention that some of their design decisions are objectively bad

Oh, you're just grinding axes. I'm sorry for engaging.


I'd say you weren't exactly engaging in good faith, either. It's pretty clear that their point could be summarised as something like: Go's designers tried or intended to design the language for programming in the large, but they failed (perhaps partially), and therefore Go isn't (after the fact) designed for programming in the large.

You can argue about whether they really failed or not, but your pedantic take on it wasn't helpful for a discussion.


> build times for actual large scale projects (pulling many dependencies) were not much better (if at all) than Java's.

My experience is radically different; I don't believe you.


That's only because languages up until now had it backwards: warning by default instead of erroring by default.

The proper design would fail with an error, unless you added a switch to warn instead.


That does not work, because warnings are generally based on heuristics, may have false positives, and are not consistent between different compiler implementations and versions.

Therefore, failing on warnings is acceptable for internal development, but not for released code.

If I release a project, I would not want users to fail to compile it just because they have a slightly different compiler version.


Actually, it does work, because languages tend to provide ways to notify the compiler that what you're doing doesn't warrant a warning. I've worked on big C codebases, some of which I built and some of which I inherited, and I was always able to get it to the point where it would compile with -Werror -Wall -Wextra etc, many times even -pedantic, with very few (if any) pragmas. The only exception to this was compiling for MSVC, which is no surprise since they don't really care about C.

When I use an external library in my project, a part of me dies inside if I see a stream of warnings. So I understand where Pike, Thompson et al are coming from. They're just misguided in their approach.

The trouble with C is that there's no "official" compiler like modern languages have, so there's no consensus on what merits a warning or not (and the spec is so terrible that you'd have a hard time coming up with them anyway). But for modern languages, you have a clean slate to work from, which means that we have an opportunity to do it right this time.


Practically any large scale project is going to have linters involved at some point. This seems to be a much better suited task for the linter to catch than to be set at the compiler level, since it severely hinders programming locally.


Can't your coworkers just do the simple thing, and Comment Out unused imports? Where is your Zen now...

For every Virtue, there are multiple interpretations. What's simple footwear: flip-flops, Birkenstocks, or Redwings? 5 Fingers?



