
Blackbird | Dev Shop | Remote (US or Europe) | Contract

Blackbird builds web + mobile apps for startups and innovative companies. We work on interesting, greenfield projects, and rely on small, veteran teams who have lots of autonomy. Everyone is a remote contractor, and our work culture is low-drama and highly collaborative. No pointless meetings, no bureaucracy, no egos and lots of time spent building things.

We’re looking for senior full stack engineers / CTO consultants / data scientists. We value engineers who are strong technically as well as great communicators. We work with lots of freelancers who appreciate the steady, interesting work.

Our typical stack is TypeScript, React, Postgres, and AWS, although it varies from project to project.

If you enjoy moving fast, working with great engineers, and building something from the ground up, you’ll fit right in.

If interested, reach out directly to me at vance [at] blackbird.us


Hi, will it be OK for me to apply if I am fine with the Europe timezone but not actually in Europe?

Below is my portfolio https://abdullateef-portfolio-rho.vercel.app/


Is it fine if I'm in European timezone but not in Europe?


Love your landing page!


I like the analogy that code is like weight to an airplane (“Measuring programming progress by lines of code is like measuring aircraft building progress by weight” - Bill Gates). You want enough code to achieve your business objectives but no more since code carries a maintenance cost.

I suppose you could say the net value of some code = contribution to business objective - maintenance cost.

This also implies that code could have a negative value, depending on its business value and maintenance cost.


The labor force participation rate ("percentage of the population that is either working or actively looking for work") has been declining for 20 years and had a big drop at the beginning of the pandemic. Consequently, part of the low unemployment trend has been driven by people dropping out of the workforce/not looking for a job.

https://fred.stlouisfed.org/series/CIVPART


Many people think of the labor force participation rate as just the percentage of people working among the working-age adults who could be employed if good jobs were available to them.

In reality, the divisor includes college students, the highly disabled, happy housewives/househusbands, trust fund brats and, most importantly, the elderly/retired. Essentially everyone age 16 and over. Not 19, not 21, 16. I.e. it includes juniors and seniors in HS and college students.

The demographic shifts in the US mean the labor force participation rate will, by necessity, decline.


And some states allow kids to start working as young as 14, so you can have part-time jobs that just entirely disappear if the kid gets laid off / fired / quits, because they don't appear in the employable column.


Are there any statistics on how many people actually are the beneficiaries of a trust fund of any sizable amount, and what percentage of those people are actually legitimate brats?


I have relatives that haven't worked in years. They don't have a trust fund. Instead, they get put on the payroll of a highly profitable family business with a "fake" (but well paying) job. I doubt you will find anyone who will admit to these arrangements openly.


That measure will reflect changing demographics and, specific to our current circumstances, an increase in retirements during covid. Prime-age participation is more useful for projecting forward: https://fred.stlouisfed.org/series/LNS11300060


The LFPR for people under 50 is significantly higher than it was prior to the pandemic.


From a layman’s point of view, it seems like we are grasping at straws when it comes to these thorny quantum questions. Is it fair to say, for example, that we are about as clueless as our ancestors were with the bubonic plague?

To a non-expert it can be difficult to separate which theories lie on solid ground and which theories are highly speculative.


QM is hard to visualise, but we have extremely sophisticated equations and principles for determining how quantum systems will evolve, and can engineer complex functioning systems using that knowledge. For example transistors only work because we understand QM well enough to precisely engineer the energy level state behaviour of electrons in semiconductors.

It would be like accidentally discovering antibiotics during the plague. You might not know how it works or why, but you know what it does and it absolutely gets the job done.


That applies to a lot of medicine right now - we don't know how Tylenol or antidepressants work, and Semmelweis's introduction of handwashing was rejected because he couldn't explain why it was working.


That would not be fair, though it's understandable why a layman might feel that way. The fact is, most physicists don't particularly feel the need to have an explanation for that kind of thing. We have the math, and most people agree on how to use it to make very accurate predictions. Collapse was always a little silly, but there are other possibilities as to why you'd get that kind of effect just from wavefunctions.


"It's only a model" Patsy says, but then they cut to a whole big song and dance routine, so it must be a pretty good model. Or course it is unsurprising that Monty Python's Holy Grail would provide deep physics insights, they were a pretty clever bunch.


> which theories are highly speculative.

So QM itself is on very, very solid ground. You're using it now on your computer.

The interpretations of QM, and the attempts to reconcile its exceptionally well tested mathematics with the decidedly non-QM reality that we experience, are all philosophical, with zero evidence. Everyone just tries to make compelling arguments based on things like Occam's razor about why their horse is the best one in the race, without actually knowing anything at all.

We have placed a few bounds around things like Bell's inequality, so we know that local hidden variable theories are ruled out, but that is about it.

The title article is very interesting because it's one of the first actual tests to probe whether there really is a transition between QM reality and classical reality. Regardless of who actually wins the horse race, the important thing here is that there's slow progress being made on experimentally testing these theories. This is why I've always liked the Penrose models of collapse better than the MWI models: the former have some chance of being actually testable, while with MWI you just blindly decide it is true or not and then argue a bunch about philosophy and never do any experiments, which isn't science. Penrose models of collapse might be wrong, but at least they're in principle testable, which is what's so exciting about this article.


Many worlds is absolutely testable, since if we observe collapse in even a single one of these experiments then that completely falsifies many worlds. If one of these experiments discussed in the article had actually observed a collapse, then I have no doubt we'd be seeing headlines like "many worlds theory disproven", and Nobel prizes for the physicists involved. It would be the biggest discovery in physics for decades.


We "observe" collapse all the time, and can calculate the probabilities of the different possible collapsed states of a not-yet-observed superposed or entangled state using the Born rule. What we don't know, and can't tell, is whether we have seen an "objective collapse" (we live in just one universe, that undergoes a discontinuous change at the time of measurement) or in something like MWI. That is question for philosophers afaik. The experiment in the title falsifies certain models of objective collapse, but others are harder to falsify.


I agree that it's possible to create objective collapse theories that are arbitrarily difficult to falsify, but the difference between objective collapse and regular old decoherence due to interaction with the environment has experimentally measurable implications. In particular, if you expect a certain probability of objective collapse in a certain period of time, then do an experiment with a coherence time longer than that, while keeping your system carefully isolated from interaction with the environment. Then if the wavefunction collapses anyway, that would prove objective collapse and disprove many worlds.


I thought an objective collapse theory was one where the observation (whatever that is) causes the collapse. An observation is necessarily an interaction with the system, where the observer is part of the environment. So if the system is that isolated from the environment, the collapse or lack of it can't be observed. I didn't think it meant the collapse happens after some amount of time like radioactive decay, even without an observation (interaction). Maybe I'm wrong.


You don't get there from MWI though. You get there from trying to prove collapse happens and testing some other theories' predictions.


> Is it fair to say, for example, that we are about as clueless as our ancestors were with the bubonic plague?

“We” in the collective sense are not clueless about quantum mechanics at all. We have an extremely exact model, and pretty much every attempt at proving that it’s insufficient fails. The only “problem” with quantum mechanics is the same as the “problem” we had in Newtonian mechanics when we found that a tennis ball and a bowling ball falling from head height hit the ground at the same time. This was a problem because intuition would have the heavier object fall faster than the lighter. It wasn’t an actual problem with Newtonian mechanics, though, mind you. Our intuition was counter to reality; the problem was the intuition, not the model of reality.

Naturally there were other actual problems with Newtonian mechanics, but none of that had to do with making it more intuitive. And the same is true of quantum mechanics. It doesn’t sound intuitive to most people, but that’s not a problem; it just means your intuition about how systems should behave at these scales is wrong.


> From a layman’s point of view, it seems like we are grasping at straws when it comes to these thorny quantum questions. Is it fair to say, for example, that we are about as clueless as our ancestors were with the bubonic plague?

> To a non-expert it can be difficult to separate which theories lie on solid ground and which theories are highly speculative.

Sure, but isn't that the point of doing these experiments?


I'm all for trying to understand these phenomena and running these experiments. Just trying to get a sense for how much of a grasp we have on these phenomena.

Based on the other responses, it seems like we can mathematically model these phenomena very well and make very good predictions. However, when it comes to explaining why these phenomena exist in the first place, we are like a medieval doctor trying to explain why antibiotics work.


It's like we had miasma theory of disease, but miasma theory was actually highly accurate at predicting disease.

So... not really like our past misunderstanding of disease, which was useless as well as wrong.


I've had a good experience using AWS SQS. It's fairly straightforward and can be paired with SNS if more complex workflows are necessary. I recently migrated to SQS from RabbitMQ since we didn't need all of its features and wanted something simpler.
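For anyone curious, a minimal sketch of what producer/consumer code looks like with the AWS SDK for JavaScript v3 (the region, queue URL, and payload are placeholders, not from any real project):

    import {
      SQSClient,
      SendMessageCommand,
      ReceiveMessageCommand,
      DeleteMessageCommand,
    } from "@aws-sdk/client-sqs";

    const client = new SQSClient({ region: "us-east-1" });
    // Placeholder queue URL for illustration only.
    const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue";

    async function main() {
      // Producer: enqueue a JSON payload.
      await client.send(new SendMessageCommand({
        QueueUrl: queueUrl,
        MessageBody: JSON.stringify({ orderId: 42 }),
      }));

      // Consumer: long-poll for a message, process it, then delete it.
      const { Messages } = await client.send(new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        WaitTimeSeconds: 20,
        MaxNumberOfMessages: 1,
      }));
      for (const msg of Messages ?? []) {
        console.log("received:", JSON.parse(msg.Body!));
        await client.send(new DeleteMessageCommand({
          QueueUrl: queueUrl,
          ReceiptHandle: msg.ReceiptHandle!,
        }));
      }
    }

    main();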


I honestly wonder how much the typical user actually shares the concerns brought up here and in other hostile-web arguments. Most people I know care much less than the Hacker News community about privacy, bloated web sites, web advertisements, etc., and would gladly trade these for free services such as Facebook and Gmail. I'm not saying I agree with them, but I recognize that reality.

If that is the case, then I feel less comfortable with the idea that this is simply evil corporations commoditizing users.

Rather, I think the more nuanced explanation takes into account the fact that the web's users today (now at 5 billion people) and the people shaping its content (eg marketers, designers, business people, etc.) simply have a very different set of values than those active at the web’s infancy.

In other words, perhaps it has become democratized...it's just that this means it looks very different from what you would prefer.


Most people don't have enough knowledge to form a relevant opinion on the things that matter to them, and to be fair most people really don't have the time to become experts in everything. For example, before most people understood the harmful effects of smoking, only the people who actually looked at health data were concerned, and their voices were also drowned out by the people who stood to gain enormous amounts of money by having a population addicted to cigarettes. The average person was probably happy to smoke; after all, it made them look younger, just like the ads they were exposed to tried to demonstrate.

I think in time social media and the hostile web will be seen as a similar public health concern. The data already show the ill effects these dark patterns have on people's mental health, and the unprecedented power of manipulation made possible by social networks. It's hard right now because Facebook et al today are what Philip Morris et al were then: powerful beyond belief due to extreme profit, lobbying, and regulatory capture.

I'm an optimist, so I hope eventually we have our surgeon general moment for social media and the hostile web, and expert voices are heard over those of the monied interests who lobby solely for the continuous growth of their cash cow.


I agree that, with time, many people will (I think/hope) realize social media actually has a lot of adverse effects.

However, to say, "most people will come to the same conclusion I have given enough time and evidence" seems a little naïve, and overlooks the spectrum of preferences that people have for the various things developers consider "hostile."

For example, while a lot of developers (myself included) love the minimalism of, say, Hacker News and hate the bloat of, say, nytimes.com, I think it's unfair to call nytimes "hostile" to the user. Frankly, it doesn't grate the typical user that much and the decisions made by NY Times designers/developers reflect that reality.

I think a similar argument can be made about privacy. Some people would pay for a product that is private and without advertisements; others are fine with the tradeoff.


> surgeon general moment

I get the impression that the GDPR is ushering in the surgeon general moment that you mention. It's slowly starting to improve the health of the web. The latest rulings may even finally kill the cookie banner. Hallelujah.


I know memes aren't welcome here, but I think this one sums it up perfectly: https://i.imgur.com/0fNLxYi.png


> Most people I know care much less than the Hacker News community about privacy, bloated web sites, web advertisements, etc., and would gladly trade these for free services such as Facebook and Gmail. I'm not saying I agree with them, but I recognize that reality.

"Gladly" here seems like a stretch. No one is happy about slow ad filled webpages that make it hard to read the content they want, they aren't clamoring for pervasive tracking, or unnecessary restrictions. Most people hate those things, but they aren't aware of a reasonable alternative offering what they actually do want and are glad for (reliable email service, pictures of their grand kids, etc) so they tolerate the horrible abuses forced onto them and are grateful for what they can get.

The main difference I see is that most people online don't know that things could be any different. Many don't even realize how much more enjoyable their online experience would be if they simply installed an ad blocker (and used a browser that allowed them). To this day I run into people who honestly think that without pervasive advertising and surveillance capitalism the internet couldn't exist and no one would ever bother to create content for it. Folks who saw the internet in its early days know better, and can see what things have improved and what's gotten worse and how much potential hasn't been realized.

A dog raised eating nothing but table scraps is never going to care as much as someone who is forced to eat them after once having a seat at the dinner table.


Consider all of the "free with ads or paid without ads" products such as YouTube. It's pretty clear that a substantial number of people understand these tradeoffs and are willing to accept them, as I imagine the percentage of paid subscribers is minuscule.

Obviously people would prefer, for example, a YouTube that is free, without advertisements, private, etc. But, my point is that developers tend to have different preferences than the general population and, thus, we shouldn't be surprised that the Internet has evolved in a way that reflects those differences.


I agree that using an error monitoring service like Sentry, Rollbar, Bugsnag, etc. is better than a naked log for "exceptional" situations. They provide a lot of nice features including alerting, additional debugging info, the ability to mark bugs as fixed, etc.

I also agree that too many log statements can actually be counterproductive by providing a lot of noise.

But the idea that logs have no value is silly. Every developer writes bugs and will run into situations where the code is behaving differently than they thought in production. Being able to follow the logs to understand what happened is extremely valuable.


I agree that “separating concerns” is generally a good thing.

However, the issue is that the traditional division of concerns is more difficult to maintain in today’s web apps. Compared with web pages 20 years ago, web apps today are dense, interactive and complex. You might have dozens of UI components in a single page, each with their own piece of state, business logic and styling. Moreover, state, business logic, and presentation are oftentimes tightly coupled by design: eg, dragging this slider changes its shading using a complex algorithm.

Therefore, it’s becoming more advantageous to decouple individual UI components, each with their own state/logic/styling, than it is to, say, stick all of the state your web app deals with in a single place.

In other words, it makes sense to encapsulate all that code related to that crazy slider in one place, even if that includes state, styling, algorithm, etc…
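As a rough sketch of what I mean (the component name, its shading formula, and the props are made up purely for illustration), here's a self-contained React/TypeScript slider that owns its state, logic, and styling:

    import { useState } from "react";

    // Hypothetical "crazy slider": state, the shading algorithm, and styling all live together.
    export function ShadedSlider({ max = 100 }: { max?: number }) {
      const [value, setValue] = useState(0);

      // Presentation coupled to logic by design: the shading is a function of the value.
      const shade = Math.round(255 * (1 - value / max));
      const style = { background: `rgb(${shade}, ${shade}, 255)`, padding: 8 };

      return (
        <div style={style}>
          <input
            type="range"
            min={0}
            max={max}
            value={value}
            onChange={(e) => setValue(Number(e.target.value))}
          />
          <span>{value}</span>
        </div>
      );
    }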


A lot of these issues seem difficult to regulate but one that seems more realistic is usage by minors.

What if social media platforms required all minors to have their account associated with a parent account? The parent could monitor activity, institute time limits, etc.


Minors don't use FB much anyway; it's more TikTok now. And of course no minor would use an app monitored by her parents; she will immediately switch to another app.


Sorry, should have clarified: I was suggesting that if the government decides to regulate, it should apply to all social media platforms, not just FB. Updated the original comment.


Unless you're also requiring IDs for age verification, that just means that the kids will lie.


My gripes with traditional CSS styling are:

- Styles are global

- Styles are targeted via brittle, untyped, and opaque "magic strings” basically. This means mistakes are more likely to be caught at run time than compile time. Eg, I wouldn't get a compile time error if I did `position: oops` or `class="oops"`.

- Styles are often "far away" from their target which makes mistakes more likely; ie this deeply nested HTML element in one file is coupled to a deeply nested style sheet in another file

- It is easier to perform complex manipulation of styling if it is made up of JS objects. Eg, if I wanted to do math, or I wanted one style to be a function of another (eg `marginLeft: PAGE_MARGIN`). See the sketch below.
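A hand-wavy sketch of those last two points (PAGE_MARGIN and the plain-object style are purely illustrative, not any particular library):

    import type { CSSProperties } from "react";

    const PAGE_MARGIN = 24; // shared layout constant

    const card: CSSProperties = {
      position: "relative",         // a typo like "oops" here fails to compile
      marginLeft: PAGE_MARGIN,      // one style as a function of another value
      marginRight: PAGE_MARGIN * 2,
    };

    // used later as <div style={card}>...</div>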

That being said, I’m sure there are some better ways of doing traditional CSS since I last tried it that I’m unaware of...

As far as the performance trade-off, I'd love it if styled-components did not come with this but, at least for my use case, it is usually worth it.


"Styles are global" is changing :)

https://youtu.be/jUQ2-C5ZNRc?t=683 Looks like the spec is at https://drafts.csswg.org/css-scoping-1/ with examples like https://drafts.csswg.org/css-scoping-1/#example-f1503361 and more details at https://css.oddbird.net/scope/

Still experimental of course, but could be very useful if implemented alongside Shadow DOM. Note this isn't 2012's scoped CSS: https://caniuse.com/style-scoped

If we get this, the only things I'll continue to dislike about Web Components would be their global namespace and the fact that their HTML attributes can't be rich objects or arrays natively. (I'd love it if web browsers simply implemented JSX, for example, or a syntax that looks like DOM but is actually a function call that can produce DOM. Bonus points if they also implement Markdown or MDX as an HTML element of some kind.)


CSS scoping without a hacky third party framework is long overdue.

Web components have largely failed at this point, though; the spec needs to be rewritten from scratch to be more pragmatic. People have been using React and co. for more than a decade now, so spec writers certainly have enough hindsight as to how people practically program front-end applications.


People have been using XBL in Mozilla before all the web components work started, so… I wouldn't hold my breath.

Looking it up, archive.org says it's been over 20 years… https://web.archive.org/web/20001218094100/https://www.mozil...


Please give me web-components without Javascript and I'll start using them heavily.


Well, it was kind of the whole point of the XHTML2 spec. The Web Components spec is just an inferior version of it; that's why it failed.


Web components are a great brand for a collection of mostly awful APIs. Shadow DOM in particular should have just been a CSS property from the start. Glad to see someone starting to recognize this.


Never mind, I read the link. It's just shoveling more crap on top of the existing shadow DOM instead of finally just cutting bait and making a CSS property called `inherit: reset;` like they should have done to begin with.


CSS modules with TypeScript solve the first three items pretty neatly. You write CSS that lives alongside your component, then import the file into your component and apply the class name to your elements. It compiles to unique class names that cannot conflict with any others. And with TS, you’ll get a compile error if you use a class that doesn’t exist. The last point is possible either with CSS vars or Sass vars. calc() can handle some pretty complicated math. You can also do plenty of math in Sass.
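A rough sketch of the workflow (file names are arbitrary; assumes a bundler set up for CSS modules plus generated type definitions, e.g. via a webpack loader):

    // Button.module.css contains:
    //   .primary { background: rebeccapurple; color: white; }

    // Button.tsx
    import styles from "./Button.module.css";

    export function Button(props: { label: string }) {
      // styles.primary compiles to a unique, non-conflicting class name;
      // with generated type definitions, styles.oops is a compile-time error.
      return <button className={styles.primary}>{props.label}</button>;
    }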


Are there any good examples or tutorials for using CSS modules and TypeScript as you described?



I don't know of any full tutorials that cover both, but this is an ok rundown on how to use them in JS: https://css-tricks.com/css-modules-part-1-need/

This is the webpack loader that generates type def files: https://github.com/seek-oss/css-modules-typescript-loader


I've really enjoyed tailwind after using styled for a while.

At least for my applications, the computers my users run on can't handle the performance implications of styled.

But beyond the performance, I legitimately build faster using tailwind. I also find it easier to understand the components others build as well.


We’re making the switch at the moment and, after a bit of a learning bump, everyone is flying along with it now. It’s basically inline styles++: you have all the context right there without the edge cases that require breaking out into classes to use media queries, etc.

It was a bit of a struggle at first with our previous setup, but I spent a week or so moving everything over to Vite with hot module replacement, which has been life-changing.


Completely agree.

This whole discussion feels silly after a day building things with Tailwind. The system, defaults, docs and tooling are excellent.

And dev speed is ludicrous.


Fully agree. It's usually the people who have never worked with it that share their opinion, always ending with "why do we need this?"

I'd encourage others to give it a try as well. Dev productivity through the roof indeed :)


Comparing Tailwind with Web Components might be a bit unfair. It is well known that Web Components have terrible dev ergonomics.

It would be more fair to compare with frameworks that work with scoped styles (such as Svelte or Vue’s single file components). Developing in these is also an excellent productivity boost.


Can anyone give some references on why Web Components have bad dev ergonomics? Is it just that Web Components don't allow passing of complex types, so you have to stringify everything?


I see many people have a problem with that in the comments here. However, I don’t find it to be an issue. Passing properties (as opposed to attributes) allows complex types, and many popular frameworks have a handy shorthand for it (e.g. in lit-html you write `<my-component .prop=${[1, 2, 3]}>`).

In my opinion the bad ergonomics is actually in authoring the components (not consuming them). Many of the faults have been excellently pointed out previously[1]. The most glaring is the boilerplate you have to write if you want your attributes to reflect a property.

This has an easy fix though, which is that you simply don’t write your web components by hand. You either use a library (like lit-element[2]) or a compiler (like Stencil[3]). I’m personally waiting for a less opinionated compiler with a smaller runtime than Stencil (preferably no runtime; perhaps that is Svelte with a web component target, I haven’t tried it).
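To make the boilerplate point concrete, here's a rough hand-written sketch (a made-up `open` property reflected to an attribute); a library like lit-element reduces this to roughly one line of configuration:

    // By hand: keeping a property and its attribute in sync takes this much ceremony.
    class MyDisclosure extends HTMLElement {
      static get observedAttributes() {
        return ["open"];
      }

      get open(): boolean {
        return this.hasAttribute("open");
      }

      set open(value: boolean) {
        if (value) {
          this.setAttribute("open", "");
        } else {
          this.removeAttribute("open");
        }
      }

      attributeChangedCallback() {
        // re-render in response to the attribute changing
      }
    }
    customElements.define("my-disclosure", MyDisclosure);

    // With lit-element the same reflection is roughly:
    //   @property({ type: Boolean, reflect: true }) open = false;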

There is also a proposal for a declarative shadow DOM[4], which aims to tackle some of the bad dev experience we have with web components. However, I’m personally a little skeptical that it is a good proposal, or that it will fix what most web component devs are concerned about.

1: https://dev.to/richharris/why-i-don-t-use-web-components-2ci...

2: https://github.com/lit/lit-element

3: https://stenciljs.com/

4: https://github.com/whatwg/dom/issues/831


Thanks for the response, I appreciate it!


It is not well known that web components have terrible dev ex, because it isn't true.


Trying to get my team to consider the switch from Bootstrap to Tailwind. All of this talk makes me jealous.


Nothing beats a live demo.


Haven't heard of this until now. Looks like a cleaner version of Bootstrap?


It's kind of like Bootstrap, except it's more focused on the developer building styles instead of providing a number of prebuilt styles.

It also is based more on what CSS can do instead of having its own abstraction. Most of the class names in Tailwind are very close to their CSS key + value equivalents. For example, if you wanted to write `float: right` in CSS you'd use the Tailwind class `float-right`.
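A tiny sketch of what that looks like in markup (the class names follow the Tailwind docs; the component itself is made up):

    // Each utility class maps to a small piece of CSS:
    //   float-right -> float: right;  p-4 -> padding: 1rem;  text-sm -> font-size: 0.875rem;
    export const NewBadge = () => (
      <span className="float-right p-4 text-sm font-bold text-gray-700">New</span>
    );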


>"- Styles are global

- Styles are targeted via brittle, untyped, and opaque 'magic strings' basically.

- Styles are often 'far away' from their target which makes mistakes more likely; ie this deeply nested HTML element in one file is coupled to a deeply nested style sheet in another file"

You can get a long way to solving those three issues with good organization, but only if you control your whole project, stick to the plan, and don't have any libraries.

But over the course of a long lived project CSS is a real foot-gun that even well meaning developers will shoot themselves with. It's hard to unweave a tightly woven CSS nest, and once you introduce third party libraries and custom CSS things start to get wild. Who hasn't seen a "custom.css" overriding the "global.css" which was itself just a bandaid to fix an issue with some third party library.

I use a method I called "Contexts and Components". It starts with a reset. 98% of the CSS is inside the component files, and target component classes, eg <div class="product-card">. But of course things can change depending on what page they're on, and those are called contexts. Maybe a product card on the home-page has a border or whatever. That gives you a way to address client requests for things to differ but without making crazy complicated configurable components.

That way things are really shallow, only one level deep unless they are modified by a context, which makes it two levels deep. I find I need way less CSS, no crazy hard to reason about selectors, and it's all very easy to understand. Best of all, it's easy to remove stuff entirely, so you don't end up with overrides over overrides.


I keep going back and forth on this one. Right now I believe it's best to not fight against CSS and use it as it's meant to be used.

I try to focus on writing nice HTML, often with multiple relevant class names. Then later I make things look the way I want in CSS.

I find it hard to think in CSS and translate that to HTML, but a lot of developers I know tend to work that way.


Complex manipulation can easily be done with a preprocessor. You can work around the other issues you mention by just, well, learning css instead of fighting against it.


You can address points 1 and 3 via (s)css modules.

But points 2 and 4 stand.


You can also kind of address 4 by using `calc` and custom properties in CSS.

(eg: `margin-left: var(--page-margin);`)


We started using CSS in JS a few years ago because we couldn’t, and still can’t, use CSS variables (some of our customers still require IE11 support and we can’t drop it until 2022).

The main issue I still see with CSS variables is IDE support and static validation. With my CSS written in typescript I know that every variable I’ve imported works statically.


Point 2 is solved if you add TS to the css modules.

