Hacker News | marceldegraaf's comments

Counterpoint on the "going all-in": we have a 7-year-old Elixir/Phoenix project that currently sits at ~100K LOC and I couldn't be happier.

It has been absolutely wonderful building this with Elixir/Phoenix. Obviously any codebase in any language can become a tangled mess, but in 7 years we have never felt that the language or framework was in our way.

On the contrary: I think Elixir (and Phoenix) have enabled us to build things in a simple and elegant way that would have taken more code, more infrastructure, and more maintenance in other languages/frameworks.


I think the OP's point was the job market. I.e. you probably aren't hiring for that role.

Not OP, but I made the move from Ruby/Rails to Elixir years ago, so I'll try to answer from my perspective.

Elixir is a functional programming language based on the "BEAM", the Erlang VM. We'll get back to the BEAM in a moment, but first: the functional programming aspect. That definitely took getting used to. I remember being _very_ confused in the first few weeks. Not because of the syntax (Elixir is quite Ruby-esque) but because of the "flow" of code.

However, when it clicked, it was immediately clear how easy it becomes to write elegant and maintainable code. There is no global state in Elixir, and using macros for meta-programming is generally discouraged. That means it becomes very easy to reason about a module/function: some data comes in, a function does something with that data, and some data comes out. If you need to do more things to the data, you chain multiple functions in a "pipe", just as you chain multiple bash tools on the command line.
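A minimal sketch of that pipe flow (the `Report` module and the data are invented for illustration):

```elixir
# Each function takes data in and returns new data; nothing shared is mutated.
defmodule Report do
  def normalize(names), do: Enum.map(names, &String.downcase/1)
  def dedupe(names), do: Enum.uniq(names)
  def format(names), do: Enum.join(names, ", ")
end

# The pipe passes each result as the first argument of the next call,
# much like chaining tools with | on the command line.
"Alice, BOB, alice"
|> String.split(", ")
|> Report.normalize()
|> Report.dedupe()
|> Report.format()
# => "alice, bob"
```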

The Phoenix framework applies this concept to the web, and it works very well, because if you think about it: a browser opening a web page is just some data coming in (an HTTP GET request), you do something with that data (render a HTML page, fetch something from your database, ...) and you return the result (in this case as an HTTP response). So the flow of a web request, and your controllers in general, becomes very easy to reason about and understand.
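You can sketch that flow without any framework at all: below, a "request" is just a map piped through plain functions (all names are made-up stand-ins for what Phoenix's plug pipeline actually does for you):

```elixir
defmodule TinyWeb do
  # Pattern-match the path prefix to pull the user out of the "request".
  def fetch_user(%{path: "/hello/" <> name} = req), do: Map.put(req, :user, name)

  # "Render" a response body from the accumulated request data.
  def render(%{user: user} = req), do: Map.put(req, :resp_body, "<h1>Hello, #{user}!</h1>")

  # Produce the final status/body pair.
  def respond(req), do: {200, req.resp_body}
end

%{path: "/hello/world"}
|> TinyWeb.fetch_user()
|> TinyWeb.render()
|> TinyWeb.respond()
# => {200, "<h1>Hello, world!</h1>"}
```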

Coming back to the BEAM, the Erlang VM was originally written for large scale (as in, country size) telephony systems by Ericsson. The general idea is that everything in the BEAM is a "process", and the BEAM manages processes and their dependencies/relationships for you. So your database connection pool is actually a bunch of BEAM processes. Multi-threading is built-in and doesn't need any setup or configuration. You don't need Redis for caching, you just have a BEAM process that holds some cache in-memory. A websocket connection between a user and your application gets a separate process. Clustering multiple web servers together is built into the BEAM, so you don't need a complex clustering layer.
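As a rough sketch of that "cache is just a process" idea — a tiny in-memory cache that is nothing more than one BEAM process holding a map (the module and message shapes are invented for the example):

```elixir
defmodule Cache do
  # Spawn a process whose only job is to hold a map in its own heap.
  def start, do: spawn(fn -> loop(%{}) end)

  defp loop(state) do
    receive do
      {:put, key, value} ->
        loop(Map.put(state, key, value))

      {:get, key, caller} ->
        send(caller, {:cache, Map.get(state, key)})
        loop(state)
    end
  end
end

cache = Cache.start()
send(cache, {:put, :answer, 42})
send(cache, {:get, :answer, self()})

receive do
  {:cache, value} -> value  # => 42
end
```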

The nice thing is that Elixir and Phoenix abstract most of this away from you (although it's very easy to work with that lower layer if you want to), but you still get all the benefits of the BEAM.


Something I never quite understood: the difference between a BEAM process and an operating system process. The OS has launched one (in theory) BEAM/Erlang VM runtime process with N threads; are we saying "process" here to emulate the OS process model internally within the BEAM's OS process, when really we're talking about threads? Or a mix of threads and other processes? I'm imagining the latter, even across the network, but am I at least on the right track here?

A BEAM process is not an OS thread. The way I understand it, a BEAM process is just a very small memory space with its own heap/stack, and a message system for communication between BEAM processes.

The BEAM itself runs multiple OS threads (it can use all cores of the CPU if so desired), and the BEAM scheduler gives chunks of processing time to each BEAM process.

This gives you parallel processing out of the box, and because of the networking capabilities of the BEAM, also allows you to scale out over multiple machines in a way that's transparent to BEAM processes.
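A small illustration of that out-of-the-box parallelism using Elixir's Task module (the squaring workload is invented): each item runs in its own BEAM process, and the schedulers spread those processes across the available cores.

```elixir
# Task.async_stream spawns one BEAM process per item and collects
# the results in input order; the VM schedules them over OS threads.
squares =
  1..8
  |> Task.async_stream(fn n -> n * n end)
  |> Enum.map(fn {:ok, result} -> result end)

IO.inspect(squares)
# => [1, 4, 9, 16, 25, 36, 49, 64]
```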


I recently replaced the shock dampers on our Miele washing machine (~10 years old) and I was amazed how well designed and ergonomic the inside of the machine is.

Parts are very easy to get at, all screws are Torx of identical size, and there's one very obvious way to take the machine apart and put it back together again. Made the replacement a breeze.


I think the whole premise of judging an entire country on some random product a company from that country made is ridiculous. It's like saying Americans can't develop software because Microsoft screwed up Windows in the last few versions.


Indeed, the parent's phrase "German engineering ... their vacuum cleaners" struck me as a bit ridiculous. Perhaps there is a design standard for a company and "their" products, but this was too sweeping.


> I was amazed how well designed and ergonomic the inside of the machine is

Now do an alternator on a VAG car :)


It was a compliment to Miele in particular, not to all of German engineering ever ;-)

Hey, this looks great! I would love to test the Home Assistant version via TestFlight if that's possible; email is in my profile.


Perfect – I will send the link shortly – waiting for the new build to be cleared by Apple.


What’s the best way to get notified once HA is released? This looks like an insta-buy.


Good question – I don't have a mailing list :) You can probably follow the github repo, and I was planning to post it to r/homeassistant on Reddit.


bought pro immediately on just the idea of being able to integrate into HA!


It's funny that you mention this, and it made me take some time to appreciate I've been working with Elixir full-time for almost 10 years now, and the entire experience has been so... stable.

There's been little drama, the language is relatively stable, the community has always been there when you need them but aren't too pushy and flashy. It all feels mature and – in the best possible way – boring, and that is awesome.


Being boring is the hallmark of technology that's worth investing a career in.


I've moved from Linux to OpenBSD for this reason.

It's all so boring it's wonderful.


I like OpenBSD, but I like Docker and Steam too much to daily-drive it.


FreeBSD can do OCI containers now!


For me it took a tremendous amount of work to somewhat understand the OTP stuff though. It's one of those languages where I can never be confident about my implementations, and thankfully it has features to check whether you have stale processes or whatever. A language I am humbled by whenever I use it.


I love saying this but OTP is a really roughneck standard library. They just added shit to it as they needed it without apparently putting too much consideration into the organization, naming, or conventions.

It makes it very powerful but very disorienting and experience gained with one part of it often does not really prepare you for other parts. Usually each specific tool was created by someone who used it immediately, so it's all reliable in its way. But there is a lot of redundancy and odd gaps.

Elixir's almost extreme attention to naming, organization, and consistent convention is about as far as you can get from this approach. It's fun to have them in the same ecosystem and see that there are actually pros and cons to each approach.


Here's a trick to confidence in a BEAM system. If you get good at hot loading, you significantly reduce the cost of deployment, and you don't need as much pre-push confidence. You can do things like "I think this works, and if it crashes, I'll revert or fix forward right away" that just aren't a good fit for a more common deployment pattern where you build the software, then build a container, then start new instances, then move traffic, etc.
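A toy demonstration of the underlying mechanism (the `Greeter` module is invented, and real systems hot-load compiled release modules rather than source strings):

```elixir
# Compile a first version of a module into the running VM.
Code.compile_string("defmodule Greeter do def hello, do: \"v1\" end")
Greeter.hello()  # => "v1"

# "Hot load" a second version: the VM swaps the module in place,
# and the very next call runs the new code - no restart needed.
Code.compiler_options(ignore_module_conflict: true)
Code.compile_string("defmodule Greeter do def hello, do: \"v2\" end")
Greeter.hello()  # => "v2"
```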

Of course, there are some changes that you need confidence in before you push, but for lots of things, a bit crashy as an intermediate step is acceptable.

As for understanding the OTP stuff, I think you have to be willing to look at their code. Most of it fits into the 'as simple as possible' mold, although there's some places where the use case is complex and it shows in the code, or performance needs trumped simplicity.

There's also a lot of implicitness for interaction between processes. That takes a bit of getting used to, but I try to just mentally model each process in isolation: what does it do when it receives a message, does that make sense, does it need to change; and not worry about the sender at that time. Typically, when every process is individually correct, the whole system is correct; of course, if that always worked, distributed systems would be very boring and they're not.


Erlang's hot reload is a two-edged blade. (Yes yes, everything is a tradeoff but this is on another level.)

Because it's possible to do hot code reloading, and since you can attach a REPL session to a running BEAM process, running 24/7 production Erlang systems can - rather counterintuitively - encourage somewhat questionable practices. It's too easy to hot-patch a live system during firefighting and then forget to retrofit the fix to the source repo. I _know_ that one of the outages at my previous job was caused by a missing retrofit patch, post deployment.

The running joke is that there have been some Ericsson switches that could not be power cycled because their only correct state was the one running the network, after dozens of live hot patches over time had accumulated that had not been correctly committed to the repository.


You certainly can forget to push fixes to the source repo. But if you do that enough times, it's not hard to build tools to help you detect it. You can get enough information out of loaded modules to figure out if they match what's supposed to be there.

I had thought there was a way to get the currently loaded object code for a module, but code:get_object_code/1 looks like it pulls from the filesystem. I would think in the situation where you a) don't know what's running, and b) have the OTP team on staff, you could most likely write a new module to at least dump the object code (or something similar), and then spend some time turning that back into source code. But it makes a nice story.

[1] https://www.erlang.org/doc/apps/kernel/code.html#get_object_...


You can run https://www.erlang.org/doc/apps/kernel/code.html#modified_mo... in some process and make it send notifications to your monitoring when anything stays modified for too long.
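In Elixir syntax, a minimal check along those lines might look like this (`StaleCodeCheck` and the `notify` hook are invented; in practice you'd call it from a periodic process and point it at your real alerting):

```elixir
defmodule StaleCodeCheck do
  # :code.modified_modules/0 lists modules whose object code on disk
  # differs from what is currently loaded in the VM.
  def run(notify \\ &IO.puts/1) do
    case :code.modified_modules() do
      [] -> :ok
      mods -> notify.("Modules modified on disk but not loaded: #{inspect(mods)}")
    end
  end
end

StaleCodeCheck.run()
# Schedule this from a periodic process (e.g. a GenServer using
# Process.send_after/3) so anything that stays modified too long alerts you.
```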


That's part of it, yeah. But, at least in my experience, that tells me you pushed code (to disk) and didn't load it. You could probably just notify at 4 am every day if code:modified_modules() /= []; assuming you don't typically do operations overnight. No big deal if you're doing emergency fixes at 4 am, you'll get an extra notification, but you're probably knee deep in notifications, what's one more per node?

But, that's not enough to tell you that the code on disk doesn't match what it's supposed to be. You'd need to have some infrastructure that keeps track of that too. But if you package your code, your package system probably has a check, which you can probably also run at 4 am.


Thank you for this post and I'll add a note for people who are seeing this and are maybe discouraged about learning Erlang/OTP/Elixir.

I generally agree with you that learning Erlang stuff can be daunting.

I will say that many things worth doing are not easy! Erlang and the whole OTP way of thinking is tough to learn in part because it is genuinely different enough from everything else that is out there that one's odds of being familiar with its conceptual underpinnings are low.

If you have trouble learning Erlang (and OTP specifically) it's not because you're dumb, it's because Erlang is different.

Learning Erlang is not like learning any other dynamic language you've learned. Learning Erlang is closer to learning a bespoke operating system designed to build reliable low-latency long-running systems. It's a larger conceptual lift than going from one dynamic OOP language to another dynamic OOP language.


It also took me quite a bit of time to understand OTP. In fact, I had to have a project that actually required what OTP offered to really get it.

Two things that definitely helped me understand were reading the somewhat-dated-but-still-useful material on the topic in Learn You Some Erlang, as well as reading through the "OTP Design Principles" section of the Erlang System Documentation.


Arq (https://www.arqbackup.com/) is a pretty decent backup solution for macOS (and Windows) that lets you bring your own storage. So you can let it back up to Amazon S3/Glacier, Dropbox, your own NAS with ZFS, or one of the other supported destinations.


The Netherlands has very complete and reliable public datasets (provided by the government) that contain loads of information about roads and buildings, down to individual trees. Additionally, there are sites like Netherlands3D[0] that combine these datasets into a 3D representation of the entire country.

0: https://netherlands3d.eu/


very cool! thank you


Sidenote: thanks so much for taking the time to write the Oban docs. I'm a big user (and fan) of Oban, and the docs are fantastic.


VAT is not a "sneaky backdoor tax", it's imposed on all goods, regardless of where they're produced or imported from.

DMA (and similarly, GDPR) are enforced in EU countries just as much. It's just that the US tends to have more gigantic tech companies that do shady things with user data. Apparently the US doesn't care, but the EU actually does, and so it enforces its laws.


If anything is sneaky, it's the way that in the US you never see sales tax until you're about to pay :D


We’re talking many 10s of billions in “fines” specifically levied against US tech firms where there is no EU competitor.

I don’t necessarily disagree with all of the laws themselves (some are incompetent EU risk aversion, some are good protections) but given the massive never ending fines being applied in bad faith and constantly moving goalposts it is indeed a defacto tariff on US tech firms.


The fines are not imposed in bad faith, they're imposed for actual, provable violations of the law. Companies who do not violate the law are not fined. Complaining about fines is another way of saying "We'd like to trade in the EU while violating EU laws that every EU company also has to adhere to."


The laws are specifically designed to target US firms without affecting EU ones and enforcement of fines and the size of them is highly selective -- the most attractive targets with the highest willingness to pay without getting to the point where they would pull out of the market.

If you do not see the moral hazard in this, I don't know what else I can tell you. If the EU had a seriously competitive tech industry, many of these laws would have never been created, as the EU is not some moral believer in privacy (they fight against encryption domestically), they are just run-of-the-mill protectionists like all governments.


This is nonsense, I'm about to launch a company in the EU and these laws are a major consideration and potential pain point for us, too. They are very relevant for EU companies.

This makes me wonder if US companies complaining about the GDPR and DMA have any idea how many more laws EU companies have to comply with in addition to this. It's not easy.


If you claim GDPR is "not affecting" EU companies your position has nothing to do with reality.


There hasn't been a single DMA fine against an EU company, ever. Nor have any been investigated.

The DMA is a tax on the United States. Look no further than its enforcement and its text (highly targeted).


Could it be because EU companies based on doing shitty and illegal things just never get started in the first place?


DMA is not the same as GDPR.


The thread is specifically about DMA. My parent comment mentions DMA specifically. This 'EU enforces the law equally' position is nonsense, considering Spotify, an EU company, was carved out from the DMA.

Sounds legit!


You're trying to claim a law that is exclusively used to fleece U.S. companies and never EU competitors is 'not bad faith'?

When has the DMA been used against EU tech companies? Never.

Your comment also shows a fundamental misunderstanding of the DMA and GDPR laws. Neither of them are objective laws, and they are applied subjectively without guidance.

Let me be very clear: the EU does not tell you how to comply with either the DMA or the GDPR, period. The law is extremely vague and does not prescribe how to comply in any way, shape or form.


DMA has not been used against EU tech companies because US tech companies are clearly the market leaders in the area the DMA is concerned with. The DMA exists to make sure that companies (from the EU, US, or elsewhere) comply with EU regulations regarding privacy, tracking, and consumer rights.

It's not a "tax" on US companies, it's just that US companies don't bother to comply with the regulations that apply in the EU, and thus get fined.


>US tech companies are clearly the market leaders in the area the DMA is concerned with.

There's a good argument that this is targeted. Why didn't this regulation affect SAP? Their market position gives them leverage over a massive number of companies.

>it's just that US companies don't bother to comply with the regulations that apply in the EU, and thus get fined.

It's not that they "don't bother", it's that they understand complying with the regulation to cost them more than the fine. In other words, the regulation itself is a sort of fine, or tax imposed by the EU, with a magnitude of roughly equal proportion to the fines it imposes.


No offense, but this is a silly argument. Companies in country X tend to develop their products in conformance with country X. Of course, products developed in the EU will conform with EU law. By the same token, I would be surprised if US companies habitually developed products that don't conform with US law.

> It's not that they "don't bother", it's that they understand complying with the regulation to cost them more than the fine.

This means that the fines are not high enough and don't fulfill their purpose. That's an argument for the thesis that the EU is handling fines of violators in a too lax fashion, not the opposite. This has also been the impression of many EU citizens, and it seems to be the reason why so many huge US corporations keep violating EU customer protection rules again and again.

But the reality is also that US companies that violated those rules basically have no EU competition because the EU has an abysmal market in certain tech domains. There simply are no viable EU equivalents to Apple, Google, Facebook, and Microsoft.


>Companies in country X tend to develop their products in conformance with country X

You have the order wrong. The companies came first, then came the laws. So we might reverse this statement to: "countries with company X in them tend to develop their laws so that company X is in conformance with those laws". This latter statement seems likely enough to be true, and is exactly the point of order in this discussion.

>By the same token, I would be surprised if US companies habitually developed products that don't conform with US law.

It's called "growth hacking". Uber was quite famous for it. The only time you'd benefit from breaking the law in a foreign country vs. your own country is if you intend to exit the market of that country; you don't have to worry about paying fines if the country can't reach you. If the intention is to continue doing business there, then any punishment will have to be borne just as if you were headquartered there.

>This means that the fines are not high enough and don't fulfill their purpose.

You're missing the point. The laws scale so that eventually they will be high enough that the company has to conform. The point I'm making is that a company's willingness to break a law shows that the law is costing them money, and we can even estimate how much money it costs them by the size of the fine. If we assume that all laws are fair and just then this just means that the company is evil. However, as we showed above, some laws are unjust, hence them costing a company money can be a way of unfairly extracting money from those companies.


At least as far as I'm concerned, there is no need to further discuss your "laws are made for companies" conjecture. I don't find it plausible for various reasons. Anyway, good luck in your future endeavors!


Explain why Spotify got a carve-out from the DMA despite being an effective monopoly gatekeeper.

Is it because it's an EU company and the DMA is a tax on the United States?

'The law that applies only to US companies is applied equally and fair!'


This argument would be just as valid if the US was the world leader in assassination markets: shitty and illegal practices are shitty and illegal, regardless of whether they were firmly established with significant markets in other countries first.


Unless you're EU company Spotify, who got a carve-out from the DMA despite being a monopoly gatekeeper :)


> provable violations of the law

If you ever tried reading GDPR or DMA... you will realize pretty quickly that there is little meaning in them.

I am totally unsure someone can prove a DMA violation. It's simpler with GDPR because a lot of concepts from it have been already somehow interpreted and agreed upon. But we do not have case law in EU, so I guess even known GDPR violations are often dubious.


DMA is applied equally, you say. How interesting! Can you link me to the examples of the EU going after EU companies for DMA violations? I couldn't find a single one. Not a single case, ever.

The EU wanted to fine Google $35,000,000,000 under DMA. That's a backdoor tax. No European tech company faces this scrutiny. Never have, never will -- because the DMA is a tax on the United States.

It's also interesting that the Google and Meta DMA fines are expected to land in the next week. What a timing coincidence, almost like it's retaliatory (as many articles have suggested).


> Can you link me to the examples of the EU going after EU companies for DMA violations?

https://ec.europa.eu/commission/presscorner/detail/en/ip_24_...


So they haven't gone after a single EU company and the ONLY court cases or investigations on DMA were specifically US companies?


Maybe the companies from the EU just didn’t violate the law? How does enforcement prove that it’s a tax?


Best decision of last year for my homelab: run everything in Proxmox VMs/containers and back up to a separate Proxmox Backup Server instance.

Fully automated, incremental, verified backups, and restoring is one click of a button.


Yes, I'm considering that if I can't find a solution that is plug-and-play for containers, independent of the OS and file system. Although I don't mind something abstracting on top of ZFS, the mental overhead of its snapshot paradigm can lead to its own complexities. A traditional backup-and-restore front end would be great.

I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.

