
I've been porting some JSON/HTTP API servers from Go to Rust recently, and it's been a really nice experience. Rust has convenient struct serialization just like Go, and Hyper is coming along as a nice (relatively) low-level HTTP library. If you want a higher-level HTTP server abstraction, go with Iron, which builds on Hyper.
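
Concretely, the struct serialization side looks roughly like this with serde and serde_json (my assumption of the usual crates for the job; the type and field names below are just made up for the example):

    // Cargo.toml (assumed): serde = { version = "1", features = ["derive"] }, serde_json = "1"
    use serde::{Deserialize, Serialize};

    // A hypothetical API payload, comparable to a tagged struct in Go.
    #[derive(Serialize, Deserialize, Debug)]
    struct User {
        id: u64,
        name: String,
    }

    fn main() {
        let user = User { id: 1, name: "alice".to_string() };

        // Struct -> JSON string (roughly json.Marshal in Go)
        let json = serde_json::to_string(&user).unwrap();

        // JSON string -> struct (roughly json.Unmarshal in Go)
        let back: User = serde_json::from_str(&json).unwrap();

        println!("{} -> {:?}", json, back);
    }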

The I/O libraries still have work ahead of them (mostly on the async story), but they're OK for now and the plans I can see look promising. I see a bright future for those of us who like to write robust and high-performing API servers, so don't shy away from Rust in that domain!

EDIT: Forgot to say that everyone's very friendly in #rust-webdev where Hyper/Iron authors and others can be found.


It is an interesting question and one that's quite debatable. I enjoyed the discussion in [0], the 2014 panel "Systems Programming in 2014 and Beyond", with Bjarne Stroustrup (C++), Niko Matsakis (Rust), Andrei Alexandrescu (D) and Rob Pike (Go) as panelists.

Have to say I agree with Bjarne and Niko on most points discussed.

[0] https://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Pa...


It probably wasn't the first occurrence, but I have a vague memory of first hearing it formulated like that from Niko Matsakis in a Rust video on Air Mozilla.


Steve: Perusing or pursuing the trifecta? =o)


I always friggin' do this, it's one of those typos I perpetually make :(

Care to make a PR so you get a commit credit, or should I just fix it? rust-lang/blog.rust-lang.org is the repo.


It's fine, go ahead and fix it. And thanks for your hard work on Rust! =o)


Done: https://github.com/rust-lang/blog.rust-lang.org/pull/28

And you're very welcome. I couldn't do it without Mozilla paying my bills, though I hope to strike out on my own later in the year.


Obviously need a perusing stage before committing changes to the docs.


I also apologize beforehand for a long reply. Long posts get long replies.

I can definitely understand this point of view, but I just can't agree. The parent probably doesn't want his/her claim that Nim seems as memory-safe as Rust to be interpreted literally, as a literal interpretation would make the statement false (by any fair comparison using idiomatic code from both languages to accomplish the same thing).

What the parent is surely talking about is how it pans out in practice. Different languages have their different trade-offs here, with different pitfalls, and denying that Nim can crash and burn due to memory management mistakes would be false. Denying it with respect to Rust would also be false, due to Rust's optional unsafe features, but the important distinction is how easy it is to make these mistakes in idiomatic code and what the consequences will be. Only time will tell, which is why anecdotes are of interest, of course - both the parent's and everyone else's.

However, I find some choices of words to be a bit disingenuous (though hopefully unintentionally so).

Take the claim about being able to do "so much more at a low level", e.g. being able to switch out libc variants, which allegedly is not possible in Rust due to accidental coupling. Is this a temporary difference? If so, it may only be relevant in the short term. I can't answer this question, but it would be interesting if someone did.

Most importantly: "Even though it's not as formally safe as Rust yet, in practice it feels and acts as safe, without the cognitive overload." Yet? Making Nim as formally safe as Rust would require completely changing key aspects of the language. Feeling as safe is possible, and acting as safe is possible too...

... until it doesn't anymore, that is: the team grew (as teams always do - some leave, some join), the code base ballooned, and someone made a simple memory management mistake somewhere that is now a serious debugging problem, and no code can be ruled out of the necessary auditing beforehand because the entire code base is vulnerable to these classes of errors.

Memory management errors have a way of resulting in seriously trashed core dumps, etc., sometimes severely complicating and limiting debugging possibilities. Where's my stack trace? Oh, we seem to have been executing data and not code. Where did we come from? Oh, no intelligible stack frames. No valid return address in the register, etc. I've been there, as I'm sure many of us have. Memory management errors can lead to complete debugging nightmares, and that's if they're even reproducible by developers. If they're only triggered at the customer's site due to their unique circumstances, good luck. Having a deterministic test trigger it and being able to run it through valgrind until it's solved is the optimal cakewalk scenario, but that's not real life most of the time.

Rust can step quite easily from low-level stuff to high-level features and meta-programming too, and I feel no real comparison is made by the parent, only talk of Nim's features. The central premise for Rust, as always, is that it provides what it can provide while still maintaining memory safety. Rust without this prerequisite would not be Rust, and the constraints for everything else flow from it.

The repeated claim of design-by-committee is also not the best one. Having followed Rust's back-and-forths for years, I have to say the discussion has functioned extremely well, and most importantly: the choices have been very pragmatic within the constraints of preserving the key safety features of the language.

Personally, having gone through many languages all over the abstraction level spectrum and specifically having spent quite some time in embedded C/C++, I am terribly, horribly tired of fatal runtime errors in general and memory management errors in particular. They can cost so much time to debug and fix that development time can swoosh past what it would have been in a language with a type system preventing them in the first place. Your mileage may vary, of course!

There is something to be said for languages that simply eliminate these classes of errors at compile time, and that something is actually a lot. For the small programs, tooling, scripts... I can write them in anything. There are hundreds of choices. That's not what this is about. For the software that matters, that ships and that others will expect to work, I no longer have the patience or tolerance for these error classes.

Many languages with such safety guarantees (and Nim is not one of them) have already existed for a long time, but very few that can be applied to all the use cases that Rust can. That is what it's about. This is why people are excited.

Software development is a form of art and a form of engineering at the same time. A lot of software doesn't have to be as reliable as space shuttle firmware, and I'm not claiming it has to, but the general bar could sure as heck be raised several notches. We know how the world works, and yesterday's quick hack or proof of concept is today's firmware shipment for use in live environments. Successful software lives for a long, long time. Software is eating the world, and society is now at its mercy.

Personally, I will sleep so much better knowing that these error classes were wiped out at compile time in 99.?% of the code I shipped to those customers, while maintaining performance on par with the C code it replaced.

These are of course my $0.02, and I hope it didn't come across as combative as that was definitely not my intention - only passionately conveying my own perspective. :)


Thanks for the thoughtful response- it didn't come across as combative at all to me.

The true, provable safety of Rust was what drew me to it as well. I've always hated having to choose between unprincipled memory management (with its security and functionality vulnerabilities that can lie dormant for many years before kicking your butt) and garbage collection forcing you away from the metal and removing deterministic reasoning about memory usage, runtime behavior, and runtime overhead.

I've been going through the academic papers, forerunners, and source-code for Rust's static memory routines and borrowing semantics. My hope and suspicion is that it can be added to Nim without core changes to the language like lifetimes. It's definitely not a guarantee, but with lots of experience in both languages now I feel very strongly that adding region-based-memory-management to Nim is possible while adding Nim's clarity, abstractions, and efficiency to Rust feels impossible.

I agree that Rust is the only responsible choice right now if provable memory safety is a primary concern, but I suspect that will change. In the meantime, for us anyway, the price in productivity was too high once we discovered that we could do manual memory management in Nim in very well-considered, isolated places and confidently use Nim's fast, real-time deterministic per-thread garbage collection for everything else without a noticeable performance penalty.

Having said that, I don't think I actually disagree with anything you said (:


> I've been going through the academic papers, forerunners, and source-code for Rust's static memory routines and borrowing semantics. My hope and suspicion is that it can be added to Nim without core changes to the language like lifetimes. It's definitely not a guarantee, but with lots of experience in both languages now I feel very strongly that adding region-based-memory-management to Nim is possible while adding Nim's clarity, abstractions, and efficiency to Rust feels impossible.

I'm not so sure. The trickiest part of getting memory safety without garbage collection working is not the lifetimes but the borrow check, which relies on inherited mutability and, most importantly, the lack of aliasable mutable data. The APIs and libraries of garbage collected imperative languages invariably depend on aliasable, mutable memory. Consider something as simple as a tree or graph data structure with mutable nodes. Or consider taking two mutable references to different indices of an array, or splitting an array into mutable slices with dynamically computed indices. These are all things you (presumably) can do today in Nim, and a borrow checker would break them. The likelihood that the library APIs depend on being able to do it is very high.
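
To make the array example concrete, here is a small sketch of how this plays out in Rust today (just an illustration, nothing Nim-specific):

    fn main() {
        let mut data = [10, 20, 30, 40, 50];

        // Two simultaneous mutable borrows are rejected by the borrow checker,
        // even though the indices are clearly distinct:
        //
        //     let a = &mut data[0];
        //     let b = &mut data[4]; // error: cannot borrow `data` as mutable more than once
        //
        // The sanctioned route is an API that encodes the disjointness, e.g.
        // split_at_mut, which hands back two non-overlapping mutable slices:
        {
            let (left, right) = data.split_at_mut(2);
            left[0] += 1;
            right[0] += 1;
        }

        assert_eq!(data, [11, 20, 31, 40, 50]);
    }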

I never say never: you could implement multiple types of references, some GC'd and some not, and copy the Rust borrowing semantics. But they would be incompatible with most existing APIs and libraries. I don't think it can be realistically retrofitted onto a language without breaking most APIs: aliasable, mutable data is just too common.

Regarding efficiency/performance, what in particular seems impossible to add to Rust?


Thanks for your reply! An enjoyable exchange in the midst of what often feels like a bit of a very tiring flame war.

I am constantly on the lookout for languages that could be suitable for replacing (or greatly diminishing) the use of C/C++ in my work, and so far Rust is one of the front runners.

However, I am also very much aware of some of the troubles I would most likely face in convincing my colleagues, like language complexity and productivity, and I completely respect the decision that it may not be worth it, depending on a wide variety of factors.

I try to keep an open mind, and I look forward to reading more about the improvements to Nim you envision! Thanks again (and good night). :)


To be fair to Nim, I don't see any reason why it couldn't be made memory safe by using the Boehm GC (though I'm not an expert in Nim by any means). Of course, using the Boehm GC negates the advantages of the thread-local heaps, but I don't think that Nim's implementation of them scales up to large-scale software in any case for the reasons I detailed in my other comments. IMHO, if you have a garbage-collected, multithreaded language that must compile to C (and doesn't need interoperability with a reference-counted object system like e.g. Swift does), the Boehm GC is the best choice.


Thanks for correcting me, Patrick. I certainly didn't mean to be unfair (especially as I replied to a comment I felt wasn't being completely fair itself, intentionally or not), and I should have been more precise about the use cases.

I agree about the GC considerations. I meant my points to apply mainly to the use cases where safety such as that offered by Boehm is eschewed in order to gain other powers at its expense - powers which I feel Nim proponents often bring up as strengths during these discussions.


> Take the claim about being able to do "so much more at a low level", e.g. being able to switch out libc variants, which allegedly is not possible in Rust due to accidental coupling. Is this a temporary difference? If so, it may only be relevant in the short term. I can't answer this question, but it would be interesting if someone did.

It's definitely intended that the Rust standard library can compile against many libcs. I personally hope that it can eventually be completely self-contained and not even link to libc in certain configurations.


Thanks for clearing that up!


This is good advice, and I agree!

Anything larger than a small project will need to be put aside for a while from time to time (for whatever reason), and I've found that making it easy to pick it up again is one of the most important things you can do.

My strategy for this (admittedly most suitable for software projects):

1) Write down your large, fantastic vision for the finished, complete project.

2) Write down the smallest possible feature that would bring you some utility. MVP, if you like.

3) Take this requirement and write an automated test at the highest level of abstraction possible. It should almost read like the MVP requirement you just formulated. That doesn't have to mean BDD, just something you can read easily (see the sketch after this list).

4) Implement until the test passes.

5) Go to 2.
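
As a rough sketch of what step 3 can look like (in Rust just for concreteness; the note-taking "MVP" and every name below are made up for illustration):

    // A top-level test that reads almost like the requirement:
    // "I can save a note and read it back". NoteStore is a hypothetical type.
    struct NoteStore {
        notes: Vec<String>,
    }

    impl NoteStore {
        fn new() -> Self {
            NoteStore { notes: Vec::new() }
        }
        fn add(&mut self, text: &str) {
            self.notes.push(text.to_string());
        }
        fn all(&self) -> &[String] {
            &self.notes
        }
    }

    #[test]
    fn mvp_save_a_note_and_read_it_back() {
        let mut store = NoteStore::new();
        store.add("buy milk");

        assert_eq!(store.all().len(), 1);
        assert_eq!(store.all()[0], "buy milk");
    }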

What I've found over the years is:

- Quickly reaching milestones of utility is key for keeping motivation while working. This is obvious, of course.

- The written down vision brings the motivation and big picture plan back when picking things up: Get your blood pumping again!

- The automated tests help keep implementation going when your drive falters. What should I do now? You should just make the test pass. Nothing harder than that. No need to think super hard. Just try to make it pass.

- The automated tests help you get hacking again without the immediate fear of breaking things: Get the ball rolling again, quickly and easily!

- The automated tests also serve as your requirement specification so far: You can easily see what already works.

Side projects are so important, and one should make it as easy as possible to sustain and restart them! Good luck!


I'm a bit confused by this approach. Is it related to some established design approach or principles?

What is MVP or BDD? Could you provide an example of your process?

Would it be similar to this example? 1. I want to put out an RPG for iOS. 2. Implement a Pokémon-style battle system -> when health < 0, NPC dies. 3. Not sure where automation fits in? 4. Code it and test until it works. 5. Move on to adding experience when killing an enemy, etc.


Sorry about the acronyms.

MVP comes up a lot on Hacker News; in that context it means Minimum Viable Product, i.e. the least amount of work you must do to make something useful for users/customers.

BDD in the software context means Behaviour-Driven Development and is an opinionated way of writing tests.

With regards to step 3 in your example: Sometimes it can be challenging to design automated tests for certain functionality.

For a game, it could be scripting "fake" user input that is supposed to have a certain end result, and then verifying that that end result actually occurred. E.g., the player makes a few good moves, then a few bad moves that should cause health to drop below 0, and then you check that the player actually died.
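
Very roughly, something like this (Rust just for concreteness; the tiny "battle system" and all names are invented for the example):

    // A deliberately minimal stand-in for the game logic under test.
    struct Player {
        health: i32,
    }

    impl Player {
        fn apply_move(&mut self, damage_taken: i32) {
            self.health -= damage_taken;
        }
        fn is_dead(&self) -> bool {
            self.health <= 0
        }
    }

    #[test]
    fn player_dies_after_enough_bad_moves() {
        let mut player = Player { health: 10 };

        // "Fake" input: two good moves (no damage), then bad moves until health runs out.
        player.apply_move(0);
        player.apply_move(0);
        player.apply_move(6);
        player.apply_move(6);

        assert!(player.is_dead());
    }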

Automated tests are a life saver, and often it pays off in the end to learn how to write them for almost any functional requirement.

I hope I answered your questions!


Some of the advice in this post relates to restrictive tooling and some to adjusting one's attitude. Whereas I used to rely on the former, nowadays I find the latter brings a completely different (and better) kind of calm.

This article [0] is one of the best ways I've ever seen it put, and I think both the culling and surrender parts it mentions are very important - not just one of them.

Inner calm is so important. Like countless others in this post have said: Try meditation, and give it time. It can be an important piece of the puzzle of silencing those voices.

Good luck.

[0] The Sad, Beautiful Fact That We're All Going To Miss Almost Everything: http://www.npr.org/blogs/monkeysee/2011/04/21/135508305/the-...


> There is absolutely a place for an API like this -- but that place is not as a replacement for OpenSSL.

I disagree. I think it is exactly the place for an OpenSSL API replacement.

As is alluded to in this link (thanks cremno) [1], they want a critical mass of users before it makes sense to deprecate the libssl API. Deprecation seems necessary in order to perform the serious cleanup/reimplementation under the hood that they would like to do.

While new projects may have an easier time, they admit that adoption by existing projects is significant work. However, so many large projects already exist that it seems many of them will need to be won over for deprecation to be practical. Many existing projects (see other comments in this post) use this transport layer flexibility in the OpenSSL API, and not offering it with the new API may slow adoption.

Yes, like you say, another API layer could be added, and it is a deeply subjective matter, so let's agree to disagree... But composability is so important and only becomes more so as we tire of wheel reinventions leading to unnecessary bugs. Enabling modularity and reuse needs first-class API attention. Being able to easily plug this new API as a source/sink onto the myriad of existing (and yet to be created) I/O frameworks would be a major win. This is my $.02 and humble appeal to the people doing the fantastic work on this lib. My thanks to all of you!

[1] https://marc.info/?l=openbsd-tech&m=141524972826918&w=2


I agree whole-heartedly, and wrote as much in another comment below. This should not be overlooked.


Please correct me if I'm wrong, but I believe that the OpenSSL API allows one to implement the actual transport of the TLS protected data however one chooses, although it provides convenient socket/fd transports. Is this a possibility with this new API? Regardless of whether OpenSSL can do it today, I consider this a very valuable feature, as it allows for more flexible composition of the TLS implementation and the rest of the system.

Edit: Grammar.


> I believe that the OpenSSL API allows one to implement the actual transport of the TLS protected data however one chooses.

Yes. Through the terrible BIO_* API.

It's imperfect, but it lets you treat TLS as a "black box" that you shove encrypted/cleartext data into, and get cleartext/encrypted data out.

This API is good for socket communication. Nothing more.

This API could be extended by adding "underlying" read and write functions. Off the top of my head:

    typedef int (*tls_underlying_io)(struct tls *ctx, void *u_ctx, const void *in, size_t inlen, void *out, size_t outlen);

    int tls_set_io(struct tls *ctx, void *u_ctx, tls_underlying_io u_read, tls_underlying_io u_write);

Where u_read() is used by libtls instead of calling read(fd, ...), and u_write() is called by libtls instead of write(fd, ...).


The hostname also needs to be verified somewhere in there.


From the link:

* tls_connect() connects a client context to the server named by host. The port may be numeric or a service name. If it is NULL then a host of the format "hostname:port" is permitted.

* tls_connect_fds() connects a client context to a pair of existing file descriptors.

* tls_connect_socket() connects a client context to an already established socket connection.


I did read the linked page, but perhaps I'm misunderstanding something. To me, this looks like it still requires sockets or fds. I want to be able to grab the TLS-protected byte stream conveniently and transport it to the other endpoint however I see fit. This would make it easier to use this TLS implementation on top of I/O frameworks that don't necessarily map directly down to sockets/fds. Again, for improved composability and code reuse.


> To me, this looks like it still requires sockets or fds.

Yes. Which makes it useless for protocols that carry TLS, e.g. EAP. So I can't use it in my pet project: http://freeradius.org/

The new API is useful if you want to do TLS over sockets. It's completely unhelpful for everyone else.


It is very much a kluge but you can use pipe(2); it will give you two fds both of which you control. Then you can read/write from the "local" end and serialise the traffic however you wish.


Yes, it's possible to "communicate with self" in various ways to intercept, but it's really ugly. Also, performance may suffer.


Since they have not deprecated many (any?) of the previous APIs yet, you still have that possibility.

They just added a new API that answers the most common use cases in a simple, straightforward, and internally consistent way.


And I really like the new API. Kudos to the creators. I would really like to use this new API in a simple, straightforward and internally consistent way - and grab the byte stream after encryption to feed it into my existing I/O framework. Please forgive the sarcasm, but all my comments are derived from a will to see this new API succeed, which is why I don't want this use case to be overlooked while designing it. If I was happy staying with the original OpenSSL API I probably wouldn't have voiced my concern as fervently.


I believe so too. It seems like a reasonable security tradeoff to limit the kinds of bugs that will be the result of such flexibility.


> I believe so too.

That sounds strange, as I believe we're in disagreement. I believe the flexibility should exist, and choosing the transport medium of already-encrypted data should be an option for the developer. This is completely unrelated to the security-critical configuration of the TLS protocol, i.e. certs, keys, ciphersuites, etc.

