wsfull's comments

Care to be more specific than "underlying stuff"? The code and design of xbps look nothing like the code or design of pacman or apk.


Actual package recipes. BSD-style packaging systems are written in BSD make using a mk/ framework rather than Arch-style shell scripts. If a package already exists for FreeBSD Ports I would check that first because it's much more similar to what I'm used to.

Fundamentally pkgsrc is built on very low-level tools: awk, shell, make, cwrappers, and pkg_install, although you can manage it with high-level ones (obviously pkgin is there, but there's also a lot of other third-party tooling).


Why are you comparing pkgsrc to pacman? The original comparison suggested in the parent comment was between xbps and nixpkgs or other Linux package managers.

pkgsrc bootstraps itself using a program called "bootstrap" in the pkgsrc directory. It builds a version of NetBSD's GCC toolchain and has no reliance on the host's userland. IME, this is very reliable and is its single greatest strength.

Beyond that, pkgsrc is only as good as the build processes chosen by the author(s) of the software being built. These of course vary widely in sanity, and depending on the packages one is building, the otherwise sane pkgsrc build process can quickly become a black box with packages that pull in many dependencies. It does break sometimes, but this is a fault of the target software authors, not pkgsrc.

pkgsrc is first and foremost a system for building packages. Linux distributions OTOH tend to be much more focused on binary package managers. Most Linux users do not build packages from source. There is really no comparison.

Your comments do not sound like those of a daily NetBSD user. I have been one for the last 15 years and am therefore all too familiar with /usr/share/mk. Most Linux users do not seem very comfortable with BSD makefiles.


> Why are you comparing pkgsrc to pacman?

The original comment suggested that XBPS was similar to pkgsrc. I suggested that XBPS has far more in common with Arch Linux's package manager than with pkgsrc or anything else in the BSD world. As you yourself note in your next paragraph...

> Your comments do not sound like those of a daily NetBSD user.

I have had commit access for three years. My name is on the latest release announcement.

You prick.


That's not nice and totally uncalled for. Three years is a very short time.

The original comment did not compare xbps to BSD package managers. What it said was that xbps was written by a former NetBSD developer and thus arguably has different qualities than other Linux package managers. It is a different approach from someone with different sensibilities than a Linux developer. I have read comments from Void Linux maintainers and users that say more or less the same thing.


If the cost to students attending universities that have adequate subscriptions is small -- previously, commenters suggested this portion of their tuition amounted to only a small annual fee -- then why would the publishers not offer individual subscriptions to the public at a similar cost? As any serious student knows, the "a la carte" prices are absurd -- how successful has that pricing been for them?

One silly idea deserves another:

Generally, journals manage their subscriptions through filtering on IP address ranges.

Imagine a customer-ISP agreement that had an option whereby a subscriber could "opt in" to academic journals for a few extra dollars per month.

These subscribers might then be assigned addresses in certain designated ranges by their ISP.
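
A minimal sketch of how the journal's side of that check might look, assuming the ISP publishes the CIDR blocks it assigns to opted-in subscribers (the ranges and addresses below are made up for illustration):

    import ipaddress

    # Hypothetical CIDR blocks an ISP might reserve for subscribers who opted in.
    OPTED_IN_RANGES = [
        ipaddress.ip_network("203.0.113.0/26"),
        ipaddress.ip_network("198.51.100.128/25"),
    ]

    def has_journal_access(client_ip: str) -> bool:
        # The journal's server would run this check against the connecting address.
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in OPTED_IN_RANGES)

    print(has_journal_access("203.0.113.7"))   # True: inside an opted-in block
    print(has_journal_access("192.0.2.10"))    # False: ordinary address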


If you are using your own DNS cache listening on a loopback interface, then the risks of "DNS cache poisoning" are not what they are when you use a third party cache like your ISP's, OpenDNS, Google or the ones in the CSV file at the github page referenced in this blog entry.

Unless you are sharing your loopback with the network somehow, one could argue that with a localhost cache the risk is nonexistent.

According to this blog entry the reason for using DNSSEC is to "prevent DNS poisoning".

If that is true, and that is what I would expect, and we've minimized the risk of cache poisoning by using a localhost cache exclusively (no third party caches), then why use DNSSEC?

If there are other reasons, like making centralized control (censorship?) of DNS easier for ICANN, etc., then this blog post has omitted them. Does that imply they are insignificant?


> If that is true, and that is what I would expect, and we've minimized the risk of cache poisoning by using a localhost cache exclusively (no third party caches), then why use DNSSEC?

How have you authenticated that your ISP isn't intercepting DNS requests and serving them out of their own cache that lies?


Connecting to the remote service and validating the ssl cert.


Yes.

As yet, there's still no foolproof way to verify/authenticate an endpoint on the internet. Not to mention the issue of so-called "host security".

Checking for a file on the remote host, e.g., a cert or a key a la ssh, seems to me a more sensible approach than relying solely on the promises of a "trusted third party" (CA's, ICANN, registrars, etc.) that you have never met.
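
For example, something along these lines -- a rough sketch of pinning a server certificate's fingerprint rather than trusting the CA chain alone; the host and the stored fingerprint are placeholders:

    import hashlib
    import ssl

    HOST, PORT = "example.com", 443
    # Fingerprint recorded on a previous visit (placeholder value).
    KNOWN_FINGERPRINT = "replace-with-previously-recorded-sha256-hex"

    # Fetch the server's certificate and hash its DER encoding.
    pem = ssl.get_server_certificate((HOST, PORT))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()

    if fingerprint != KNOWN_FINGERPRINT:
        raise SystemExit("certificate fingerprint changed -- do not trust this endpoint")
    print("fingerprint matches the one on record")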

Fears of ISPs intercepting port 53 traffic were not the reason why DNSSEC was revived from the failed-protocol graveyard. ISPs do not have to go to that trouble. Most of their customers have the ISP's resolver addresses in their DNS settings, not a loopback address.

And although the software is available, almost no ISP customers are encrypting their DNS packets.

DNS data for the public www is public information like the telephone book. It is easy to obtain. And once you have it, not only can you monitor changes, but there's little need to even run a cache.

You can just pluck out the names you need and plug them into your own authoritative server. Edit resolv.conf to point to it and you're done.

Easy way to speed up your www browsing and still very effective at blocking ads, apps phoning home, etc.
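
A minimal sketch of that snapshot step, assuming you just want hosts-file style lines to feed a local authoritative server (the names here are only examples; blocked names are pointed at 0.0.0.0):

    import socket

    WANTED = ["news.ycombinator.com", "example.com"]   # names you actually use
    BLOCKED = ["ads.example.net"]                       # ad/telemetry hosts to null-route

    # Resolve each wanted name once and print hosts-file style lines.
    for name in WANTED:
        try:
            print(f"{socket.gethostbyname(name)}\t{name}")
        except socket.gaierror:
            print(f"# could not resolve {name}")

    # Point unwanted names at 0.0.0.0 so they never reach the real hosts.
    for name in BLOCKED:
        print(f"0.0.0.0\t{name}")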


Where does your local DNS cache get the records from? Unless you can verify the information is legit, it actually aggravates the problem. Now any MITM can poison your computer for the duration of the cache TTL, whereas before they had to maintain the MITM or poison upstream for the same effect.


If you got a free 32-bit version while the terms were "shockingly liberal" then maybe those are the terms that govern the use of that binary? I don't know for sure.

I do know in addition to changing the license they have made changes to the software since that time. The size of the binary has increased.

I worry more about Kx being acquired by some large company, maybe a competing database vendor, that cares little about software quality.


Yes, if you got the binary during that "liberal" period, you are free to use it under "liberal" terms.


Here's my tl;dr:

1. Interix/SUA subsystem was not developed by Microsoft. It was acquired from a company called Softway. It was used internally to transition Hotmail from FreeBSD to Windows. It is believed some important MS customers also made use of Interix and possibly came to rely on it.

2. How to explain MS's seeming ambivalence toward a POSIX layer on top of Windows? Idea: the Windows API is so complex (convoluted?) as to exclude competition. See the Joel On Software reference. He marvels at Windows' backwards compatibility - being able to run yesterday's software on today's computers. Yet he also admits MS strategically developed software that would not run on today's hardware, but only on tomorrow's. (Not intending to single out MS as I know other large companies in the software business did this too.)

Complexity as a defensive strategy. Who would have guessed?

Many years ago, I gave up on Windows in favor of what I perceived as a simpler, volunteer-run UNIX-like OS that was better suited to networking.

As it happens, unlike Windows, _all versions_ of this OS run reliably on most older hardware. Although it was not why I switched at the time, I have come to expect that by virtue of the UNIX-like OS, my applications will now run on older as well as current hardware. I rely on this compatibility.

Unlike Windows I can run the latest version of the OS on the older hardware.

Windows' backwards compatibility is no doubt worthy of praise; however, the above-mentioned compatibility with older hardware is more important to me than having older software run reliably on a proprietary OS that constantly requires newer hardware.

The 2004 reference Reiter cites on the "API War" suggests people buy computers based on what applications they will be able to run.

Unlike the reference, I cannot pretend to know why others buy certain computers. Personally, I buy computers based on what OS they will be able to run. Traditionally, in the days of PCs and before so-called smartphones, if you were a Windows user this was almost a non-issue. It was pre-installed everywhere.

At least with respect to so-called smartphones it appears this has begun to change. Maybe others are choosing to buy computers based on the OS the computer can run? I don't know for sure.

As for the "developers, developers, developers" and availability of applications idea, since switching to a UNIX-like OS, being able to run any applications I may need has been a given. In fact, I have come to rely on applications that will only run on a UNIX-like OS!

And now it seems MS is going to make running UNIX applications on Windows easier. Why?

As with Interix, will the reasoning behind this successor POSIX layer remain a mystery?


BTW, HN does nothing special with underscores, but matched asterisks are converted to italics.


>MS strategically developed software that would not run on today's hardware, but only on tomorrow's

what do you mean by that?


If you follow the "API war" hyperlink, it's under the heading "It's Not 1990".

When consumers are upgrading their hardware regularly as they were in the 1990's, then developers can disregard the notion of users "upgrading" their software.

Instead they can just write applications targeting new hardware. It does not have to run on older hardware.

The user will be compelled to upgrade the hardware and, in the case of Windows, by default they get the new software. The example cited was Excel versus Lotus 1-2-3.

MS also benefitted from hardware sales through agreements with the OEMs.


Code bloat. Yes, it runs like crap on today's hardware, etc.


The only problem is if one uses UNIX instead of Windows to do development.

Fortunately there are still some people doing development on UNIX and sharing their software tools.

http://cr.yp.to/highspeed/fall2006.html


No, I do not think you are being negative. Assuming you are approaching HTTP/2 from the user's perspective.

Honest opinion: HTTP/2 benefits advertising companies* like Google more than it benefits users, if in fact it even benefits users at all.

Perhaps the most oft cited "benefit" to users is encryption. PHK addresses that one.

He has a nice line in there about secrecy vs privacy. It's an important distinction.

And he reminds us that Google does not care if users are more or less unable to use self-signed certs.*

*I saw a prominent Googler refer to his employer as an "advertising company" as well as express his distaste for allowing users to choose to use self-signed certs (policy reasons).

Of course I'm assuming users want less advertising, not more.

If you are a user who wants more advertising, and you want it delivered as fast as possible, and you are not concerned about what information is being sent from your computer to the advertising companies when you use a browser... like the ones written by advertising companies or by organizations that are paid by advertising companies, then please disregard my opinion.

I'm biased. I want less advertising, not more.

Plus I have been a long time HTTP/1.1 pipelining user, using a simple HTTP client. I use pipelining to download 100 or more pages of HTML or plain text at a time over a single connection, not 100 ads or other junk I do not need. All in a fraction of the time that it takes one of today's bloated www pages to load in a "modern browser".
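
For anyone curious, here is a bare-bones sketch of what HTTP/1.1 pipelining over a single connection looks like; the host and paths are just examples, and the raw responses are dumped unparsed:

    import socket

    HOST = "example.com"
    PATHS = ["/", "/index.html", "/about"]   # imagine 100 of these

    # Build all the requests up front; only the last one closes the connection.
    requests = []
    for i, path in enumerate(PATHS):
        last = i == len(PATHS) - 1
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            f"Connection: {'close' if last else 'keep-alive'}\r\n\r\n"
        )

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall("".join(requests).encode())   # send every request before reading
        chunks = []
        while data := sock.recv(65536):
            chunks.append(data)

    # Responses arrive back to back, in request order.
    print(b"".join(chunks).decode(errors="replace")[:2000])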

Whatever the "benefits" to users that HTTP/2 proponents have cited, I have found they can be easily countered or dismissed.

HTTP/2 benefits advertisers, no doubt. What else would you expect when the draft spec was written by an advertising company?

It's enticing to think about the successor to HTTP/1.1 and what could be improved. But as a _user_ this version of "HTTP/2" is not what you want.


I see it benefiting web applications a whole lot more than advertisers?


Don't you see? Web applications fall in the bucket of "today's bloated www pages", which are to be ignored. Real users only care about navigating using Lynx on a typewriter.


Assumption 1: the primary customers of increasingly expensive subscriptions from academic publishers are universities.

Assumption 2: the cost of the subscriptions is passed on to the primary customers of the universities -- students.

Assumption 3: assumptions 1 and 2 are correct.

Question: Is there any correlation to be made between the profitability of such academic publishers with the astounding rise in university tuition in the US over the past three decades?


Who cares about the US? The collective knowledge of humanity is at stake. The keepers are restless, believe me. And most of them do not even speak English.


As much as it is a broken system, in this case likely no. The actual subscriptions are cost-prohibitive pretty much only to individuals and to organizations outside western academia. The distinction of "western" is only relevant in that, from what I recall working with librarians at my last university job, the publishers do business differently with the US and parts of the EU than they do elsewhere in the world.

Rising tuition costs are an issue, but in relation to an academic budget, journal subscriptions will still account for a very small percentage of the overall budget, even for a relatively small university.

The Publisher/University subscription loop, however, is indeed real and has been covered here before I believe. The short of it is that quite a few publishers are basically held together by taxpayer money in the form of continued library subscriptions. The need for a middleman in these cases is pretty questionable, as the publishers basically just host the content at this point and do no curation or peer review themselves.


Assumption two seems dubious, because universities in the rest of the world pay for the same subscriptions, but don't pass these costs on to their students. It may be true for the US though.


> Assumption 2: the cost of the subscriptions is passed on to the primary customers of the universities -- students.

This is not likely to be true. Most universities in the US take overheads on grants. So if you get a $20,000 grant, the university will often take up to $12,000 of that (usually it's in the $6000–8000 range), and the researcher uses the rest. That money in overheads is used for a variety of things, including supporting the research (i.e., paying for journal subscriptions).

> Question: Is there any correlation to be made between the profitability of such academic publishers with the astounding rise in university tuition in the US over the past three decades?

Journals are expensive, but they're not _that_ expensive. It's much more plausible that the rise in tuition can be linked to non-academic expenditures at universities, including athletic facilities (e.g., gyms for students), fancy dining halls and dorms, and non-academic staff. Maybe one could argue those aren't the drivers of high tuition, but I'd be surprised if a rise in journal costs were remotely comparable.


> Journals are expensive, but they're not _that_ expensive.

https://mako.cc/fun/overpricetags/

Would $25,888/yr for a single journal not count as '_that_ expensive'?

But I agree they're not likely a tuition driver. After all, institutions can simply drop subscriptions: they don't do much for attracting students compared to facilities.


> Would $25,888/yr for a single journal not count as '_that_ expensive'?

For a journal cost, $25,888/yr is expensive, but no, it wouldn't count as "_that_ expensive". I meant "_that_ expensive" as "expensive enough to be a tuition driver".


Presumably some, but from what I read, much of the increase in tuition is tied to increases in administration costs.


While the cost for access to databases is quite high, it probably amounts to only a few dollars per student at most.


Which is interesting in itself, because it means that they could offer affordable personal subscriptions without losing money.

So in fact huge journal subscription fees become a way to trap academic research inside these institutions, rather than just being journal greed.


I became an admirer of this man's software aesthetic when I first discovered and compiled libero.

In my opinion, this is a great loss to the world of programming.


tinysshd does not have the 90's crypto by default. It is optional. Nor does it have the "extreme complexity".

