I run a company that provides sysadmin consultancy, and we deal with problems like this on a pretty regular basis. We're not on your continent, but would still be happy to help.
This doesn't sound like a loan to me - it sounds like a contract where the party buying equity is given priority on dividend payments, which as I understand it is reasonably common.
Of course, the money not actually appearing is a different matter.
We're currently using entp's tenderapp, and I'm not particularly impressed so I'll definitely be giving you a try.
Some of the problems we've had with tenderapp:
* Customers' emails getting flagged as spam even though spam checking was supposed to be disabled
* Customers' emails just 'vanishing' (I've been unable to verify this one, on account of there being no trace of the emails whatsoever)
* Attachments not working
* Attachments taking forever to be accessible when they are working
I think a per support staff user charge is a good idea - it's simple and straightforward.
Thanks! We're working on getting attachments working by next week. I've heard that others have problems with Tender too — hopefully we'll be able to fix those.
Also they're using non-ECC RAM (the processors they use don't even support ECC), which is much cheaper and also a very bad idea for anything you care about. Data corruption isn't fun, especially when it's subtle and has existed long enough for even your 90 day old backups to have it.
I wonder what the real-world failure/corruption rates are on ECC vs. non-ECC RAM. Does anyone know of any studies? I'd like to know if the differences are practical or theoretical.
I didn't find this to be the case at all. I plugged the optical out on the motherboard into the optical in on the amp, set MythTV to let the amp decode DD/DTS itself, and everything worked fine.
I agree, nginx seems completely superfluous here. Actually, it's worse than superfluous - the way it's configured, nginx seems to be caching dynamic files but never expiring them.
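For comparison, this is roughly the kind of expiry configuration nginx would need to avoid serving stale dynamic content indefinitely. The paths, zone name, and upstream address below are illustrative, not taken from the setup being discussed:

```nginx
# Illustrative only: cache dynamic responses briefly and let them expire.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=10m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache app_cache;
        # Without proxy_cache_valid (or cache-control headers from the
        # upstream), entries can be served far longer than intended.
        proxy_cache_valid 200 302 10s;
        proxy_cache_valid 404     1m;
    }
}
```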
A reasonable summary, but one that demonstrates some misunderstandings of the Linux world. To deal with the issues point by point:
* FreeBSD has better performance than Linux? This is, as you almost concede, no longer true.
* Lack of kernel modules an advantage? Hardly. Linux loadable kernel modules produce no extra performance overhead, but make life a lot simpler for everyone. The only extra 'bloat' is having ~100MB of drivers of which you probably only need 10% or so. If this is an issue for you (i.e., you're running an embedded system) you can of course cut down the modules to only those required, or even compile them into the kernel just as with FreeBSD.
* The GCC compiler and toolchain aren't optimal, and I fully expect a major Linux distribution to switch to Clang/LLVM in the not too distant future. If this is a success (and I expect it to be), I wouldn't be at all surprised to see most/all Linux distros switch. This isn't a Linux vs FreeBSD issue, this is GCC vs Clang/LLVM - a battle I believe Clang/LLVM will win.
* As far as I'm aware TrustedBSD offers no more features than SELinux, and the Linux ACL system, but maybe I'm showing my lack of BSD knowledge here.
* Xen and VMware virtualization are there in FreeBSD, but not 100%. They're rock solid on Linux, which also has KVM, a very viable alternative to the two.
* The development branch/stable kernel approach is irrelevant - people don't use Linux, they use a Linux distribution (OK, OK, a GNU/Linux distribution :)). In most distributions point releases are binary and API compatible with .0 releases (RHEL 5.5 binaries are guaranteed to run on RHEL 5.0).
In conclusion, is FreeBSD good? Undoubtedly. Is it better than Linux? It's hard to make a case for that. It has less device support, less software support and a massively smaller user base.
FreeBSD is nice to tinker with, but it doesn't offer enough benefits to justify putting production software on it. It's great to learn about the old school UNIX way of doing things, but that's not what I want to be doing on a live server.
>The GCC compiler and toolchain aren't optimal, and I fully expect a major Linux distribution to switch to Clang/LLVM in the not too distant future.
Have you looked through the Linux kernel code? There is a lot of gcc specific stuff in there. I don't think the Clang change will be as fast as you think.
It could actually be quite bad - maintaining a large featureset like this in a fork would probably cause major problems. Fortunately GCC compatibility is listed as one of the official goals of Clang.
* That point about performance was more about FreeBSD being first with a lot of performance and security improvements. As I said, this argument was a lot easier 5-10 years ago but doesn't apply as much anymore.
* The mono vs micro kernel debate between Linux and BSD doesn't apply as much anymore since they both support both. With Linux it varies a lot between distro vendors. Vendor kernels also tend to lag behind the main branch (at least this is my experience, mainly with Debian and Fedora). I find that FreeBSD, as a distro, allows easier kernel optimization especially since the default kernel is tight. Again, this is something that on the Linux side of the argument is the responsibility of the vendors, who approach the issue by providing targeted server and desktop distributions (most do). You are right though that there is very little performance difference between micro vs mono (see the classic debate from when Linux was announced: http://oreilly.com/catalog/opensources/book/appa.html). I have always preferred only having code on the server that is required by the system (code coverage), and the FreeBSD way of doing this is managed better than any of the Linux distros, IMO.
* As somebody mentioned below, the Linux kernel is very much tied to GCC and its toolchain. It is so tied to it that the Intel compiler uses the Linux kernel as a test for its GCC compatibility mode.
* A BSD port of SELinux is actually part of TrustedBSD, which has in turn been ported to FreeBSD. There are a bunch of other things that TrustedBSD entails, I can't name them off the top of my head atm, but both SELinux and TrustedBSD line up with the Orange Book.
* You are correct that the stable/release cycle of Linux is more up to each distribution, but since there is a step between kernel and distro (which FreeBSD as a total operating system doesn't have) there is a lag there, and a risk of support ceasing (or patches/updates/support suddenly becoming a commercial service, as it did at Red Hat)
In the end it depends on how you define 'better'. It is tiresome to enter debates about benchmarks and features; it comes down to what you are most comfortable with. If you want to hack at a UNIX operating system that is very neat and cool, FreeBSD is a very good choice. If you like knowing what every file and every command does on your system, and prefer a UNIX-like uniformity in how things are done, then FreeBSD is again a good choice.
I would thoroughly recommend all hackers try out FreeBSD and become familiar with it. It will alert you to why some things are done the way they are, and there is a very deep history in that operating system, so at times it feels like opening a time capsule.
Add to that, if you need an OS for commercial purposes, FreeBSD is free as in 'do whatever you want', which can also be an advantage.
Red Hat concentrated on the enterprise. They certainly haven't abandoned it - they bought Qumranet as much for their desktop virtualization solution as the KVM technology.
These numbers show that Red Hat did the right thing too - they're a highly profitable company, and they're bringing massive value to the community. I really don't understand why you seem so aggrieved at a company which contributes so much.
They are highly profitable? Thank god. I don't know enough about them as a company, but it is really good to know they aren't teetering on the brink of destruction. They are currently in my list of companies I really want to stick around.
I had some investments in the company for a while. It seems to be very well managed. Their revenues have shown decent growth over the years I've followed it.
The main issue with using DNS for failover is that the name servers of many major ISPs don't respect TTL values, particularly those under 3600 seconds.
As such, when a failover occurs a site will be unavailable for many viewers for several hours and possibly even days, making the solution anything but highly available.
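To make the failure mode concrete, here is a small Python sketch (my own illustration, not anyone's production code) of what a well-behaved resolver cache does with a TTL, versus what a resolver that enforces a minimum TTL effectively does:

```python
class CachedRecord:
    """A resolved A record plus the time it was cached and its TTL (seconds)."""
    def __init__(self, ip, ttl, cached_at):
        self.ip = ip
        self.ttl = ttl
        self.cached_at = cached_at

def is_fresh(record, now, min_ttl=0):
    """A well-behaved cache re-resolves once the record's TTL expires.
    A resolver that clamps TTLs to some floor (min_ttl > 0) keeps
    serving the stale IP for longer, which is what breaks DNS failover."""
    effective_ttl = max(record.ttl, min_ttl)
    return (now - record.cached_at) < effective_ttl

# A record with a 300-second TTL, cached an hour ago:
rec = CachedRecord("192.0.2.10", ttl=300, cached_at=0)

print(is_fresh(rec, now=3600))                 # honest cache: False, re-resolves
print(is_fresh(rec, now=3600, min_ttl=7200))   # clamping cache: True, stays stale
```

With a 7200-second floor, clients keep getting the dead IP for up to two hours after the failover, regardless of the 300-second TTL the operator published.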
Hi forkqueue, DNS pinning is indeed an issue, the scope of which this experiment will help identify. However, this would only affect those with an existing record cached for an unavailable node; no new requests would be directed to that node.

Further, a Mammatus cloud can also be used as a back-end service for another front-end service, which can evade the cache by prepending a random string or timestamp to the subdomain, i.e. 123.mammatus.thimbl.net. Mammatus ignores subdomains of its subdomain, so this works. Also, research into DNS rebinding attacks shows that browser-based pinning drops its cache when it gets a failed request, so a re-request would also work after failing once.

Mammatus is not meant to replace all other HA techniques. Certainly for organizations with the appropriate budget, there are potentially more robust options for front-end systems. For many situations, this is a good technique, considering how inexpensive it is. Also, this same issue exists for systems like dynamic DNS, which remain quite popular in many use-cases.
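The cache-evasion trick of prepending a timestamp is simple to sketch. The hostname here is just the example from the comment above; the helper function is my own illustration:

```python
import time

def cache_busting_hostname(base="mammatus.thimbl.net"):
    """Prepend the current Unix timestamp so no intermediate resolver
    has a cached entry for the name. A service that ignores subdomains
    of its subdomain will still answer for it."""
    return "%d.%s" % (int(time.time()), base)

# Each call yields a name like 1300000000.mammatus.thimbl.net,
# fresh enough that caches can't have pinned it to a dead node.
print(cache_busting_hostname())
```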
Maybe it's about the definition of high availability. If you need 99.999999% uptime, a DNS-based solution will not work.
But even Google, Yahoo and others use a short TTL to switch between their datacenters… So this should be "reliable enough" for nearly everything out there.
$ dig www.yahoo.com
;; QUESTION SECTION:
;www.yahoo.com. IN A
;; ANSWER SECTION:
www.yahoo.com. 48 IN CNAME fp.wg1.b.yahoo.com.
fp.wg1.b.yahoo.com. 2711 IN CNAME eu-fp.wa1.b.yahoo.com.
>The main issue with using DNS for failover is that the name servers of many major ISPs don't respect TTL values, particularly those under 3600 seconds.
I've seen this repeated in many places but never with any concrete examples. This leads me to wonder just how widespread the practice is — can you substantiate the claim?
I don't know much about ISP DNS configurations, but the problem extends to clients as well. For example, until Java 1.6, the default TTL for DNS lookups was forever.
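For anyone hitting this on the JVM, the knob involved is the `networkaddress.cache.ttl` security property. The values below are an illustration of overriding it, not a recommendation:

```
# java.security (illustrative values): seconds to cache successful and
# failed DNS lookups in the JVM. A value of -1 means "cache forever".
networkaddress.cache.ttl=60
networkaddress.cache.negative.ttl=10
```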
Just to be totally clear — you've explicitly done a lookup against their resolvers for a record that is not in flux and received a TTL higher than what the authoritative servers give out? Any chance you could post their name server addresses?
No, I've altered DNS and seen many thousands of connections continuing from these providers even after low value (300) TTL records have been changed several hours previously. I've observed this on a number of occasions.
In the case of Virgin Media, they have transparent proxy caching on some parts of the network I believe, so it's possible this caching was happening there rather than on the DNS servers themselves.
Well, there's not much that can be done about caching proxies. Regarding DNS, have you been able to look at whether the clients behind the failed requests show up on the new IP shortly after? I.e., perhaps browser pinning from open sessions?
Agreed. A better solution is to have all nameservers always return 2 A records in response to queries for a specific domain name. That way the browser will itself be responsible for trying to connect to each of the IPs to determine which one is alive (then with DNS pinning it will stick to the working one). And it does not matter if an intermediate DNS caching resolver ignores the TTL.
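The client behaviour described above (try each A record in turn, then stick to the one that answers) can be sketched as below. The probe function is injected so the example is self-contained and deterministic, rather than doing real network I/O; a real browser would attempt TCP connections to each address:

```python
def first_alive(ips, probe):
    """Try each IP in order, as a browser does when a name resolves to
    multiple A records, and 'pin' to the first one that responds."""
    for ip in ips:
        if probe(ip):
            return ip
    return None  # every address is down

# Simulated probe: only the second address is up.
alive = {"192.0.2.20"}
ips = ["192.0.2.10", "192.0.2.20"]

print(first_alive(ips, probe=lambda ip: ip in alive))  # 192.0.2.20
```

The point of the approach is visible here: even if a caching resolver serves this answer long past its TTL, the client still ends up on a live address, because liveness is decided at connect time rather than at resolve time.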
Of course the ultimate solution for HA is to implement anycast with BGP...
Hi mrb, this is just the first release. Mammatus will return 2 A records if the endpoint domain does, so that will be built in. Of course anycast is a great option if the organisation has the option of getting an ASN, running BGP, etc. If that is not an option, Mammatus is a reasonable alternative.
http://bashton.com/
I very much doubt it would be possible to resolve the problem without actually logging onto the server and looking at what's happening.