
Part of the reason small keys like this are in common use is that BIND (and possibly other name daemons) doesn't allow a single string in a TXT record to be longer than 255 characters, and DKIM requires one to put the public key in DNS.

Earlier this week I set up DKIM, and initially tried and failed to use a 2048-bit key, because of this issue.


  $ openssl genrsa -out private.key 1024
  $ openssl rsa -in private.key -out public.key -pubout -outform PEM
  writing RSA key
  $ grep -v '^--' public.key | wc -c
  220
So a 1024-bit key should be ok.
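For what it's worth, the 255-character limit applies to a single TXT string, not the whole record: RFC 1035 lets one TXT record carry several strings, and DKIM verifiers concatenate them, so a 2048-bit key can still be published. A sketch (the record name is made up):

```shell
# Sketch: publish a 2048-bit DKIM key despite the 255-octet string
# limit, by splitting the base64 blob across several quoted strings
# in one TXT record. The record name below is made up.
openssl genrsa -out dkim.key 2048 2>/dev/null
openssl rsa -in dkim.key -pubout -outform PEM 2>/dev/null |
  grep -v '^--' | tr -d '\n' > dkim.pub   # base64 key on a single line
echo >> dkim.pub                          # trailing newline for fold
printf 'mail._domainkey IN TXT ( "v=DKIM1; k=rsa; p="\n'
fold -w 250 dkim.pub | sed 's/.*/    "&"/'   # each string <= 255 octets
printf '    )\n'
```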


It's not specific to BIND:

RFC 1035 says the following:

  2.3.4. Size limits
  
  Various objects and parameters in the DNS have size
  limits.  They are listed below.  Some could be easily
  changed, others are more fundamental.
  
  labels          63 octets or less
  
  names           255 octets or less
  
  TTL             positive values of a signed 32 bit number.
  
  UDP messages    512 octets or less
Although if the response to a query doesn't fit, the TC bit is supposed to be set and clients should retry over TCP. This is rare enough in practice, though, that not all implementations bother.
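The truncation rule above can be sketched as a toy decision; the numbers are illustrative, on the assumption that an answer carrying a 2048-bit DKIM key plus headers can exceed 512 octets:

```shell
# The call a server makes for a plain (non-EDNS) UDP client, per the
# RFC 1035 limit quoted above. Sizes here are made-up illustrations.
response_size=560   # hypothetical answer carrying a 2048-bit DKIM key
udp_limit=512       # RFC 1035 maximum for UDP messages
if [ "$response_size" -gt "$udp_limit" ]; then
  echo "TC=1: answer truncated, client should retry over TCP"
else
  echo "answer fits in UDP"
fi
```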


I hear it's solved all their scaling problems at one fell swoop ;)


I think they swapped my account onto the old servers Gabe was on, because whilst mine was fast when I read the article, suddenly it's very slow (as in, 30 seconds to send an email slow).

I'd love to use Google Apps for my business; all our employees already have Android phones, so from that point of view it's a great idea. Things like this are what's stopping me from actually doing it.

That and I don't know if I could stand to be parted from mutt.


I'm not sure the hardware is so custom - the pictures on the Clustrix site look very much like generic Supermicro x86 servers with a Clustrix logo stuck on them.


That's not quite what it says - the one that wasn't covered by the most recent audit isn't a candidate for removal until 1/1/2012 at the earliest. The other certificate is completely unknown.

This thread gives more context:

http://groups.google.com/group/mozilla.dev.security.policy/b...


That's a bit more information, but the headline here at HN is still overblown; we're down to one unidentified root certificate that was probably issued by RSA, but for which no records can be found. It's most probably attributable to incompetence and poor recordkeeping rather than a malicious compromise of the whole PKI.


If I were going to sneak in my own root cert, I would give it a name and date very similar to an existing one.


And wasn't there some discussion in the last week or two about how easily you could impersonate anybody's valid SSL cert if you could get hold of a real root cert? (Something about browsers not notifying users that a previously seen cert is now authenticating via a different root?)


I am not a crypto expert so correct me if I'm wrong, but as I understand it, anyone with a root key necessarily can subvert the entire system in a straightforward way.
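You're right. A sketch of the attack with openssl (all names here are invented): anyone holding a trusted root's private key can sign a certificate for any hostname, and every client that trusts that root will accept it.

```shell
# Sketch of why a root private key subverts the whole system: with it,
# you can mint a "valid" certificate for any hostname. Names invented.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=Some Trusted Root CA" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
  -subj "/CN=www.example.com" 2>/dev/null
openssl x509 -req -in site.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out site.crt -days 1 2>/dev/null
# Any client trusting root.crt now accepts site.crt for www.example.com:
openssl verify -CAfile root.crt site.crt
```

The final verify succeeds because trust is rooted entirely in who holds root.key; nothing constrains what that key signs.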


From that thread:

Both "RSA Security 1024 V3" and "RSA Security 2048 V3" are shown as valid in Apple's System Roots.

Microsoft's list includes "RSA Security 2048 V3", but not "RSA Security 1024 V3".

I'm glad there's more transparency about what is added to the NSS certificate store nowadays.


There's a good chance $30 pricing will be at least half as popular as $10.


SystemTap is great for general system issues, but isn't nearly as good when trying to look at issues that occur inside the JVM.


PS. Check out http://fedoraproject.org/wiki/Features/SystemtapStaticProbes for info on current Java status (which seems to be more developed than we think).


True. I believe they're adding providers to the major VMs though (I mainly work in Python so I focus on the CPython VM, but I have heard of some early JVM introspection work too).


Yup, DTrace and ZFS are pretty much the only things Solaris brings to the table above what Linux offers.

Interestingly I've never come across any pure Solaris shops that run ZFS, I suppose because if you're conservative enough only to use Solaris, you're too conservative to trust ZFS.


Zones too. I think they offer more than what the usual jail does. I've never really used either of them, but... yeah, zones sound nice, too :)

Also, Solaris is probably the most stable OS I've ever worked with.


Solaris zones are awful, and a good example of where Linux is far superior to Solaris. The creators of Zones seemed to have user-mode-Linux as their model, and it shares most of the annoyances of that too. It's not virtualization, and it's not a jail - it's a middle ground which is good for neither.

Agree with you to some extent on stability - I've seen Solaris boxes with a load of 300+ still up and running although unusably slow, where Linux would almost certainly have died from resource starvation. In general use, both Linux and Solaris are pretty damn reliable though.


What's so bad about zones? It's clear that they're not virtualization. As you said, it's a middle ground, more capable than jails but not fully virtual, which is fine if you don't need that.


I'd add SMF (unless you know of something similar that's built into Linux?).


What does that get me over Upstart or the other event-driven init systems?

(This is a question of genuine curiosity. SMF has led to many headaches when I've had to deal with it, simply because it didn't seem to comply with normal conventions like returning nonzero for failure or actually printing what goes wrong to the console.)

EDIT: add trailing parenthesis.


XML based config files!

Just kidding.


DTrace and ZFS are both available on FreeBSD.

Call me a BSD bigot (even though I use Linux/OS X at work) but I really fail to see any rational reason to use Solaris at this point.

We have a Solaris database + filestore at work, mostly for RAID-Z, but now that FreeBSD has ZFS, when it gets EOL'd (with a vicious grin on my face) it's going to be a BSD box that replaces it.


A couple of points for those thinking about doing something similar:

"Since InnoDB stores the table in the primary key, I decided that rather than use an auto_increment column, I'd cover several columns with the primary key to guarantee uniqueness. This had the added advantage that if the same record was inserted more than once, it would not result in duplicates."

The 'correct' way to deal with this in MySQL is to use the auto_increment_increment and auto_increment_offset server variables.

http://dev.mysql.com/doc/refman/5.0/en/server-system-variabl...

Of course, the real difficulty with multi-master setups split across data centres isn't ensuring uniqueness of primary keys, it's ensuring data integrity under a split-brain scenario, i.e. where one server can't reach the other, but users can reach one or the other. UPDATEs and DELETEs to rows can then become extremely difficult to merge back together. This wasn't a problem for this application, but as others have commented, this use case probably wasn't best suited for an RDBMS anyway.
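For reference, a minimal sketch of that approach, as hypothetical my.cnf fragments for a two-master pair: each server steps its auto_increment values by the number of masters and starts at its own offset, so the two can never generate the same id.

```ini
# master 1 (my.cnf)
[mysqld]
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # this master generates ids 1, 3, 5, ...

# master 2 (my.cnf)
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2   # this master generates ids 2, 4, 6, ...
```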


With an autoincrement id, duplicate rows may get inserted. Having a primary key derived from the data results in duplicate rows getting discarded (this is important).

Secondly, the autoincrement id adds 4 bytes to each row which are never used for anything. Only use a surrogate id if you need to reference a row from another table.


duplicate rows may get inserted

Make that duplicate rows WILL get inserted -- even if only due to network glitches causing connections to die after the database adds the row but before the client receives the acknowledgement, resulting in the client retrying the request. Unless you don't retry failures, in which case you lose rows instead, of course.
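A small demonstration of that retry scenario, using sqlite3 for brevity (MySQL's INSERT IGNORE against a natural composite key behaves the same way; table and column names here are made up):

```shell
# With a natural composite primary key, a client retrying an insert
# after a dropped connection is a no-op, not a duplicate row.
sqlite3 retry.db <<'SQL'
CREATE TABLE samples (host TEXT, ts INTEGER, metric TEXT, value REAL,
                      PRIMARY KEY (host, ts, metric));
INSERT OR IGNORE INTO samples VALUES ('web1', 1300000000, 'load', 0.42);
INSERT OR IGNORE INTO samples VALUES ('web1', 1300000000, 'load', 0.42);
SELECT count(*) FROM samples;
SQL
```

The final count is 1: the retried insert was discarded, where an autoincrement key would have produced two rows.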


That's a good warning sign that the data isn't relational in the first place. An RDBMS would probably be a good fit for the analyzed data (which is likely to have explicit relations), though.


No it's not. There are plenty of tables that use composite keys that also contain foreign keys.

The use of ORMs like ActiveRecord, many of which choke on natural keys, has turned a lot of devs into automatons for artificial key creation. Natural keys are often superior.


Duplicity is great for off-site backups: full encryption, and it supports a wide variety of destinations (Amazon S3, FTP, etc.).

