
Love this. I've got an old Pentium box I thought I was going to have to hack up the BIOS to get larger IDE disks working, but don't you know it the motherboard manufacturer (AOpen) still had the BIOS update files on their site and after updating I could boot 32 GB disks on a board that only claims to support up to 8 GB. I think the lesson is, at least check for BIOS updates before you throw in the towel :) Maybe not everyone needs to go as far as hacking ACPI tables in memory, but it sure sounds like fun.


In that era, you could often just ensure that the BIOS could load the bootloader/kernel by placing it in the region still reachable by the BIOS. OSes like Linux (and anything Windows NT-based) bypassed[1] the BIOS almost entirely anyway, and their own IDE drivers were recent enough to know how to address the full disk.

It was not uncommon to have a small boot partition at the beginning of the disk for that purpose.

[1] Bit of a simplification. In reality, the BIOS being 16-bit real-mode code meant that you had to jump through very elaborate hoops if you wanted to use it from your protected-mode OS in any way, and then for questionable gain.
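The ~8 GB ceiling discussed in this thread falls straight out of the CHS geometry the legacy INT 13h interface exposes. A quick back-of-the-envelope sketch (using the common 255-head translation, since the full 256 heads famously broke some DOS-era software):

```python
# Classic INT 13h CHS addressing limits: 10-bit cylinder, 8-bit head
# (255 usable in practice), 6-bit sector (numbered 1-63).
CYLINDERS = 1024
HEADS = 255
SECTORS = 63
SECTOR_SIZE = 512

limit_bytes = CYLINDERS * HEADS * SECTORS * SECTOR_SIZE
print(limit_bytes)                    # 8422686720
print(round(limit_bytes / 10**9, 2))  # 8.42 "marketing" gigabytes
```

Hence the folklore "8 GB limit": any sector past that point simply can't be named through the old CHS interface, which is why the bootloader and kernel had to live below it.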


Oh yeah, definitely. I could get it to boot by marking the disk as 8 GB in BIOS, but this had side effects:

- I'm using CompactFlash as many do, since it's fast, cheap, and reliable. Some of my CF cards are smaller than 8 GB, and if I hard code it I can't boot with those.

- I actually do run OSes on that machine that thunk out to the BIOS (Win98, for example). If I really wanted a computer for useful things, no doubt I'd run Linux on it, but all the same, I've got plenty of smaller computers with far more power (even a RPi is much faster.)

(FWIW: autodetect properly detected the right parameters, it just locked up on boot. My guess is it was some kind of simple integer overflow bug or something.)


> It was not uncommon to have a small boot partition at the beginning of the disk for that purpose.

Still had to do that a few weeks ago: on a Dell R720, GRUB did not want to boot my ZFS-on-Linux rootfs on a 6 TB pool. It's either the Dell HBA controller, GRUB, or some other limitation, but once you go beyond 2 TB or 4 TB disks I always run into strange behavior.

Maybe UEFI solves that, I don't know.


EFI doesn't solve it directly, but GPT does. Regular MBR can't handle disks larger than 2 TB. Of course, when running EFI you always want to use GPT. Linux might be able to handle a >2 TB MBR disk somehow, but in doing so it might be confusing your BIOS.
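The 2 TB figure isn't arbitrary: MBR partition entries store the start LBA and sector count as 32-bit values, so with 512-byte sectors the addressable range tops out at exactly 2 TiB. A minimal sketch:

```python
# MBR stores start LBA and sector count in 32-bit fields, so with
# 512-byte logical sectors the largest addressable range is 2 TiB.
SECTOR_SIZE = 512
max_sectors = 2**32

limit_bytes = max_sectors * SECTOR_SIZE
print(limit_bytes)            # 2199023255552
print(limit_bytes / 2**40)    # 2.0 (TiB)
```

(4K-native drives sidestep this by making each sector eight times larger, but for ordinary 512-byte-sector disks GPT, with its 64-bit LBAs, is the real fix.)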


Hard drive makers used to include a disk in the box that would fix these limitations.

Western Digital called their program EZBIOS, see if you can scrape a copy of that up somewhere.


Actually, the BIOS would happily let me enter the parameters, it just locked up when iterating the disks if there were too many cylinders.
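The overflow guess is plausible. Purely as an illustration (the actual bug in that BIOS is unknown), here's how the cylinder count for a 32 GB disk under a 16-head/63-sector translation blows far past what a 10-bit cylinder register can hold, silently wrapping if the firmware doesn't clamp it:

```python
# Speculative illustration of the "too many cylinders" theory:
# a 32 GB disk with a 16-head / 63-sector geometry needs ~62k cylinders,
# way beyond the 1024 a 10-bit BIOS cylinder field can represent.
GB = 10**9
disk_bytes = 32 * GB
heads, sectors, sector_size = 16, 63, 512

cylinders = disk_bytes // (heads * sectors * sector_size)
print(cylinders)          # 62003
print(cylinders % 1024)   # 563 -- what a wrapped 10-bit field would see
```

A BIOS that iterates cylinders using a wrapped or truncated count like that could easily loop forever or walk off the end of a table, which would look exactly like a lock-up at boot.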


Wow, I remember those disks! I don’t think I ever had to actually use one.


I remember running into this back in the day on one of my Windows NT boxes. Knowing that NT only uses the BIOS during the boot process, I just installed NT on a partition that was below the 8 GB limit. Once the system booted, the HAL communicated with the hardware directly and not through the BIOS, so it saw the full size of the drive, and I was able to create a second partition to use the rest of it.


Yep, this was one of the reasons that a boot partition for Linux and other operating systems became so common back in the early 2000s. Disks outpaced the hardware, and a lot of BIOSes didn't support reading from further in on the disk. Operating systems could use their own drivers to talk directly to the hardware and get around that limitation once booted, but everything they needed up to that point had to be reachable early on.



