I was doing an update and it said that the drive was full.
Here is df -h:
Filesystem Size Used Avail Use% Mounted on
78G 2.7G 72G 4% /
none 242M 184K 242M 1% /dev
none 247M 0 247M 0% /dev/shm
none 247M 48K 247M 1% /var/run
none 247M 0 247M 0% /var/lock
none 247M 0 247M 0% /lib/init/rw
/dev/sda1 228M 225M 0 100% /boot
Here is fdisk -l:
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00035711
Device Boot Start End Blocks Id System
/dev/sda1 * 1 32 249855 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 32 1045 8136705 5 Extended
/dev/sda3 1045 10444 75498496 83 Linux
/dev/sda5 32 1045 8136704 8e Linux LVM
Here is mount:
/dev/mapper/sprintsftp-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/dev/sda1 on /boot type ext2 (rw)
Here is /etc/fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
/dev/mapper/machine-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=08ddfaa4-0da2-405e-95b2-b228a95dc761 /boot ext2 defaults $
/dev/mapper/machine-swap_1 none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
How can I fix /dev/sda1 being mounted on /boot?
72 Answers
As I said in my comment, perhaps your only issue is a full /boot partition. Since you're using LVM, the output you posted from fstab, mount, etc. looks fine.
And the best way to free space in /boot is to remove old kernel versions. If you're new to this, I suggest using Software Center:
Search for "linux-image". Take note of the most recent one (your current kernel), and delete all the previous ones. Only remove the ones with version numbers like linux-image-3.2.0-xx-server; do not delete the "main" linux-image-server or linux-image. Be sure to keep your current version number.
Do the same with "linux-headers".
You're good to go!
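If you'd rather do this from a terminal than Software Center, here's a rough sketch. The purge line is commented out, and the 3.2.0-20 version number is only a placeholder -- check your own list first:

```shell
# The running kernel's package is "linux-image-$(uname -r)" -- this is the one to keep.
current="linux-image-$(uname -r)"
echo "Keep: $current"

# List every installed kernel image package ("ii" means installed).
# The meta-packages linux-image and linux-image-server must be kept too.
dpkg -l 'linux-image*' 2>/dev/null | grep '^ii' || true

# Example purge of one OLD version (placeholder number -- substitute your own):
# sudo apt-get purge linux-image-3.2.0-20-server linux-headers-3.2.0-20-server
```

Using purge rather than remove also clears the package's config files, which is what you want for old kernels.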
Also, as a side note: you may want to reconsider administering an Ubuntu Server if you still need tutorials for managing disk space and kernel versions. Have you tried Ubuntu Desktop?
It seems you have two issues.
Your /boot partition is full - this happens if you update the kernel enough times and don't remove the old ones. It's always a good idea to keep at least one old kernel in case you find something that doesn't work with an updated one, but you don't need more than three. Do you boot with GRUB? How many kernel options do you get?
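If you'd rather not reboot to count them, each bootable kernel gets a menuentry line in GRUB 2's config (assuming your system has /boot/grub/grub.cfg):

```shell
# Count GRUB 2 menu entries; falls back to a message if grub.cfg is absent.
grep -c '^menuentry' /boot/grub/grub.cfg 2>/dev/null || echo "no /boot/grub/grub.cfg found"
```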
There is no device shown for your root filesystem (i.e. the top entry, "/"). It shows that you have an ~80GB partition, but not which device it's on. Here's mine:
df -h:
kevin@nx-6325:# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 20G 9.6G 8.8G 53% /
none 434M 280K 434M 1% /dev
none 438M 252K 438M 1% /dev/shm
none 438M 208K 438M 1% /var/run
none 438M 0 438M 0% /var/lock
none 438M 0 438M 0% /lib/init/rw
/dev/sda2 244M 197M 35M 86% /boot
kevin@nx-6325:#
As you can see, my root filesystem is on /dev/sda5. Also note that my /boot has 197M used - that's with 4 kernels, as I've been too lazy to delete the old ones!
Please post the output of
sudo fdisk -l
and also
mount
If you need help deleting old kernels, post back or Google it - it's very easy to do.
EDIT: As stated by MestreLion, it looks like your only issue is a full /boot partition, which is preventing you from upgrading the kernel. To remove all but the latest kernels, I use the script below. The original is here: and all I've done is add a couple of lines so that you are advised what will be kept and what will be deleted before it starts (you also have the option to abort if you wish). I have just run it to delete the three oldest of my five kernels, and it ran fine.
Copy all the text below and save it in your home directory as purge-kernel.sh:
#!/bin/bash
# Get a list of the kernels that are installed.
kernelList=$(cd /;ls boot/vmlinuz*)
# Make a list of the kernels to keep. These are the kernels linked to by /vmlinuz,
# /vmlinuz.old, and the currently running kernel.
keepList="$(readlink -q /vmlinuz) $(readlink -q /vmlinuz.old) boot/vmlinuz-$(uname -r)"
# Change the list of file names to list of package names.
kernelPkg=$(sed 's@boot/vmlinuz-@linux-image-@g' <<<$kernelList)
keepPkg=$(sed 's@boot/vmlinuz-@linux-image-@g' <<<$keepList)
# Create a list of packages to purge. This is the list of installed kernels with the kernels
# to keep removed.
purgePkg=${kernelPkg}
for keep in $keepPkg
do purgePkg=${purgePkg/$keep} # plain parameter expansion; eval is unnecessary here
done
purgePkg=$(echo $purgePkg) # Remove extra white space
echo -ne "\nWill keep the following kernels: $keepPkg\n\n"
echo -ne "Will remove the following kernels: $purgePkg\n\n"
read -p "Press enter to continue, <Ctrl>-C to abort..."
# If there are any kernels to remove then purge them and update grub;
if [ -n "${purgePkg}" ]
then
tmpfile=$(mktemp)
chmod +x $tmpfile
echo "dpkg --purge ${purgePkg};update-grub"
echo "dpkg --purge ${purgePkg};update-grub" > $tmpfile
sudo -s $tmpfile
sleep 1 # following 'rm' fails otherwise.
rm -f $tmpfile
else
echo "No kernels to purge."
fi
exit
Make it executable by running
chmod +x purge-kernel.sh
Run it by opening a terminal in your home directory with:
./purge-kernel.sh
You will be prompted for your password, as the script requires root privileges.
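Afterwards, it's worth confirming the purge actually freed space. A quick check (falling back to the root filesystem if /boot isn't a separate mount on your machine):

```shell
# Re-check /boot usage after the purge -- the Use% column should have dropped.
df -h /boot 2>/dev/null || df -h /
```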