Oracle Cloud’s free tier comes with a generous allowance of VM instances and a 200GB block storage quota. However, once that storage is spread across several VMs running Docker builds and containers, the available space can run out quickly.
In this post, I will discuss several tactics to free up space in OCI so the available storage quota can be used more efficiently.
We will
- reclaim unused space in our VM’s provisioned boot volume - the most common and quickest win
- terminate other idle Compute instances, freeing up their attached storage, and expand our VM to use the freed-up space
- do some household cleaning inside the VM to remove unnecessary files
Wasted space beyond the Linux image
This is the number one issue that causes an unaware user to run out of space fast. It is also the easiest fix and will immediately unlock free space for us. So let’s start with this one first!
When I created a Compute instance and attached a 100GB boot volume to it, I was expecting the whole 100GB to be available for use. However, when I run df -h, I see only a 30G root volume:
$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ocivolume-root   30G   23G  7.1G  77% /
/dev/sda2                   2.0G  854M  1.2G  44% /boot
/dev/mapper/ocivolume-oled   15G  1.1G   14G   8% /var/oled
/dev/sda1                   100M  7.5M   93M   8% /boot/efi
Running lsblk to see the total size of the physical disk, its block devices and partitions, I see:
$ lsblk
NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0  100G  0 disk
├─sda1                 8:1    0  100M  0 part /boot/efi
├─sda2                 8:2    0    2G  0 part /boot
└─sda3                 8:3    0 44.5G  0 part
  ├─ocivolume-root   252:0    0 29.5G  0 lvm  /
  └─ocivolume-oled   252:1    0   15G  0 lvm  /var/oled
So I do have a 100 GB boot volume, but only 44.5 GB of that disk was actually partitioned and handed to LVM when the system was set up. Of that, 29.5 GB was allocated to the root / filesystem and 15 GB to /var/oled. The remaining ~55 GB of the disk is currently unpartitioned and unused.
Where’s the remaining space?
When you create a VM and choose a boot volume size larger than the default for the chosen Linux OS image (typically around 50GB for Oracle Linux images), OCI does not automatically expand the partitions and filesystems inside the guest OS to fill the extra space. It just provisions a bigger virtual disk at the block level. Inside the VM, the layout is whatever the base image had, which often leaves significant unallocated space beyond the existing partitions.
Oracle Linux 9 on OCI uses LVM (Logical Volume Manager) for the root volume by default, with the logical volume (LV) named root in volume group (VG) ocivolume, initially using a single physical volume (PV) on partition sda3. The image typically:
- creates a small EFI partition (/boot/efi) - sda1 in my case
- creates a /boot partition - sda2 in my case
- creates a single LVM partition (sda3) sized to the default image requirements (around 46GB for a 50GB boot volume, though the usable root LV is often ~35GB after overhead)
- allocates the root filesystem (/) at /dev/mapper/ocivolume-root with around 30-35GB of space to fit the Oracle Linux distro (as seen from df -h and lsblk in my VM above)
- leaves the remaining space on larger boot volumes unpartitioned after sda3, allowing for future expansion without reinstalling
So our objective now is to extend the root into the unpartitioned space. We will use a safe approach that:
- does not touch existing partitions (sda1, sda2, sda3)
- only creates a new partition in the unused space and adds it to LVM as an additional PV
- grows the XFS file system online without unmounting /
Check disk and LVM layout
To see the full layout, run

sudo vgs
sudo lvs

- vgs: shows the total size (VSize column) and unallocated free space (VFree column) in the volume group

  VG        #PV #LV #SN Attr   VSize    VFree
  ocivolume   2   1   0 wz--n- <107.00g 57.30g

- lvs: shows the logical volume for the root volume, ocivolume-root, and its current size

  LV   VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root ocivolume -wi-ao---- <107.00g
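You can also run pvs to see the physical volumes backing the volume group, which is handy before and after the extension to confirm which partitions LVM is actually using:

```bash
# List LVM physical volumes with their VG membership, size, and free space
sudo pvs

# More detail per PV (extents, allocatable space, UUID)
sudo pvdisplay
```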
Now we have confirmed that we have roughly 55 GB of remaining free space. Let’s walk through extending the root / filesystem so it can use the rest of the 100 GB boot volume.
Reclaim free unpartitioned space
We will
- Create a new partition in the 55 GB of unused free space
- Add it to our ocivolume volume group
- Extend ocivolume-root and grow the root / filesystem into it
1. Confirm free space location
$ sudo parted /dev/sda print free
Look for the Free Space entry after sda3. Note the start value.
(parted) print free
Model: ORACLE BlockVolume (scsi)
Disk /dev/sda: 107GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name                  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  106MB   105MB   fat16        EFI System Partition  boot, esp
 2      106MB   2253MB  2147MB  xfs
 3      2253MB  50.0GB  47.8GB                                     lvm
        50.0GB  107GB   57.3GB  Free Space
This confirms exactly what we suspected: there’s 57.3 GB of completely unallocated space at the end of the disk. We just need to carve a new partition out of that free space, bring it into LVM, and grow the root filesystem so / can use it.
2. Create a new LVM partition
Create a new partition in that free space. Replace <start> with the exact start value shown for the free space in print free. In our case, it is 50.0GB.
(parted) mkpart primary <start> 100%
(parted) set 4 lvm on
(parted) quit
- 100% means “go to the end of the disk.”
- set 4 lvm on marks the new partition (/dev/sda4) for LVM use.
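If you prefer to avoid the interactive prompt, parted can take the same commands in script mode. This is a sketch assuming the free space starts at 50.0GB as in my layout; double-check the start value on your own disk first:

```bash
# Non-interactive equivalent: create partition 4 from 50.0GB to the end of the disk
# and flag it for LVM. Verify the start value with "print free" before running.
sudo parted -s /dev/sda mkpart primary 50.0GB 100%
sudo parted -s /dev/sda set 4 lvm on
sudo parted /dev/sda print   # confirm the new partition 4 exists
```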
3. Add new partition to Volume Group
Add the unused 57 GB of disk space to LVM. Back at the shell prompt:
sudo pvcreate /dev/sda4
sudo vgextend ocivolume /dev/sda4
- pvcreate makes the new partition an LVM physical volume.
- vgextend adds it to the existing volume group (ocivolume).
The output is:
Physical volume "/dev/sda4" successfully created.
Not creating system devices file due to existing VGs.
Volume group "ocivolume" successfully extended
Now /dev/sda4 is part of our ocivolume volume group.
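Before extending the LV, it is worth double-checking that the VG really picked up the new PV. Roughly, pvs should now list both /dev/sda3 and /dev/sda4 under ocivolume, and vgs should report the extra free space:

```bash
# Both /dev/sda3 and /dev/sda4 should appear in the ocivolume VG
sudo pvs

# VFree should now show roughly the ~57 GB we just added
sudo vgs ocivolume
```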
4. Extend root logical volume
The last steps are to give all that new space to our root logical volume and then grow the filesystem so / can use it immediately.
sudo lvextend -l +100%FREE /dev/mapper/ocivolume-root
-l +100%FREE allocates all available free space in the VG to the LV.
The output confirms the LV size increase.
File system xfs found on ocivolume/root mounted at /.
Size of logical volume ocivolume/root changed from <30.00 GiB to <87.90 GiB.
5. Grow the filesystem
This step makes the filesystem actually use the new LV size. Oracle Linux 9 uses XFS, which can grow online, so no reboot is needed.
sudo xfs_growfs /
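Note that xfs_growfs is specific to XFS, the default root filesystem on Oracle Linux 9. If your image happened to use ext4 instead, the equivalent online grow would be resize2fs (shown only as a hedge; you should not need it on the default layout):

```bash
# ext4 equivalent of xfs_growfs - only if your root filesystem is ext4
sudo resize2fs /dev/mapper/ocivolume-root
```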
6. Verify
Running df -h / should now show / with roughly 86–87 GB total (29.5 GB original + 57.3 GB new).
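For reference, the post-grow output should look roughly like this; the exact numbers will differ on your VM:

```bash
$ df -h /
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ocivolume-root   87G   23G   64G  27% /
```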
Now that we have reclaimed the unused space, let’s look at freeing up more of the storage quota and cleaning up the existing space hogs inside the VM.
Terminate unused VMs
If you have provisioned VM instances running experimental apps that you seldom use, terminating them can help you reclaim the storage used up by their boot volumes.
When an OCI VM is terminated, the associated boot volume is not reclaimed immediately unless you choose to delete it during termination. If you choose to preserve the boot volume, you can use it to launch a new instance or attach it to another instance as a data volume.
- Log in to OCI.
- Click Compute Instances.
- Locate the instance to delete. Click the ... dots on the right and select Terminate.
- In the dialog box, you can check Permanently delete the attached boot volume. If you do not need the data (for example, to attach it to another VM instance later), check this box.
After termination, the storage used by this boot volume is immediately freed up. You can use it to expand the boot volume of another VM instance, which we will visit in the next section.
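If you prefer the command line over the console, the OCI CLI can do the same. This is a sketch assuming the CLI is installed and configured and that you have the instance OCID handy; passing --preserve-boot-volume false deletes the boot volume along with the instance:

```bash
# Terminate an instance and delete its boot volume in one step.
# Replace <instance-ocid> with the OCID shown on the instance's detail page.
oci compute instance terminate \
  --instance-id <instance-ocid> \
  --preserve-boot-volume false
```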
Resize existing volume
The OCI Free Tier comes with 200GB of total free block storage. If you have free space remaining in that quota, you can resize your boot volume to use it. Best of all, boot volumes can be resized online without stopping the instance, so any running service or container is not interrupted.
In this section, I’m going to walk you through expanding a 100GB boot volume to 150GB.
Resize in OCI Console
To resize the boot volume in the OCI Console:
- Log in to the OCI Console.
- Navigate to Compute > Instances.
- Select your VM instance under the right Compartment.
- Navigate to Storage > Boot volume.
- Click the boot volume name.
- On the upper right, click the Edit button.
- In the edit panel, for Volume size (in GB), enter the new volume size.
- Click Update.
A dialog opens that lists the rescan commands you need to run after the volume is provisioned.
The resize happens online; wait a few minutes for it to complete (check status in the console).
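The same resize can also be done from the OCI CLI, if you have it set up. A sketch, assuming you know the boot volume's OCID:

```bash
# Enlarge the boot volume to 150 GB; the change is applied online.
# Replace <boot-volume-ocid> with the OCID from the boot volume's detail page.
oci bv boot-volume update \
  --boot-volume-id <boot-volume-ocid> \
  --size-in-gbs 150
```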
Rescan the disk
After the volume is provisioned, rescan the disk, so that the operating system identifies the expanded volume size.
- SSH into the VM instance.
- Rescan with the following command
sudo dd iflag=direct if=/dev/sda of=/dev/null count=1
echo "1" | sudo tee /sys/class/block/sda/device/rescan
The result:
1+0 records in
1+0 records out
512 bytes copied, 0.00103136 s, 496 kB/s
1
Extend root partition
Finally, for the volume resize to take effect, you need to extend the root partition of the instance to fully use the newly expanded boot volume.
Install the required tools if not present:

sudo dnf install -y cloud-utils-growpart xfsprogs

(growpart handles partition resizing; xfsprogs provides tools for the XFS filesystem, which is the default on Oracle Linux 9.)

Identify the boot disk and partitions with lsblk:

NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0  150G  0 disk
├─sda1                 8:1    0  100M  0 part /boot/efi
├─sda2                 8:2    0    2G  0 part /boot
├─sda3                 8:3    0 44.5G  0 part
│ └─ocivolume-root   252:0    0 97.9G  0 lvm  /
└─sda4                 8:4    0 53.4G  0 part
  └─ocivolume-root   252:0    0 97.9G  0 lvm  /
The lsblk output above shows the disk is now 150G, but the partitions and the root LV still reflect the old size (the root LV is 97.9G), leaving ~50G unallocated at the end after sda4.

We’ll extend the last partition (sda4), resize its PV, extend the LV to use the new space, and grow the XFS filesystem.

Extend the last partition (sda4) to consume the unallocated space:

$ sudo growpart /dev/sda 4
CHANGED: partition=4 start=97726464 old: size=111986688 end=209713151 new: size=216846303 end=314572766
Verify with lsblk:

$ lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   150G  0 disk
├─sda1                 8:1    0   100M  0 part /boot/efi
├─sda2                 8:2    0     2G  0 part /boot
├─sda3                 8:3    0  44.5G  0 part
│ └─ocivolume-root   252:0    0  97.9G  0 lvm  /
└─sda4                 8:4    0 103.4G  0 part
  └─ocivolume-root   252:0    0  97.9G  0 lvm  /

sda4 is now 103.4G (the original 53.4G plus the 50G we added).

Resize the physical volume on sda4 so LVM recognizes the new partition size:
$ sudo pvresize /dev/sda4
  Physical volume "/dev/sda4" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Check with sudo pvs:

$ sudo pvs
  PV         VG        Fmt  Attr PSize    PFree
  /dev/sda3  ocivolume lvm2 a--    44.50g     0
  /dev/sda4  ocivolume lvm2 a--  <103.40g 50.00g

The PV on sda4 now shows the increased size, and the VG ocivolume has 50G of free space.
Extend the logical volume and grow the filesystem in one step:

sudo lvextend -l +100%FREE -r /dev/ocivolume/root

- -l +100%FREE allocates all available free space in the VG to the LV.
- -r automatically resizes the XFS filesystem after extending the LV.

$ sudo lvextend -l +100%FREE -r /dev/ocivolume/root
File system xfs found on ocivolume/root mounted at /.
  Size of logical volume ocivolume/root changed from <97.90 GiB (25062 extents) to <147.90 GiB (37862 extents).
Extending file system xfs to <147.90 GiB (158804738048 bytes) on ocivolume/root...
xfs_growfs /dev/ocivolume/root
meta-data=/dev/mapper/ocivolume-root isize=512    agcount=14, agsize=1933312 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
         =                       exchange=0
data     =                       bsize=4096   blocks=25663488, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1, parent=0
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 25663488 to 38770688
xfs_growfs done
  Extended file system xfs on ocivolume/root.
  Logical volume ocivolume/root successfully resized.
Verify the changes:

$ lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   150G  0 disk
├─sda1                 8:1    0   100M  0 part /boot/efi
├─sda2                 8:2    0     2G  0 part /boot
├─sda3                 8:3    0  44.5G  0 part
│ └─ocivolume-root   252:0    0 147.9G  0 lvm  /
└─sda4                 8:4    0 103.4G  0 part
  └─ocivolume-root   252:0    0 147.9G  0 lvm  /

$ df -h /
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/ocivolume-root  148G   88G   60G  60% /

The root LV is now ~147.9G (the original 97.9G plus 50G), and df -h shows the increased available space on /.
Mission accomplished!
Household cleaning
After running a system for a while, caches, temp files, and unused packages start to fill up valuable space quickly. Let’s take a look at the biggest space hoarders.
Before making changes, always back up important data.
Check disk usage
Start by checking current disk usage to identify what’s consuming space. Find the biggest directories on /:
$ sudo du -xh --max-depth=1 / | sort -hr | head -10
22G /
101G /home
37G /var
5.1G /usr
506M /opt
26M /etc
36K /root
- -x → stay on the same filesystem (don’t follow mounts like /var/oled)
- --max-depth=1 → only top-level directories
- sort -hr → sort by size, largest first
This will quickly tell you if the culprit is /usr, /var, /home, or something else. If, for example, /usr is huge, run
sudo du -xh --max-depth=1 /usr | sort -hr | head -20
Repeat until you find the biggest subdirectories. For example, drilling down into /var/lib on my machine:
$ sudo du -xh --max-depth=1 /var/lib | sort -hr | head -20
9.4G /var/lib
9.0G /var/lib/docker
420M /var/lib/rpm
18M /var/lib/selinux
I can see that:
- /var/lib/docker → 9.0G of container images, volumes, and build cache
- /var/lib/rpm → 420 MB. On Oracle Linux (and other RPM-based distributions like RHEL, CentOS, Fedora, SUSE), the RPM database is the central record-keeping system for installed software. The package manager (rpm, dnf, yum) relies on it to know what’s installed and how to upgrade or remove it. If it’s corrupted, you can’t reliably install, update, or remove packages, so it must not be removed.
Thus our major cleanup target is Docker data — images, volumes, and build cache.
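Besides drilling down directory by directory, it can also help to hunt for individual oversized files directly. A sketch that stays on the root filesystem and lists anything over 500 MB (tune the threshold to taste):

```bash
# Find files larger than 500 MB on the root filesystem only (-xdev skips other mounts),
# then sort by size (column 5 of ls -lh output) with the largest first
sudo find / -xdev -type f -size +500M -exec ls -lh {} \; 2>/dev/null | sort -k5 -hr | head -20
```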
Common space hoarders
Here are the usual suspects and how to clean them, at a glance:

Directory | What is inside | Cleanup |
---|---|---|
/var/lib/docker | Images, volumes, build cache | docker system prune -a --volumes |
/var/cache/dnf | Package manager cache for installs/updates | sudo dnf clean all |
/var/log | System & service logs | sudo journalctl --vacuum-time=7d and truncate large logs |
/tmp & /var/tmp | Build leftovers, temp files | sudo rm -rf /tmp/* /var/tmp/* |
/home/ | Large downloads, build artifacts | Manual check & delete |
So our cleanup routine is:
- Prune Docker (biggest win)
- Clean package manager cache and remove unused packages
- Vacuum logs
- Clear temp dirs
- Remove old kernels if present
Clean Docker storage
Docker tends to accumulate dangling images, stopped containers, old build cache, and unused volumes. Specifically, Docker builds can fill space very quickly, especially if you’re building large images or keeping multiple versions.
We can check Docker’s storage specifically (usually /var/lib/docker) with
sudo du -h --max-depth=1 /var/lib/docker | sort -hr
This will show which subdirectories (images, containers, volumes, buildkit) are largest:
39G /var/lib/docker
28G /var/lib/docker/overlay2
11G /var/lib/docker/volumes
5.7M /var/lib/docker/image
2.1M /var/lib/docker/containers
884K /var/lib/docker/buildkit
80K /var/lib/docker/network
0 /var/lib/docker/tmp
0 /var/lib/docker/swarm
0 /var/lib/docker/runtimes
0 /var/lib/docker/plugins
Our case is a typical scenario:

directory | what is it | how to fix |
---|---|---|
overlay2/ | image layers and container filesystems | docker system prune -a deletes ALL stopped containers, unused images (dangling + unreferenced), networks, and build cache. You’ll be prompted to confirm; type y |
volumes/ | persistent container data. If you have old volumes from containers you no longer use, they can be safely removed. | docker volume ls to check, then docker volume prune to remove unused mounted volumes and delete data. |
image/ | images pulled locally | docker image prune removes dangling images (no tag, not used by any container), while docker image prune -a removes ALL unused images |
buildkit/ | If you docker build a lot of images, the build cache can be huge. | docker builder prune -a. Rebuild Docker images with the --no-cache flag next time to avoid accumulation. |
containers/ | container logs and metadata | find and truncate oversized logs (see below) |

To find the biggest container logs:

sudo find /var/lib/docker/containers/ -name "*-json.log" -size +100M -exec ls -lh {} \;

To truncate them without stopping containers:

sudo truncate -s 0 /var/lib/docker/containers/*/*-json.log
If you want more control to do targeted cleanup:
- remove stopped containers: `docker container prune`
- remove specific unused images: List them with `sudo docker images`, then remove with `sudo docker rmi <image_id>`.
Use `docker system df` routinely to monitor usage before it becomes critical.
```bash
docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 9 7 5.993GB 273.1MB (4%)
Containers 9 7 16.6GB 16.6GB (99%)
Local Volumes 2 2 11.52GB 0B (0%)
Build Cache 9 0 0B 0B
```
Best practices
Here are several best practices to prevent Docker from taking up massive space:
- Use a separate volume for /var/lib/docker so Docker data doesn’t fill the boot volume:
sudo systemctl stop docker
sudo mv /var/lib/docker /mnt/bigdisk/docker
sudo ln -s /mnt/bigdisk/docker /var/lib/docker
sudo systemctl start docker
- For builds,
  - Limit image layers in your Dockerfiles (combine RUN commands, clean up temp files)
  - Use smaller base images and multi-stage builds
- Regularly prune unused Docker data via a weekly cron job, as sketched below.
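A weekly cron job for that last point could look like the following; the script path, retention window, and log file are assumptions, so adjust them to your setup:

```bash
#!/bin/sh
# Hypothetical /etc/cron.weekly/docker-prune (remember to chmod +x it).
# Removes stopped containers, unused images older than a week, unused networks,
# and build cache; volumes are left alone unless you add --volumes.
docker system prune -af --filter "until=168h" >> /var/log/docker-prune.log 2>&1
```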
Clean packages
Oracle Linux uses dnf, which caches RPMs after installs and updates. We can check the cache size with
$ sudo du -sh /var/cache/dnf
727M /var/cache/dnf
To clean the DNF cache (downloaded package files, metadata, etc.):
$ sudo dnf clean all
57 files removed
To remove unused packages that were automatically installed as dependencies but are no longer needed:
$ sudo dnf autoremove -y
Last metadata expiration check: 3:41:48 ago on Fri 26 Sep 2025 02:48:29 PM GMT.
Dependencies resolved.
Nothing to do.
Complete!
Clear logs
Logs can grow large over time, especially with Docker and system services. To check size:
$ sudo du -sh /var/log/* | sort -hr
902M /var/log/pcp
161M /var/log/messages
61M /var/log/secure
33M /var/log/audit
18M /var/log/btmp
....
To clear old systemd journal logs:
sudo journalctl --vacuum-time=2weeks
This removes archived journal entries older than 2 weeks:
$ journalctl --vacuum-time=2weeks
Deleted archived journal /run/log/journal/......
Vacuuming done, freed 95.6M of archived journals from /run/log/journal/35f53ea433fd46a983fa9205384a7313.
Vacuuming done, freed 0B of archived journals from /run/log/journal.
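journalctl only manages the systemd journal. For large plain-text logs such as /var/log/messages above, you can truncate them in place, as the cleanup table suggested; a minimal example (make sure you no longer need the contents first):

```bash
# Empty the log file in place; the file and its inode stay, so rsyslog keeps writing to it
sudo truncate -s 0 /var/log/messages
```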
Clear temp files
Build processes and apps often leave junk in the temporary directories (/tmp and /var/tmp). Their contents are usually safe to delete, but ensure no running processes need them.
sudo rm -rf /tmp/*
sudo rm -rf /var/tmp/*
If you have user-specific caches (e.g., from pip or other tools), run rm -rf ~/.cache/*
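If wiping everything feels too aggressive, a more conservative sketch removes only temp files that have not been modified in the last week (adjust the age to taste):

```bash
# Delete files under /tmp and /var/tmp untouched for more than 7 days
sudo find /tmp /var/tmp -mindepth 1 -type f -mtime +7 -delete
```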
Conclusion
Reboot if needed to ensure changes take effect: sudo reboot
After these steps, re-run df -h to check the freed space.