This post walks you through building a minimal Django project locally and deploying it to an Oracle Cloud Compute instance hosted in the very generous Always Free tier. We’ll use the default Oracle Linux image, Gunicorn, and Nginx for production hosting.

Here’s a rundown of what we will do in this post:

  1. Sign up for Oracle Free Tier
  2. Set up an Oracle Compute instance
  3. Set up network connectivity so this Compute instance can accept incoming traffic
  4. Connect to the Compute instance through SSH
  5. Clone an existing Django app to the VM
  6. Set up the Django environment and Gunicorn (a WSGI application server) to interface with our Django app
  7. Set up Nginx to pass appropriate incoming HTTP requests to Gunicorn
  8. Test and troubleshoot any errors that arise. I hit a lot of hiccups along the way, which is why I decided to document this process so you can have a smoother run than I did. I will also list alternate steps for using an Ubuntu image.

Setup an Oracle Cloud Compute VM

This is a shortened version of the official tutorial to create a VM instance.

provision a VM

  1. Sign up for a free tier account.
  2. Log in to Oracle Cloud console. In the OCI Console, navigate to Compute > Instances > Create Instance.
  3. Enter a name, select an Always Free–eligible region and compartment.
  4. Under Image and Shape, click Change shape, pick Ampere, then VM.Standard.A1.Flex.
    • Adjust the OCPUs and memory sliders (e.g. 4 OCPUs / 24 GB to stay always free).
  5. Click Change Image and choose an Always Free–eligible image that is compatible with the ARM platform. In our case, accept the default Oracle Linux 9, or pick any of the supported Ubuntu images.

    note: Only platform images published by Oracle after the A1 shape release are compatible without custom modifications. Custom images can also be used if you add the shape-compatibility entries to them. You can also check out a performance benchmark for the available distros here.

  6. Under Networking, choose Create new VCN.
    • In the VCN wizard, give it a name and check Assign a public IPv4 address for the subnet it creates.
  7. Click “Generate key pair” and download the private .key file locally.
  8. Under Boot volume, leave the default (about 50 GB) or increase it; the Always Free tier allows up to 200 GB of block storage in total across all your instances.
  9. Click Create. Wait for the instance status to turn to Running (green dot).

Now that the instance is running, let’s set up network connectivity so we can connect to it externally.

setup connectivity

In this step, we will retrieve the public IP address, then set up security lists and firewall rules to enable inbound traffic on the required ports.

retrieve public IP

  1. Click the instance’s name from the Instances list.
  2. Copy the Public IP address from the instance details.

setup ingress rules

To let inbound SSH and HTTP traffic reach the VM, we must add ingress rules for TCP ports 22 and 80 on the security list attached to the public subnet.

  • In the OCI Console, click the ☰ menu → Networking → Virtual Cloud Networks
  • Select the VCN that your instance uses.
  • In the VCN’s Resources panel, click Subnets.
  • Find and click the instance’s public subnet name.
  • On the Subnet details page, in the Security section, click the linked name (e.g., default security list).
  • Click Security rules
  • Click Add Ingress Rule (for SSH, port 22)
    • Source CIDR: 0.0.0.0/0 (open to all)
    • IP Protocol: TCP
    • Source Port Range: Leave blank (all ports)
    • Destination Port Range: 22
    • Click Add Ingress Rule to save
  • Add an ingress rule for HTTP (port 80)
    • Click Add Ingress Rule again
    • Source CIDR: 0.0.0.0/0 (open to all)
    • IP Protocol: TCP
    • Source Port Range: Leave blank
    • Destination Port Range: 80
    • Click Add Ingress Rule to save

setup firewall rules

Next we will open the firewall on this VM to allow incoming connections. Even after we set up ingress rules in the OCI console, without these next steps the VM will not accept any traffic other than SSH (which is enabled by default).

Oracle Linux

Oracle Linux ships with firewalld. We need to enable and start firewalld, then open the firewall ports for the NGINX web service (80) and SSH.

sudo systemctl enable --now firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload

Verify the rules with sudo firewall-cmd --list-all; the services: line should now include both ssh and http.

Ubuntu

For Ubuntu, we need to set up iptables with the following steps:

note: I saw mentions of using ufw to edit firewall rules on Ubuntu, but I shied away from it after reading the official documentation.

  1. Insert SSH-Allowing Rules Before Dropping Traffic

    Always add rules to permit SSH (and any other essential service) first, then change the default policy. For example:

    # 1. Allow SSH
    sudo iptables -A INPUT -p tcp --dport 22 -m conntrack \
     --ctstate NEW,ESTABLISHED -j ACCEPT
    
    # 2. Allow loopback and established traffic
    sudo iptables -A INPUT -i lo -j ACCEPT
    sudo iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    
    # 3. Allow HTTP
    sudo iptables -A INPUT -p tcp --dport 80 -m conntrack \
     --ctstate NEW -j ACCEPT
    
  2. Use iptables-apply for Automatic Rollback

    Debian/Ubuntu provide iptables-apply, which tests new rules and reverts them if you don’t confirm within a timeout:

    sudo apt install netfilter-persistent  # includes iptables-apply
    sudo iptables-apply /etc/iptables/rules.v4
    

    It installs your candidate rules and gives you a 10-second window to confirm. If you lose connectivity, it automatically rolls back so you aren’t locked out.

  3. Drop everything else

    sudo iptables -P INPUT DROP
    sudo iptables -P FORWARD DROP
    sudo iptables -P OUTPUT ACCEPT
    

    This way, SSH traffic is explicitly permitted before the DROP policy takes effect.

  4. Persist rules

    Once tested and confirmed working, run sudo netfilter-persistent save. This writes to /etc/iptables/rules.v4 so your policy survives reboots.

Verify on reboot with: sudo iptables -L -v

test

For SSH, run ssh -i PATH/TO/PRIVATE_KEY opc@YOUR_VM_PUBLIC_IP (for Oracle Linux) or ssh -i PATH/TO/PRIVATE_KEY ubuntu@YOUR_VM_PUBLIC_IP (for Ubuntu). If you’re connecting to this instance for the first time, you will be asked to accept the host key fingerprint; type yes and press Enter.
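
If ssh refuses the key with a warning about an unprotected private key file, tighten its permissions first. A minimal sketch (the key path is a placeholder; substitute the .key file you downloaded earlier):

# ssh rejects private keys that are readable by other users
chmod 600 PATH/TO/PRIVATE_KEY

# Oracle Linux image: default user is opc
ssh -i PATH/TO/PRIVATE_KEY opc@YOUR_VM_PUBLIC_IP

# Ubuntu image: default user is ubuntu
ssh -i PATH/TO/PRIVATE_KEY ubuntu@YOUR_VM_PUBLIC_IP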

Now we have full shell access to an Ampere A1 Flex VM instance.

Install dependencies

Next we need to update the system and install the packages required to host our Django app on the VM. Oracle Linux uses DNF/YUM instead of APT; if you are using Ubuntu, replace dnf with apt.

sudo dnf update
sudo dnf install -y python3 python3-pip git nginx

On Ubuntu, run an extra sudo apt install -y python3-venv, as Ubuntu’s packaging separates the venv module into python3-venv, while Oracle Linux bundles venv inside the main python3 package.
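
For reference, the full Ubuntu equivalent of the install step above would look like this (a sketch assuming the default Ubuntu repositories):

sudo apt update
sudo apt install -y python3 python3-pip python3-venv git nginx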

Create Django project locally

I assume you already have a working Django project. If not, we’ll create a demo helloworld project on a local desktop machine.

  1. Install Django and create a project named helloworld in the current directory (.). We then create a single app named web right alongside manage.py:

    pip install django
    django-admin startproject helloworld .
    python manage.py startapp web
    

    Our file directory now looks like this

    .
    ├── manage.py
    ├── helloworld/
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   ├── asgi.py
    │   └── wsgi.py
    └── web/
        ├── __init__.py
        ├── models.py
        └── views.py
    
  2. In helloworld/settings.py, add our new web app:

    INSTALLED_APPS = [
        ...,
        'web',
    ]
    
  3. Define a simple view in web/views.py:

    from django.http import HttpResponse
    
    def home(request):
        return HttpResponse("Hello World!")
    
  4. Wire the view to a URL in helloworld/urls.py:

    from django.urls import path
    from web.views import home
    
    urlpatterns = [
        path('', home, name='home'),
    ]
    
  5. Now our app is ready to use. Let’s test locally:

    python manage.py migrate
    python manage.py runserver
    

    Visit http://127.0.0.1:8000/ to see Hello World!

  6. Now we have a working demo app. Let’s create a requirements.txt:

    pip freeze > requirements.txt

    so we can rebuild the environment in the VM.

  7. We can then push everything to an existing GitHub repository, to be cloned onto our Oracle VM.

    • initialize repository: git init
    • exclude files from sync by creating a .gitignore file with the following content
      .venv
      *.pyc
      db.sqlite3
      
    • commit and push changes (use git add . rather than git add * so that dotfiles such as .gitignore are included)
      git add .
      git commit -m "initial checkin"
      git remote add origin PATH.TO.GITHUB
      git push -u origin master
      

Deploy Django project on the VM

  1. In our SSH session, clone your GitHub repo with git clone PATH.TO.GITHUB.REPO into your /home/<user> folder (e.g. /home/opc or /home/ubuntu)

  2. In the project root folder, create and activate a virtual environment, then install dependency requirements:

    python3 -m venv venv
    source venv/bin/activate
    pip install --upgrade pip
    pip install -r requirements.txt
    pip install gunicorn
    

    If, during the installation of Django, you get the error ERROR: Could not find a version that satisfies the requirement Django==5.2.4, identify the latest version offered in the error list, or run pip index versions Django. You’ll see the latest LTS in the 4.2.x line (e.g. 4.2.23). Update your requirements to a supported Django release and reinstall:

    pip install --upgrade pip setuptools wheel
    pip install -r requirements.txt
    
  3. Apply migrations and collect static files. (For collectstatic to work, STATIC_ROOT must be set in settings.py, e.g. STATIC_ROOT = BASE_DIR / 'static', which matches the /static/ location Nginx will serve later.)

    python manage.py migrate
    python manage.py collectstatic --noinput
    
  4. Set public IP in ALLOWED_HOSTS

    • Open settings.py in our project’s config folder: sudo nano /home/opc/ociProj/helloworld/settings.py

    • Locate the ALLOWED_HOSTS line. Replace it with our public IP (and any other hosts you need):

      ALLOWED_HOSTS = ['PUBLIC_IP', '127.0.0.1', 'localhost']
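
Optionally, Django’s built-in deployment checklist can flag other settings worth reviewing before going live. Run it from the project root with the virtual environment activated (its warnings, e.g. about DEBUG, are informational for this demo):

python manage.py check --deploy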

Configure Gunicorn server

We will now configure Gunicorn (installed earlier with pip), a WSGI application server that runs our Django app to produce dynamic content and hands the response to Nginx.

  1. Confirm our project root: pwd

  2. Check that Gunicorn is installed and note its path: which gunicorn

  3. Confirm the exact path and permissions of your Gunicorn binary with ls -l PATH/TO/GUNICORN_BINARY. We should see something like:

    -rwxr-xr-x 1 opc opc … PATH/TO/GUNICORN_BINARY

    If the “x” bits are missing, systemd (running as user opc) can’t execute it. Fix the permissions if needed with:

    sudo chmod 755 PATH/TO/GUNICORN_BINARY

  4. To keep Gunicorn running in the background continuously, we have to set it up as a systemd service. To do so, place a service definition at /etc/systemd/system/gunicorn.service.

    sudo nano /etc/systemd/system/gunicorn.service

    In this file,

    • User/Group: Runs Gunicorn as opc or ubuntu (your non-root user).
    • WorkingDirectory: our Django project root where manage.py lives, must match pwd
    • ExecStart: absolute path to the Gunicorn binary, which must match the output of which gunicorn (e.g. /home/opc/ociProj/venv/bin/gunicorn if you created the venv inside the project root). It specifies the number of workers, the socket bind, and our Django WSGI module.
    [Unit]
    Description=gunicorn daemon for django-hello
    After=network.target
    
    [Service]
    User=opc
    Group=opc
    WorkingDirectory=/home/opc/ociProj
    ExecStart=/home/opc/venv/bin/gunicorn \
        --workers 3 \
        --bind unix:/home/opc/ociProj/gunicorn.sock \
        helloworld.wsgi:application
    
    [Install]
    WantedBy=multi-user.target
    
  5. Reload the systemd daemon to pick up our new unit file:

    sudo systemctl daemon-reload

  6. Enable and start Gunicorn:

    sudo systemctl enable --now gunicorn
    
    • enable ensures Gunicorn starts on boot.
    • --now starts it immediately.
  7. Verify it’s running:

    sudo systemctl status gunicorn
    

    Once we see active (running), Gunicorn is up. We can verify Gunicorn is serving our Django app on the VM with:

    curl --unix-socket /home/opc/ociProj/gunicorn.sock http://localhost
    

    We should see the “Hello World” HTML.
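
If the curl above returns nothing or an error, Gunicorn’s recent log output is the first place to look (the same command reappears in the Errors section below):

sudo journalctl -u gunicorn -n 50 --no-pager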

Next, we will configure Nginx to proxy traffic to gunicorn.sock.

Configure Nginx as reverse proxy

setup Nginx

We are now ready to set up the Nginx web server.

  1. Enable and start the service:

    sudo systemctl enable --now nginx

    The service starts a web server that listens on TCP port 80 by default.

  2. Confirm it’s running:

    sudo systemctl status nginx

  3. Test local access using cURL

    This test tries to access the NGINX test page from a terminal session connected via SSH to the instance where NGINX is running. This connection type verifies the service is working and bypasses any firewall rules or OCI ingress security rules.

    curl http://localhost

    We will see the Nginx default site: “Test Page for the HTTP Server on Oracle Linux”, or on Ubuntu: “Welcome to nginx! If you see this page, the nginx web server is successfully installed and working.”
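
You can also confirm that Nginx is listening on port 80 on all interfaces (the same check is used again later when troubleshooting public access):

sudo ss -tulnp | grep ':80'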

reverse proxy to Gunicorn and serve Django app

[diagram: client request → Nginx → Gunicorn → Django app]

As seen in this diagram, incoming web traffic is first processed by Nginx, which determines where to route it. If the request is for static files, Nginx serves them directly; all other HTTP requests are passed to a WSGI server, which is Gunicorn in our case.

Since Nginx comes with a default site, we need to make sure when we hit our public IP, our Django app is served instead.

By default, the Nginx service listens on port 80 for all server names. Server names are defined using the server_name directive, and Nginx determines which server block to use for a given request by evaluating its configuration files.

specify Django to be the default

Create our Django configuration file for Nginx at /etc/nginx/conf.d/django-hello.conf manually, as follows:

server {
    # specify Django to be the default
    # listen to both IPv4 & IPv6 requests for this server on port 80
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Serve static files in Django folder
    location /static/ {
        alias /home/opc/ociProj/static/;
    }

    # All other traffic goes to Gunicorn
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Ensure the proxy_pass line exactly matches where the Gunicorn socket lives
        proxy_pass http://unix:/home/opc/ociProj/gunicorn.sock;
    }
}

When Nginx receives a request in a location block that uses proxy_pass, it:

  1. knows that this block is a reverse-proxy, not a static file handler.
  2. http:// indicates HTTP protocol will be used over the socket.
  3. unix: prefix tells Nginx to use a Unix socket file instead of TCP
  4. /home/opc/.../gunicorn.sock points to the Gunicorn server bound at /home/opc/ociProj/gunicorn.sock. Nginx forwards the original client request (method, headers, body) to Gunicorn
  5. Relays Gunicorn’s response back to the client

In Ubuntu, NGINX uses two folders to control what sites to serve: sites-available and sites-enabled. sites-available contains configuration files for all available sites on this server, whereas sites-enabled contains the symbolic links for each enabled site in the sites-available folder.

Since our config lives in /etc/nginx/conf.d/, which the main /etc/nginx/nginx.conf already pulls in with an include statement, no extra step is strictly required on Ubuntu. If you prefer to follow the Ubuntu convention instead, move the file into sites-available and symlink it into sites-enabled (don’t do both, or the same server block will be included twice and Nginx will warn about conflicting server names):

sudo mv /etc/nginx/conf.d/django-hello.conf /etc/nginx/sites-available/django-hello.conf
sudo ln -s /etc/nginx/sites-available/django-hello.conf /etc/nginx/sites-enabled/
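
If you are unsure which directories your main config actually loads, you can list its include directives (the exact lines vary slightly by distro):

grep -n 'include' /etc/nginx/nginx.conf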

remove Nginx default site

For RHEL-based systems (including Oracle Linux), we need to disable the built-in block in /etc/nginx/nginx.conf: find the first server { … } block under the http { section and either delete it or prefix each of its lines with #.

For Debian-based distros (including Ubuntu), the default site config is /etc/nginx/sites-available/default, symlinked from /etc/nginx/sites-enabled/. We can turn off the default site without touching the original file in sites-available by removing its symlink in sites-enabled: sudo rm /etc/nginx/sites-enabled/default

After that, we can check if our configuration file was correctly written, and reload Nginx

sudo nginx -t
sudo systemctl restart nginx

nginx -t should output syntax is ok and configuration file test is successful.

We can now open a browser to access our public IP. We should see the “Hello World” message properly served.

Errors

Since I’ve run into a lot of errors along the process, I decided to document each of them along with the solutions I used.

Nginx loading the default site instead of the Django app

Some tests to try

  1. Make sure you have removed or commented out the entire port 80 server block in your main /etc/nginx/nginx.conf and in any config file under /etc/nginx/default.d/ or /etc/nginx/conf.d/ (except django-hello.conf).

    To locate the default HTTP server block, run sudo nginx -T | sed -n '1,200p'. In that output, look for the first server { … } under http { and note the file path printed just above it, for example: configuration file /etc/nginx/nginx.conf:.

  2. Ensure your Django vhost in /etc/nginx/conf.d/django-hello.conf uses default_server on both IPv4 & IPv6:

    server {
      # specify Django to be the default
      # listen to both IPv4 & IPv6 requests for 
      # this server on port 80
      listen 80 default_server;
      listen [::]:80 default_server;
      server_name _;  
    

    Not

      listen      80;
      listen      [::]:80;
      server_name _;
      # this is serving the default page
      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }  
    
  3. If it still shows the default test page, double-check that no other .conf files in /etc/nginx/conf.d/ or /etc/nginx/default.d/ are defining a server block on port 80.

    You can locate every port 80 listener by running sudo grep -R "listen\s*80" /etc/nginx. Double-check that only your Django block listens on 80. If other configuration files show up, either remove them, e.g. sudo rm /etc/nginx/conf.d/welcome.conf, or rename them out of the way with sudo mv /etc/nginx/conf.d/welcome.conf /etc/nginx/

Nginx reports conflicts

Running sudo nginx -t or sudo systemctl reload nginx returns:

[warn] conflicting server name "_" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "_" on [::]:80, ignored

This means that two server blocks are both claiming the same server name on port 80. Follow the steps in the section above to disable the built-in block in nginx.conf.

Gunicorn: Exec Failure (Status 203/EXEC)

Starting Gunicorn gives:

Active: failed (Result: exit-code) since ....
Process: 81017 ExecStart=/home/opc/venv/bin/gunicorn --workers 3 --bind unix:/home/opc/ociProj/gunicorn.sock>
Main PID: 81017 (code=exited, status=203/EXEC)

A status 203/EXEC means systemd can’t execute the binary you pointed at. On Oracle Linux with SELinux enforcing, binaries under /home are blocked by default, even if their UNIX perms are 755. Check the SELinux mode with getenforce.

If it says Enforcing, SELinux is likely denying execution under /home. We can temporarily switch to permissive to confirm:

sudo setenforce Permissive
sudo systemctl restart gunicorn
sudo systemctl status gunicorn

If Gunicorn comes up Active (running), SELinux was the culprit.

If you stick with SELinux enforcing, it’s best to keep apps under /opt or /srv.
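
If you prefer to keep SELinux enforcing with the project under /home, one commonly suggested approach (an assumption about your local policy; confirm any remaining denials with audit2why) is to relabel the virtualenv’s binaries with the bin_t type so systemd may execute them:

# semanage is provided by policycoreutils-python-utils on Oracle Linux
# relabel the Gunicorn binary's directory (policy assumption; adjust the path to your venv)
sudo semanage fcontext -a -t bin_t "/home/opc/venv/bin(/.*)?"
sudo restorecon -Rv /home/opc/venv/bin

# switch back to enforcing if you set it to permissive earlier, then restart the service
sudo setenforce Enforcing
sudo systemctl restart gunicorn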

‘/home/…/gunicorn.sock’: No such file or directory

UNIX domain sockets are ephemeral. They are created when Gunicorn starts and binds to it, and removed automatically when Gunicorn exits (or crashes). Therefore, every time your service fails or you stop Gunicorn manually, the .sock file will be deleted.
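
A quick sanity check ties the two together (using the socket path from our unit file): the socket should exist only while the Gunicorn service is active.

systemctl is-active gunicorn          # prints "active" only while Gunicorn is running
test -S /home/opc/ociProj/gunicorn.sock && echo "socket present" || echo "socket missing"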

In other situations, when you have no socket file at all (and thus Nginx can’t proxy anything), Gunicorn never actually came up, so it never bound /home/…/gunicorn.sock. We need to:

  • confirm the correct WSGI module path,
  • fix systemd unit
  • make sure no permission problem
  • test Gunicorn manually
  • restart
  1. Check Your Project’s WSGI Module

    In your project root (/home/opc/ociProj), list the files:

    cd /home/opc/ociProj
    ls
    

    You should see something like:

    • manage.py
    • a subdirectory named after your Django project
    • inside that subdir, a wsgi.py

    For example, if your Django project folder is actually named helloworld, your WSGI import path will be: helloworld.wsgi:application

  2. Fix systemd Service

    sudo sed -n '1,20p' /etc/systemd/system/gunicorn.service

    Make sure everything looks right; if not, edit it so that these lines match exactly:

    WorkingDirectory=/home/opc/ociProj
    ExecStart=/home/opc/venv/bin/gunicorn \
     --workers 3 \
     --bind unix:/home/opc/ociProj/gunicorn.sock \
     helloworld.wsgi:application
    

    Replace helloworld.wsgi:application with whatever your actual WSGI path is from step 1.

  3. Test Gunicorn Directly

    Try running Gunicorn on the command line:

    gunicorn helloworld.wsgi:application \
     --workers 3 \
     --bind unix:/home/opc/ociProj/gunicorn.sock
    

    If there are no errors, Gunicorn starts up and you get something like:

    [2025-07-12 01:00:00 +0000] [12345] [INFO] Listening at: unix:/home/opc/ociProj/gunicorn.sock
    

    If you still get Permission denied, or ImportError complaining about No module named …, note the message. If it starts cleanly, hit Ctrl+C to stop it.

  4. Reload & Start Gunicorn

    sudo systemctl daemon-reload
    sudo systemctl restart gunicorn
    sudo systemctl status gunicorn
    

    If it’s active (running), check the socket again:

    ls -l /home/opc/ociProj/gunicorn.sock

    Once the socket exists and Gunicorn is running, reload Nginx:

    sudo nginx -t
    sudo systemctl reload nginx
    

400 Bad Request

If curl --unix-socket /home/ubuntu/ociProj/gunicorn.sock http://localhost/ works but curl -i http://127.0.0.1/ does not, make sure to add localhost and 127.0.0.1 to ALLOWED_HOSTS in settings.py.

502 Bad Gateway

This means Nginx can’t reach the Gunicorn socket, even though the socket may exist. Let’s debug the socket and the Gunicorn service.

  1. Check the Nginx Error Log

    First, let’s see exactly why Nginx is refusing the socket by examining the last couple of lines in the error log: sudo tail -n 20 /var/log/nginx/error.log

    Look for messages like “connect() to unix:/home/…/gunicorn.sock failed (13: Permission denied)” when connecting to the Unix socket, or “No such file or directory” for the socket path. That message tells us whether it’s a traversal permission issue or a typo in the path.

    The traversal permission issue is a very common cause of the 502 Bad Gateway problem; it happened initially on both my Oracle Linux and Ubuntu VMs. It usually means Nginx’s user (www-data on Ubuntu) can’t traverse into the home directory where the socket lives. Unix socket permission checks include the execute bit on every parent directory.

    Let’s say our socket is at /home/ubuntu/ociProj/gunicorn.sock. Even if this file is read/writable by others (everyone other than the file’s owner and users in the file’s group), every parent directory must be “searchable” (x) by others that include the Nginx worker user. If /home/ubuntu or /home/ubuntu/ociProj isn’t searchable by others, www-data can’t reach the file.

    Let’s check and fix this. First, show the current permissions for the parent folders: ls -ld /home/opc /home/opc/ociProj. On both my Oracle Linux and Ubuntu VMs, /home/opc (or /home/ubuntu) did not have x for others. If any directory lacks ‘x’ for others, add it:

    sudo chmod o+x /home/opc
    sudo chmod o+x /home/opc/ociProj
    

    This ensures Nginx (running as user nginx on Oracle Linux or www-data on Ubuntu) can enter those folders to reach the socket.

    After that, reload Gunicorn and Nginx:

    sudo systemctl restart gunicorn
    sudo systemctl reload nginx
    
  2. Verify Gunicorn Is Running

    sudo systemctl status gunicorn

    If it’s inactive or failed, view its logs: sudo journalctl -u gunicorn -n 50 --no-pager

    Look for errors binding the socket or importing your WSGI module.

    • If the service failed or you see “permission denied,” SELinux is still blocking execution. See step 4 below.
    • If it’s active, note any socket‐bind errors.
  3. Inspect the socket

    ls -l /home/opc/ociProj/gunicorn.sock

    The socket must exist and its permissions should allow the opc user to read and write it, e.g. srw-rw---- 1 opc opc … gunicorn.sock. If it’s missing, Gunicorn never bound it.

    If it’s there but owned by root or lacking rw for opc, fix the permissions with:

    sudo chown opc:opc /home/opc/ociProj/gunicorn.sock
    sudo chmod 660   /home/opc/ociProj/gunicorn.sock
    
  4. Is SELinux blocking it?

    Check enforcement: getenforce

    If it’s Permissive, SELinux isn’t your blocker right now. If it’s Enforcing, you’ll see exec or socket denies in /var/log/audit/audit.log or via audit2why.

  5. Label the Socket for Nginx

    On Oracle Linux, Nginx can only talk to sockets labeled httpd_var_run_t. Let’s apply that:

    sudo semanage fcontext -a -t httpd_var_run_t "/home/opc/ociProj/gunicorn.sock"
    sudo restorecon -v /home/opc/ociProj/gunicorn.sock
    sudo systemctl reload nginx
    
  6. Test Socket Connectivity Manually

    From the VM shell, bypass Nginx and hit Gunicorn directly:

    curl --unix-socket /home/opc/ociProj/gunicorn.sock http://localhost/

    You should see your “Hello World” HTML.

    If you get Permission denied, it confirms a perm/traversal issue.

    If you get Connection refused or empty, Gunicorn might not actually be listening there. Restart it and recheck.

  7. Run from your project root to ensure it binds cleanly:

    cd /home/opc/ociProj
    source ../venv/bin/activate
    gunicorn helloworld.wsgi:application \
     --workers 3 \
     --bind unix:/home/opc/ociProj/gunicorn.sock
    

    If this errors with “Permission denied” or “No module named …”, note the message and Ctrl-C.

    If it still fails, move the socket to /run by editing your Gunicorn unit to bind unix:/run/gunicorn.sock and update Nginx accordingly.

    Or use a TCP bind: --bind 127.0.0.1:8000 in Gunicorn and proxy_pass http://127.0.0.1:8000; in the Nginx configuration. This bypasses all socket/permission issues; a sketch of both changes follows below.
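
Here is that sketch (file paths assume the unit file and site config created earlier; adjust if yours differ):

# in /etc/systemd/system/gunicorn.service, change the bind target on the ExecStart line:
#   --bind 127.0.0.1:8000
# in /etc/nginx/conf.d/django-hello.conf, point proxy_pass at the same address:
#   proxy_pass http://127.0.0.1:8000;
sudo systemctl daemon-reload && sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl reload nginx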

DisallowedHost

DisallowedHost at /
Invalid HTTP_HOST header: 'x.x.x.x'. You may need to add 'x.x.x.x' to ALLOWED_HOSTS.

This means that Django is rejecting requests because your public IP isn’t listed in ALLOWED_HOSTS. We can fix it by:

  1. Open settings.py in your project’s config folder: sudo nano /home/opc/ociProj/helloworld/settings.py

  2. Locate the ALLOWED_HOSTS line. Replace it with your IP (and any other hosts you need): ALLOWED_HOSTS = ['x.x.x.x', '127.0.0.1', 'localhost']. If you’ll use a domain later, add it here too. For quick testing you can use ['*'] (not recommended in production).

  3. Save (Ctrl+O ↵) and exit (Ctrl+X).

  4. Restart Gunicorn

    The updated settings only take effect once you reload the application server:

    sudo systemctl restart gunicorn
    sudo systemctl status gunicorn
    

    Ensure it’s active (running) without errors.

  5. Reload Nginx & test

    sudo nginx -t
    sudo systemctl reload nginx
    

Unable to access public IP - The connection has timed out

If curl -i http://127.0.0.1/ works but you still cannot access the public IP in a browser or via curl, we can test the following.

  1. Verify Nginx is listening on all interfaces

    run: sudo ss -tulnp | grep ':80'

    Verify that 0.0.0.0:80 is returned. It means Nginx accepts connections on every network interface, including the public one.

  2. Open Port 80 in OCI Networking

    Oracle Cloud blocks ingress by default. You’ll need to update the Security Lists in your subnet with ingress rules for port 80. Make sure you follow the earlier steps to set up the right rules.

  3. Configure host‐level firewall rules (iptables)

    The VM’s own iptables rules might still block traffic that the OCI ingress rules allow. Quickly flush them to test:

    sudo iptables -L     # list rules
    sudo iptables -F     # flush all rules (test only)
    

    Then retry: curl -v http://<PUBLIC_IP>/

    If this works, it implies that the networking and Nginx/Gunicorn setup were already correct, but iptables was blocking the traffic.

    sudo iptables -F flushes (clears) all existing firewall rules in the default filter table. That removes any rules that were blocking incoming HTTP (port 80). With no rules in place (and the chains’ default ACCEPT policy), Linux allows the traffic, so your browser will be able to connect to Nginx on port 80.

    However, we don’t want to run an unprotected server with no firewall rules. To re-enable a safe, persistent firewall, use the steps outlined in this previous section.
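
On Ubuntu, if you saved your ruleset earlier with netfilter-persistent, you can simply re-apply it instead of leaving the VM unfiltered (assuming the netfilter-persistent service from that package is installed):

# reload the rules saved in /etc/iptables/rules.v4
sudo systemctl restart netfilter-persistent
# confirm the rules are back
sudo iptables -L -v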