
I am having a lot of issues configuring my Dockerized Django + PostgreSQL application to work on a Kubernetes cluster, which I have created using Google Cloud Platform.

How do I specify DATABASES.default.HOST in my settings.py file when I deploy a PostgreSQL image from Docker Hub, together with an image of my Django web application, to the Kubernetes cluster?

Here is how I want my app to work. When I run the application locally, I want to use an SQLite database. To do that, I have made the following changes in my settings.py file:

if os.getenv('DB') is None:
    print('Development - Using "SQLITE3" Database')
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
else:
    print('Production - Using "POSTGRESQL" Database')
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'agent_technologies_db',
            'USER': 'stefan_radonjic',
            'PASSWORD': 'cepajecar995',
            'HOST': '',  # ??? - this is what I don't know how to set
            'PORT': '',  # ???
        }
    }

The main idea is that when I deploy the application to the Kubernetes cluster, a Docker container (my Dockerized Django application) will run inside a Kubernetes Pod object. When creating the container I also create an environment variable DB and set it to true, so when I deploy the application it uses the PostgreSQL database.

NOTE: If anyone has any other suggestions how I should separate Local from Production development, please leave a comment.
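One common way to separate local from production configuration (a sketch, not from the original post: the `DATABASE_URL` variable name and the fallback behavior are assumptions here) is to read a single connection URL from the environment and parse it with the standard library, falling back to SQLite when it is absent:

```python
import os
from urllib.parse import urlparse

def database_config(url=None):
    """Build a Django DATABASES['default'] dict from a URL like
    postgres://user:password@host:5432/dbname; fall back to SQLite."""
    if url is None:
        url = os.getenv('DATABASE_URL')
    if not url:
        # No URL provided: local development, use SQLite.
        return {'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'db.sqlite3'}
    parsed = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': parsed.path.lstrip('/'),
        'USER': parsed.username,
        'PASSWORD': parsed.password,
        'HOST': parsed.hostname,
        'PORT': str(parsed.port or 5432),
    }
```

In settings.py you would then write `DATABASES = {'default': database_config()}`, and in production set a single `DATABASE_URL` environment variable instead of a `DB` flag.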

Here is what my Dockerfile looks like:

FROM python:3.6

ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies 
RUN pip install -r requirements.txt

EXPOSE 8000

And here is what my docker-compose file looks like:

version: '3'
services:
  web:
    build: .
    command: python src/manage.py runserver --settings=agents.config.settings
    volumes: 
      - .:/agent-technologies
    ports: 
      - "8000:8000"
    environment:
      - DB=true

When running the application locally it works perfectly fine. But when I try to deploy it to the Kubernetes cluster, the Pod objects which run my application containers crash in an infinite loop, because I don't know how to specify DATABASES.default.HOST when running the app in the production environment. And of course the command specified in the docker-compose file (command: python src/manage.py runserver --settings=agents.config.settings) probably raises an exception and makes the Pods crash in an infinite loop.

NOTE: I have already created all necessary configuration files for Kubernetes ( Deployment definitions / Services / Secret / Volume files ). Here is my github link: https://github.com/StefanCepa/agent-technologies-bachelor

Any help would be appreciated! Thank you all in advance!

2 Answers


You will have to create a Service (ClusterIP) for your postgres Pod to make it accessible. When you create a Service, you can reach it via <service name>.default:<port>. However, running postgres (or any database) as a plain Pod is dangerous (you will lose the data as soon as you or Kubernetes re-creates the Pod or scales it up). You can use a managed database service, or install it properly using StatefulSets.

Once you have the address, you can put it in an environment variable and read it from your settings.py.

EDIT: Put this in your deployment yaml (example):

env:
- name: POSTGRES_HOST
  value: "postgres-service.default"
- name: POSTGRES_PORT
  value: "5432"
- name: DB
  value: "DB"

And in your settings.py

'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': os.getenv('POSTGRES_HOST'),
'PORT': os.getenv('POSTGRES_PORT'),
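Putting the two snippets together, the production branch of settings.py could look like this (a sketch; the fallback defaults are illustrative assumptions, not values from the answer):

```python
import os

# Read the connection details injected by the Deployment's env section;
# the second argument to os.getenv is an illustrative fallback default.
POSTGRES_HOST = os.getenv('POSTGRES_HOST', 'postgres-service.default')
POSTGRES_PORT = os.getenv('POSTGRES_PORT', '5432')

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'agent_technologies_db',
        'USER': 'stefan_radonjic',
        'PASSWORD': 'cepajecar995',
        'HOST': POSTGRES_HOST,
        'PORT': POSTGRES_PORT,
    }
}
```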

7 Comments

I have provided my github link in the question. I have already created a service, and I am storing my PostgreSQL data on a persistent disk on each of the nodes in the cluster. What I need is info on how to specify DATABASES.default.HOST when my app is running in the cluster.
Added some snippets. Try this out and let know if it works.
Thank you so much for the help! Would you please explain the point of the environment variable DB and the POSTGRES_HOST environment variable? I mean, when I specify "postgres-service.default", does that mean the environment variable will look for the postgres-service object in the "default" namespace? Once again, thank you so much!
Yes - it will look for postgres in the default namespace and also the service will load-balance if you have more than one replica of the db. There are many ways to "feed" it in (or access it) actually - but this is easy and simple.
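The short name resolves through Kubernetes cluster DNS; a sketch of how the fully qualified name is composed (the `svc.cluster.local` suffix is the standard default cluster domain, assumed here):

```python
def service_fqdn(service, namespace='default', cluster_domain='svc.cluster.local'):
    """Compose the DNS name Kubernetes cluster DNS assigns to a Service.
    '<service>.<namespace>' is a shorthand that resolves to this full name."""
    return f'{service}.{namespace}.{cluster_domain}'

# e.g. service_fqdn('postgres-service')
# -> 'postgres-service.default.svc.cluster.local'
```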
I understand. And is the environment variable DB the name of the postgres container, or?

Below are my findings:

  1. The postgres instance depends on a persistent volume. I see the code for the PersistentVolumeClaim, but not the PersistentVolume itself, so I had to create this first:
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: task-pv-volume
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data
  persistentVolumeReclaimPolicy: Retain
  2. There is a typo in agent-technologies-bachelor/agents/config/kubernetes/postgres/secrets-definition.yml: the key password is misspelled.
data:
  user: c3RlZmFuX3JhZG9uamlj #stefan_radonjic
  passowrd: sdfsdfsd #cepajecar995

Because of this, the postgres instance was not able to start up. I found this by looking at the events, by running kubectl describe pods.
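As a side note, Secret values under `data:` must be base64-encoded. A quick way to produce the expected encodings for the credentials from the question (a sketch; run it anywhere Python is available):

```python
import base64

# Encode the credentials from the question the way a Kubernetes Secret's
# data fields expect them (base64 of the raw bytes).
user_b64 = base64.b64encode(b'stefan_radonjic').decode()
password_b64 = base64.b64encode(b'cepajecar995').decode()

print(user_b64)      # c3RlZmFuX3JhZG9uamlj  (matches the 'user' key above)
print(password_b64)  # Y2VwYWplY2FyOTk1
```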

  3. The Docker image didn't have a command to execute the application. As a result, if I ran your Docker image cepa995/agents_web, it would simply exit without running any application; this is why the Django application was not running. To fix this, I modified the Dockerfile to add a CMD instruction at the end. I see you put this command in the docker-compose file, but it has to be inside the Dockerfile itself. The Dockerfile looks like this now:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies
RUN pip install -r src/requirements.txt
EXPOSE 8000
CMD python src/manage.py runserver 0.0.0.0:8000 --settings=agents.config.settings

2 Comments

Thank you so much for your help! I really appreciate it! Would you please let me know a couple of things. First, did you run my application on Minikube or on a Google Cloud Kubernetes cluster? I am asking because I read in the documentation that a VolumeClaim can act as a Volume as well, and since on the Google Cloud cluster I already have persistent disks on the cluster nodes, I thought I did not need a PersistentVolume object. Second, do I need the docker-compose file in the end? Or do I just build my Dockerfile, push the image to my repo, and that's it? Once again, I appreciate all your help!
I tried this on play-with-k8s.com, where I had to create a separate PV; you probably don't need that in your case. You do not need a docker-compose file if you have successfully built the image and pushed it to your repository, but make sure it has the CMD instruction too.
