Running the Application

To run the application, ensure the database is running, then start the Flask server locally.

1. Start the Server

python run.py

The application will be available at http://localhost:5011 (or the port defined in your .env).
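
To confirm the server is up, you can send it a quick request (adjust the port if you override it in your .env):

# HEAD request against the default port; any HTTP response means Flask is serving
curl -I http://localhost:5011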

2. Data Migration (Optional)

If you need to migrate legacy JSON data to the new database, run:

python migrate_data.py
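
As a precaution, you may want to back up the legacy JSON before migrating. The path below is an assumption; point it at wherever your legacy data actually lives:

# Path is an assumption; substitute your actual legacy data file
cp data/legacy.json data/legacy.json.bak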

3. External Services

Ensure your LLM server (Ollama/LMStudio) and optional TTS server are running as described below.

LLM Server (Ollama / LMStudio / etc.)

Ensure your LLM server is running and accessible. The application communicates with all LLM providers (including Ollama) through the OpenAI-compatible API.

If using Ollama:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
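
Once the container is running, pull a model and smoke-test the OpenAI-compatible endpoint. The model name here is only an example; use whichever model your configuration points at:

# Pull an example model into the running container
docker exec -it ollama ollama pull llama3

# Verify the OpenAI-compatible chat endpoint responds
curl http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'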

Coqui TTS Server (Optional, Experimental)

This service provides high-quality, human-like text-to-speech. The repository includes a Dockerfile for Coqui TTS in the coqui_tts directory.

First, build the Docker image:

cd coqui_tts
sudo docker build -t coqui-chanakya-tts .
cd ..

Then, run the Docker container:

sudo docker run -d -p 5001:5002 --gpus all --restart unless-stopped --name coqui-tts-server coqui-chanakya-tts

This starts the TTS server on host port 5001, mapped to port 5002 inside the container.
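
To verify the container is serving audio, you can request a short synthesis. This assumes the image runs the stock Coqui tts-server, which exposes an /api/tts endpoint; adjust if the Dockerfile configures it differently:

# Request a short phrase as WAV audio (endpoint assumes the stock Coqui tts-server)
curl "http://localhost:5001/api/tts?text=Hello%20world" -o test.wav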