Category: Docker

Building a Local GenAI Service with Ollama, Mistral, and Go

Running Large Language Models (LLMs) locally provides data privacy, no network latency or per-request API costs, and full control over your inference environment. This guide demonstrates how to containerize Ollama, automate the Mistral model download, and expose it through an Nginx reverse proxy to…
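As a rough sketch of what such a setup can look like, the Compose file below runs Ollama alongside an Nginx proxy. The service names, port mapping, pull automation, and nginx.conf path are illustrative assumptions, not the full post's exact file.

```yaml
# docker-compose.yml -- illustrative sketch, names and ports are assumptions
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama_models:/root/.ollama        # persist downloaded models
    entrypoint: ["/bin/sh", "-c"]
    # start the server, then pull the Mistral model once it is listening
    command: ["ollama serve & sleep 5 && ollama pull mistral && wait"]

  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"                          # only the proxy is published
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - ollama

volumes:
  ollama_models:
```

The referenced nginx.conf would then contain a server block forwarding requests with `proxy_pass http://ollama:11434;`, Ollama's default API port, so clients only ever talk to the proxy.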

Run AWS ECR Image via Docker Compose and Environment Variables

Running a container from an AWS ECR image with Docker Compose and environment variables involves creating a docker-compose.yml file that references the image and defines the required variables. Here’s how you can do it: replace placeholders like <account-id>, <region>, <repository-name>, <tag>, VAR_NAME1,…
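A minimal docker-compose.yml along those lines might look like the sketch below; the service name `app` and the variable values are placeholders of my own, mirroring the angle-bracket placeholders the post refers to.

```yaml
# docker-compose.yml -- sketch using the same style of placeholders
services:
  app:
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>
    environment:
      - VAR_NAME1=value1
      - VAR_NAME2=value2
    # or load the variables from a file instead:
    # env_file:
    #   - .env
```

Before running `docker compose up -d`, the local Docker client needs to be authenticated against ECR, typically with `aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com`.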

AWS Credentials in a Dockerfile

Using AWS credentials from your local machine in a Docker build process requires careful handling to ensure security. Here’s a step-by-step guide on how to do it. This approach avoids embedding the credentials in the image, but it makes them…
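One common way to do this, which may or may not be the exact approach the post walks through, is a BuildKit secret mount: the credentials file is visible only during the RUN step and never lands in an image layer. The base image and S3 path below are hypothetical examples.

```dockerfile
# syntax=docker/dockerfile:1
FROM amazon/aws-cli:latest AS build

# The secret is mounted only for this instruction; it is not written into
# any image layer. <bucket-name>/<object-key> are hypothetical placeholders.
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://<bucket-name>/<object-key> /tmp/artifact
```

The build is then invoked with the local credentials passed in as a secret, for example `docker build --secret id=aws,src=$HOME/.aws/credentials -t my-image .` (BuildKit must be enabled).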