Tag: docker

Building a Local GenAI Service with Ollama, Mistral, and Go

Running Large Language Models (LLMs) locally gives you data privacy, freedom from per-request API fees and network latency, and full control over your inference environment. This guide demonstrates how to containerize Ollama, automate the Mistral model download, and expose it through an Nginx reverse proxy to…
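As a rough sketch of how those pieces could fit together, here is a minimal Docker Compose file pairing the official ollama/ollama image with Nginx. The service names, host port, volume name, and the sleep-based readiness wait are illustrative assumptions; the post's own setup may differ.

```yaml
# docker-compose.yml — illustrative sketch, not the post's exact configuration
services:
  ollama:
    image: ollama/ollama                 # official image; serves the API on 11434
    volumes:
      - ollama-models:/root/.ollama      # persist pulled models across restarts
    # Start the server, crudely wait for it to come up, pull Mistral,
    # then keep the server process in the foreground.
    entrypoint: ["/bin/sh", "-c", "ollama serve & sleep 5 && ollama pull mistral && wait"]

  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"                        # host port 8080 is an assumption
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - ollama

volumes:
  ollama-models:
```

The matching nginx.conf only needs to forward traffic to the Ollama service by its Compose DNS name:

```nginx
# nginx.conf — proxy everything to the Ollama container
server {
    listen 80;
    location / {
        proxy_pass http://ollama:11434;
    }
}
```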

AWS Credentials in a Dockerfile

Using AWS credentials from your local machine in a Docker build requires careful handling to keep them secure. Here’s a step-by-step guide on how to do it. This approach avoids embedding the credentials in the image, but it makes them…
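The excerpt cuts off before naming the post's exact technique, so as a hedged illustration, here is one widely used way to do this: BuildKit secret mounts, which expose the credentials to a single RUN step without writing them into any image layer. The base image, bucket, file paths, and secret id below are hypothetical.

```dockerfile
# syntax=docker/dockerfile:1
# Illustrative sketch using BuildKit secret mounts; the post's truncated
# excerpt doesn't confirm this is its approach.
FROM amazon/aws-cli:latest
# The credentials file is mounted only for the duration of this RUN step,
# so it never lands in a layer or shows up in `docker history`.
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://example-bucket/asset.tar.gz /tmp/asset.tar.gz
```

At build time, point the secret at your local credentials file:

```sh
# BuildKit is the default builder on recent Docker versions
docker build --secret id=aws,src=$HOME/.aws/credentials -t my-image .
```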