Mirror of https://github.com/docker/hello-genai (synced 2026-04-05 19:44:32 +00:00)
hello-genai
A simple chatbot web application, implemented in Go, Python, Node.js, and Rust, that connects to a local LLM service (llama.cpp) to provide AI-powered responses.
Environment Variables
The application uses the following environment variables defined in the .env file:
- LLM_BASE_URL: The base URL of the LLM API
- LLM_MODEL_NAME: The model name to use
To change these settings, simply edit the .env file in the root directory of the project.
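For reference, a minimal `.env` might look like the following. Both values are placeholders, not defaults shipped with the project; point them at whatever your local LLM server actually exposes:

```
# Base URL of the local LLM's OpenAI-compatible API (placeholder value)
LLM_BASE_URL=http://localhost:8000/v1
# Model name to request from that server (placeholder value)
LLM_MODEL_NAME=llama3.2
```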
Quick Start
1. Clone the repository:

   ```sh
   git clone https://github.com/docker/hello-genai
   cd hello-genai
   ```

2. Start the application using Docker Compose:

   ```sh
   docker compose up
   ```

3. Open your browser and visit the following links:

   - http://localhost:8080 for the GenAI Application in Go
   - http://localhost:8081 for the GenAI Application in Python
   - http://localhost:8082 for the GenAI Application in Node.js
   - http://localhost:8083 for the GenAI Application in Rust
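`docker compose up` builds and starts one container per language implementation, each published on its own host port. The repository's own `docker-compose.yml` is authoritative; purely to illustrate the shape (the service names, build contexts, and shared `.env` below are assumptions, not the actual file), it is roughly:

```yaml
services:
  go-genai:
    build: ./go-genai
    env_file: .env
    ports:
      - "8080:8080"
  py-genai:
    build: ./py-genai
    env_file: .env
    ports:
      - "8081:8081"
  node-genai:
    build: ./node-genai
    env_file: .env
    ports:
      - "8082:8082"
  rust-genai:
    build: ./rust-genai
    env_file: .env
    ports:
      - "8083:8083"
```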
Requirements
- macOS (recent version)
- Either:
  - Docker and Docker Compose (preferred)
  - Go 1.21 or later
- Local LLM server
If you're using a different LLM server configuration, you may need to modify the .env file.