Ollama¶
Since v0.29.0
Introduction¶
The Testcontainers module for Ollama.
Adding this module to your project dependencies¶
Please run the following command to add the Ollama module to your Go dependencies:
go get github.com/testcontainers/testcontainers-go/modules/ollama
Usage example¶
The module allows you to run the Ollama container or the local Ollama binary.

Running the Ollama container:

ctx := context.Background()

ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}

Running the local Ollama binary:

ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.3.13", tcollama.WithUseLocal("OLLAMA_DEBUG=true"))
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}
If the local Ollama binary fails to execute, the module will fall back to the container version of Ollama.
Module Reference¶
Run function¶
- Since v0.32.0
Info
The RunContainer(ctx, opts...)
function is deprecated and will be removed in the next major release of Testcontainers for Go.
The Ollama module exposes one entrypoint function to create the Ollama container, and this function receives three parameters:
func Run(ctx context.Context, img string, opts ...testcontainers.ContainerCustomizer) (*OllamaContainer, error)
- context.Context, the Go context.
- string, the Docker image to use.
- testcontainers.ContainerCustomizer, a variadic argument for passing options.
Image¶
Use the second argument in the Run
function to set a valid Docker image.
For example: Run(context.Background(), "ollama/ollama:0.5.7").
Container Options¶
When starting the Ollama container, you can pass options in a variadic way to configure it.
Use Local¶
- Since v0.35.0
Warning
Please make sure the local Ollama binary is not already running when using the local version of the module. Ollama can be started as a system service or as part of the Ollama application, and interacting with the logs of a running Ollama process not managed by the module is not supported.
If you need to run the local Ollama binary, use the WithUseLocal option in the Run function.
This option accepts a list of environment variables as strings, which are applied to the Ollama binary when executing commands.
E.g. Run(context.Background(), "ollama/ollama:0.5.7", WithUseLocal("OLLAMA_DEBUG=true")).
All the container methods are available when using the local Ollama binary, but they will be executed locally instead of inside the container. Please consider the following differences when using the local Ollama binary:

- The local Ollama binary will create a log file in the current working directory, identified by the session ID. E.g. local-ollama-<session-id>.log. It's possible to set the log file name using the OLLAMA_LOGFILE environment variable, so if you're running Ollama yourself, from the Ollama app, or from the standalone binary, you could use this environment variable to set the same log file name.
    - For the Ollama app, the default log file resides in $HOME/.ollama/logs/server.log.
    - For the standalone binary, you should start it redirecting the logs to a file. E.g. ollama serve > /tmp/ollama.log 2>&1.
- ConnectionString returns the connection string to connect to the local Ollama binary started by the module instead of the container.
- ContainerIP returns the bound host IP, 127.0.0.1 by default.
- ContainerIPs returns the bound host IPs, ["127.0.0.1"] by default.
- CopyToContainer, CopyDirToContainer, CopyFileToContainer and CopyFileFromContainer return an error if called.
- GetLogProductionErrorChannel returns a nil channel.
- Endpoint returns the endpoint to connect to the local Ollama binary started by the module instead of the container.
- Exec passes the command to the local Ollama binary started by the module instead of inside the container. The first argument must be the command to execute, followed by its arguments; otherwise, an error is returned.
- GetContainerID returns the container ID of the local Ollama binary started by the module instead of the container, which maps to local-ollama-<session-id>.
- Host returns the bound host IP, 127.0.0.1 by default.
- Inspect returns a ContainerJSON with the state of the local Ollama binary started by the module.
- IsRunning returns true if the local Ollama binary process started by the module is running.
- Logs returns the logs from the local Ollama binary started by the module instead of the container.
- MappedPort returns the port mapping for the local Ollama binary started by the module instead of the container.
- Start starts the local Ollama binary process.
- State returns the current state of the local Ollama binary process, stopped or running.
- Stop stops the local Ollama binary process.
- Terminate calls the Stop method and then removes the log file.
The local Ollama binary creates a log file in the current working directory, and its contents are available through the container's Logs method.
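Because of this, the log output of the local process can be consumed with the same API used for containers. The following is a minimal sketch, assuming ollamaContainer was started with the WithUseLocal option as above and that the io package is imported:

rc, err := ollamaContainer.Logs(ctx)
if err != nil {
    log.Printf("failed to get logs: %s", err)
    return
}
defer rc.Close()

// When running with WithUseLocal, this reads the local log file
// (e.g. local-ollama-<session-id>.log) instead of the container logs.
content, err := io.ReadAll(rc)
if err != nil {
    log.Printf("failed to read logs: %s", err)
    return
}
fmt.Println(string(content))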
Info
The local Ollama binary will use the OLLAMA_HOST
environment variable to set the host and port to listen on.
If the environment variable is not set, it will default to localhost:0, which binds to a loopback address on an ephemeral port to avoid port conflicts.
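If you need the local process to listen on a fixed address instead of an ephemeral port, a minimal sketch is to pass OLLAMA_HOST through the WithUseLocal option; the port used here is only an example and must be free on the host:

ollamaContainer, err := tcollama.Run(
    ctx,
    "ollama/ollama:0.5.7",
    // Pin the local Ollama process to a fixed host and port.
    tcollama.WithUseLocal("OLLAMA_HOST=localhost:11434", "OLLAMA_DEBUG=true"),
)
if err != nil {
    log.Printf("failed to start local Ollama: %s", err)
    return
}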
The following options are exposed by the testcontainers
package.
Basic Options¶
- WithExposedPorts: Since v0.37.0
- WithEnv: Since v0.29.0
- WithWaitStrategy: Since v0.20.0
- WithAdditionalWaitStrategy: Not available until the next release (main)
- WithWaitStrategyAndDeadline: Since v0.20.0
- WithAdditionalWaitStrategyAndDeadline: Not available until the next release (main)
- WithEntrypoint: Since v0.37.0
- WithEntrypointArgs: Since v0.37.0
- WithCmd: Since v0.37.0
- WithCmdArgs: Since v0.37.0
- WithLabels: Since v0.37.0
Lifecycle Options¶
- WithLifecycleHooks: Not available until the next release (main)
- WithAdditionalLifecycleHooks: Not available until the next release (main)
- WithStartupCommand: Since v0.25.0
- WithAfterReadyCommand: Since v0.28.0
Files & Mounts Options¶
- WithFiles: Since v0.37.0
- WithMounts: Since v0.37.0
- WithTmpfs: Since v0.37.0
- WithImageMount: Since v0.37.0
Build Options¶
- WithDockerfile: Since v0.37.0
Logging Options¶
- WithLogConsumers: Since v0.28.0
- WithLogConsumerConfig: Not available until the next release (main)
- WithLogger: Since v0.29.0
Image Options¶
- WithAlwaysPull: Not available until the next release (main)
- WithImageSubstitutors: Since v0.26.0
- WithImagePlatform: Not available until the next release (main)
Networking Options¶
- WithNetwork: Since v0.27.0
- WithNetworkByName: Not available until the next release (main)
- WithBridgeNetwork: Not available until the next release (main)
- WithNewNetwork: Since v0.27.0
Advanced Options¶
- WithHostPortAccess: Since v0.31.0
- WithConfigModifier: Since v0.20.0
- WithHostConfigModifier: Since v0.20.0
- WithEndpointSettingsModifier: Since v0.20.0
- CustomizeRequest: Since v0.20.0
- WithName: Not available until the next release (main)
- WithNoStart: Not available until the next release (main)
Experimental Options¶
- WithReuseByName: Since v0.37.0
Container Methods¶
The Ollama container exposes the following methods:
ConnectionString¶
- Since v0.29.0
This method returns the connection string to connect to the Ollama container, using the default 11434
port.
connectionStr, err := ctr.ConnectionString(ctx)
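The returned value is an HTTP base URL (scheme, host and port), so it can be used directly with an HTTP client. A minimal sketch, assuming the /api/version endpoint of the Ollama HTTP API and the net/http and io packages:

resp, err := http.Get(connectionStr + "/api/version")
if err != nil {
    log.Printf("failed to query Ollama: %s", err)
    return
}
defer resp.Body.Close()

// The body is a small JSON document, e.g. {"version":"0.5.7"}.
body, err := io.ReadAll(resp.Body)
if err != nil {
    log.Printf("failed to read response: %s", err)
    return
}
fmt.Println(string(body))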
Commit¶
- Since v0.29.0
This method commits the container to a new image and returns the new image ID. It should be used after a model has been pulled and loaded into the container, in order to create a new image that contains the model, which can then be used as the base image for new containers. That speeds up the startup of subsequent containers.
// Defining the target image name based on the default image and a random string.
// Users can change the way this is generated, but it should be unique.
targetImage := fmt.Sprintf("%s-%s", ollama.DefaultOllamaImage, strings.ToLower(uuid.New().String()[:4]))
err := ctr.Commit(context.Background(), targetImage)
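The committed image can then be used as the base image for new containers, avoiding another model pull. A minimal sketch, assuming targetImage was committed as above:

// Start a new container from the committed image: the model pulled
// in the previous container is already baked into this image.
newOllamaContainer, err := tcollama.Run(ctx, targetImage)
defer func() {
    if err := testcontainers.TerminateContainer(newOllamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}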
Examples¶
Loading Models¶
It's possible to initialise the Ollama container with a specific model passed as a parameter. The supported models are described in the Ollama project: https://github.com/ollama/ollama?tab=readme-ov-file and https://ollama.com/library.
Warning
When you use one of these models, the Ollama container will pull and load the model, so it can take longer to start.
The following examples use the llama2 model to connect to the Ollama container, first over plain HTTP and then with Langchain.
Using HTTP:

ctx := context.Background()

ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}

model := "llama2"

_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "pull", model})
if err != nil {
    log.Printf("failed to pull model %s: %s", model, err)
    return
}

_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "run", model})
if err != nil {
    log.Printf("failed to run model %s: %s", model, err)
    return
}

connectionStr, err := ollamaContainer.ConnectionString(ctx)
if err != nil {
    log.Printf("failed to get connection string: %s", err)
    return
}

httpClient := &http.Client{}

// generate a response
payload := `{
    "model": "llama2",
    "prompt": "Why is the sky blue?"
}`

req, err := http.NewRequest(http.MethodPost, connectionStr+"/api/generate", strings.NewReader(payload))
if err != nil {
    log.Printf("failed to create request: %s", err)
    return
}

resp, err := httpClient.Do(req)
if err != nil {
    log.Printf("failed to get response: %s", err)
    return
}
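By default, the /api/generate endpoint streams its result as newline-delimited JSON objects. A minimal sketch of consuming that stream, assuming the response and done field names from the Ollama HTTP API and the bufio and encoding/json packages:

defer resp.Body.Close()

scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
    // Each line is a JSON object carrying a partial response and a done flag.
    var chunk struct {
        Response string `json:"response"`
        Done     bool   `json:"done"`
    }
    if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
        log.Printf("failed to decode response chunk: %s", err)
        return
    }
    fmt.Print(chunk.Response)
    if chunk.Done {
        break
    }
}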
Using Langchain:

ctx := context.Background()

ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}

model := "llama2"

_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "pull", model})
if err != nil {
    log.Printf("failed to pull model %s: %s", model, err)
    return
}

_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "run", model})
if err != nil {
    log.Printf("failed to run model %s: %s", model, err)
    return
}

connectionStr, err := ollamaContainer.ConnectionString(ctx)
if err != nil {
    log.Printf("failed to get connection string: %s", err)
    return
}

var llm *langchainollama.LLM
if llm, err = langchainollama.New(
    langchainollama.WithModel(model),
    langchainollama.WithServerURL(connectionStr),
); err != nil {
    log.Printf("failed to create langchain ollama: %s", err)
    return
}

completion, err := llm.Call(
    context.Background(),
    "how can Testcontainers help with testing?",
    llms.WithSeed(42),         // fix the seed to make the completion reproducible
    llms.WithTemperature(0.0), // the lower the temperature, the more deterministic the completion
)
if err != nil {
    log.Printf("failed to call the model: %s", err)
    return
}

words := []string{
    "easy", "isolation", "consistency",
}

lwCompletion := strings.ToLower(completion)

for _, word := range words {
    if strings.Contains(lwCompletion, word) {
        fmt.Println(true)
    }
}