NebulaGraph

Not available until the next release (main)

Introduction

The Testcontainers module for NebulaGraph, a distributed, scalable, and lightning-fast graph database. This module manages a complete NebulaGraph cluster including Meta Service, Storage Service, and Graph Service components.

Adding this module to your project dependencies

Add the NebulaGraph module to your Go dependencies:

go get github.com/testcontainers/testcontainers-go/modules/nebulagraph

Usage example

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()

container, err := nebulagraph.RunCluster(ctx,
    defaultGraphdImage, []testcontainers.ContainerCustomizer{},
    defaultStoragedImage, []testcontainers.ContainerCustomizer{},
    defaultMetadImage, []testcontainers.ContainerCustomizer{},
)
require.NoError(t, err)
t.Cleanup(func() { _ = container.Terminate(ctx) })

conn, err := container.ConnectionString(ctx)
require.NoError(t, err)
require.NotEmpty(t, conn)

// Parse the connection string to get host and port
host, port, err := net.SplitHostPort(conn)
require.NoError(t, err)

portInt, err := strconv.Atoi(port)
require.NoError(t, err)

// Create client factory
clientFactory := nebula_sirius.NewNebulaClientFactory(
    &nebula_sirius.NebulaClientConfig{
        HostAddress: nebula_sirius.HostAddress{
            Host: host,
            Port: portInt,
        },
    },
    nebula_sirius.DefaultLogger{},
    nebula_sirius.DefaultClientNameGenerator,
)

// Create client pool
nebulaClientPool := pool.NewObjectPool(
    ctx,
    clientFactory,
    &pool.ObjectPoolConfig{
        MaxIdle:  5,
        MaxTotal: 10,
    },
)

// Test client connection and basic queries
t.Run("basic-operations", func(t *testing.T) {
    // Get a client from the pool
    clientObj, err := nebulaClientPool.BorrowObject(ctx)
    require.NoError(t, err)
    defer func() {
        err := nebulaClientPool.ReturnObject(ctx, clientObj)
        require.NoError(t, err)
    }()

    client := clientObj.(*nebula_sirius.WrappedNebulaClient)
    require.NotNil(t, client)

    // Get graph client
    g, err := client.GraphClient()
    require.NoError(t, err)

    // Authenticate
    auth, err := g.Authenticate(ctx, []byte("root"), []byte("nebula"))
    require.NoError(t, err)
    require.Equal(t, nebula.ErrorCode_SUCCEEDED, auth.GetErrorCode(), "Auth error: %s", auth.GetErrorMsg())

    // Test YIELD query
    result, err := g.Execute(ctx, *auth.SessionID, []byte("YIELD 1;"))
    require.NoError(t, err)
    require.Equal(t, nebula.ErrorCode_SUCCEEDED, result.GetErrorCode(), "Query error: %s", result.GetErrorMsg())

    // Validate the YIELD result set
    resultSet, err := nebula_sirius.GenResultSet(result)
    require.NoError(t, err)

    // Inspect the returned rows
    rows := resultSet.GetRows()
    require.NotEmpty(t, rows, "Expected at least one row in YIELD output")

    row := rows[0]
    require.NotNil(t, row, "Row should not be nil")

    vals := row.GetValues()
    require.NotEmpty(t, vals, "Row values should not be empty")
    require.Equal(t, int64(1), vals[0].GetIVal())
})
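
The connection string returned by the cluster is a plain host:port pair, so the parsing step above can be exercised on its own, without a running cluster. A minimal, self-contained sketch (the helper name parseEndpoint is illustrative, not part of the module):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// parseEndpoint splits a "host:port" string, such as the value returned by
// Cluster.ConnectionString, into its host and numeric port.
func parseEndpoint(conn string) (string, int, error) {
	host, port, err := net.SplitHostPort(conn)
	if err != nil {
		return "", 0, err
	}
	p, err := strconv.Atoi(port)
	if err != nil {
		return "", 0, err
	}
	return host, p, nil
}

func main() {
	host, port, err := parseEndpoint("localhost:9669")
	if err != nil {
		panic(err)
	}
	fmt.Println(host, port)
}
```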

Module Reference

RunCluster function

  • Not available until the next release (main)

The NebulaGraph module provides a function to create a complete NebulaGraph cluster within a Docker network:

func RunCluster(ctx context.Context,
    graphdImg string, graphdCustomizers []testcontainers.ContainerCustomizer,
    storagedImg string, storagedCustomizers []testcontainers.ContainerCustomizer,
    metadImg string, metadCustomizers []testcontainers.ContainerCustomizer,
) (*Cluster, error)

This function creates a complete NebulaGraph cluster with customizable settings. It returns a Cluster struct that holds references to all of the cluster's components:

  • Meta Service (metad)
  • Storage Service (storaged)
  • Graph Service (graphd)
  • Activator (registers the storage service with the meta service)

Default Configuration

The module uses the following default configurations:

Default Images

  • Graph Service: vesoft/nebula-graphd:v3.8.0
  • Meta Service: vesoft/nebula-metad:v3.8.0
  • Storage Service: vesoft/nebula-storaged:v3.8.0

Exposed Ports

  • Graph Service: 9669 (TCP), 19669 (HTTP)
  • Meta Service: 9559 (TCP), 19559 (HTTP)
  • Storage Service: 9779 (TCP), 19779 (HTTP)

Health Checks

The module implements health checks for all services:

  • Meta Service: HTTP health check on /status endpoint (port 19559)
  • Graph Service: HTTP health check on /status endpoint (port 19669)
  • Storage Service: Log-based health check for initialization
  • Activator Service: Log-based health check and exit status for storage registration

A cluster is considered ready when:

  1. Meta service is healthy and accessible
  2. Graph service is healthy and accessible
  3. Storage service is initialized and running
  4. Storage service is successfully registered with the meta service via the activator

Container Options

When starting the NebulaGraph cluster, you can pass a slice of options for each service to configure it.

The module supports customization for each service container (Meta, Storage, Graph, and Activator) through ContainerCustomizer options. Common customizations include:

  • Custom images for each service
  • Environment variables
  • Resource limits
  • Network settings
  • Volume mounts
  • Wait strategies

The following option groups are exposed by the testcontainers package:

  • Basic Options
  • Lifecycle Options
  • Files & Mounts Options
  • Build Options
  • Logging Options
  • Image Options
  • Networking Options
  • Advanced Options
  • Experimental Options

Container Methods

The Cluster struct provides the following methods:

ConnectionString

  • Not available until the next release (main)

Returns the host:port string for connecting to the NebulaGraph graph service (graphd).

func (c *Cluster) ConnectionString(ctx context.Context) (string, error)

Terminate

  • Not available until the next release (main)

Stops and removes all containers in the NebulaGraph cluster (Meta, Storage, Graph, and Activator services) and cleans up the associated Docker network.

func (c *Cluster) Terminate(ctx context.Context) error