CCM is a small-scale Configuration Management system designed to meet users where they are - enabling experimentation, R&D, and exploration without the overhead of full-system management while still following sound Configuration Management principles.
We focus on great UX, immediate feedback, and interactive use with minimal friction.
Small and Focused
Embraces the popular package-config-service style of Configuration Management.
Focused on the needs of a single application or unit of software.
Minimal design for easy adoption, experimentation, and integration with tools like LLMs.
Loves Snowflakes
Brings true Configuration Management principles to ad-hoc systems, enabling use cases where traditional CM fails.
Use management bundles in an à la carte fashion to quickly bring experimental infrastructure to a known state.
External Data
Rich, Hierarchical Data and Facts accessible in the command line, scripts, and manifests.
Democratizes and opens the data that drives systems management to the rest of your stack.
No Dependencies
Zero-dependency binaries, statically linked and fast.
Easy deployment in any environment without the overhead cost of traditional CM.
Embraces a Just Works philosophy with no, or minimal, configuration.
Optional Networking
Optional Network infrastructure needed only when your needs expand.
Choose from simple webservers to clustered, reliable, Object and Key-Value stores using technology you already know.
Everywhere
Great at the CLI, shell scripts, YAML manifests, Choria Agents, and Go applications.
Scales from a single IoT Raspberry Pi 3 to millions of nodes.
Integrates easily with other software via SDK or Unix-like APIs.
Shell example
Here we do a package-config-service style deployment using a shell script. The script is safe to run multiple times as the CCM commands are all idempotent.
When run, this will create a session in a temporary directory and manage the resources. If the file resource changes after initial deployment, the service will restart.
Dynamic data is supported on the CLI: ccm reads .env and .hiera files and feeds them into the runtime data, so even shell scripts gain easy access to rich data.
Here we define the inputs in data and the Hiera hierarchy along with OS-specific overrides. The data is referenced in the manifest using the {{ Data.package_name }} syntax.
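As a sketch of the data side, a .hiera file defining the inputs with OS-specific overrides might look like this. The fact path facts.os and the os:debian override key are assumptions for illustration, not documented names:

```yaml
hierarchy:
  order:
    - os:${ lookup('facts.os') }   # fact name is a placeholder
  merge: first
data:
  package_name: httpd
overrides:
  os:debian:
    package_name: apache2
```

The manifest then references the resolved value as {{ Data.package_name }}.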
Resources
Resources describe the desired state of your infrastructure. Each resource represents something to manage and is backed by a provider that implements platform-specific management logic.
Every resource has a type, a unique name, and resource-specific properties.
Resource types
Apply: Compose manifests from smaller reusable manifests
Archive: Download, extract and copy files from tar.gz and zip archives
Exec: Execute commands to bring the system into the desired state
File: Manage files, directories, and their contents
Package: Manage system packages
Scaffold: Render files from source template directories
Service: Manage system services
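As an illustration, conditional management with if and unless might be written like this. This is a sketch: the fact paths facts.kernel and facts.virtualization are placeholders, not documented fact names:

```yaml
- package:
    - zsh:
        ensure: present
        # fact names below are illustrative placeholders
        if: ${ lookup('facts.kernel') == 'linux' }
        unless: ${ lookup('facts.virtualization') == 'docker' }
```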
This installs zsh on all Linux machines unless they are running inside a Docker container.
The following table shows how the two conditions interact:
| if | unless | Resource Managed? |
|---|---|---|
| (not set) | (not set) | Yes |
| true | (not set) | Yes |
| false | (not set) | No |
| (not set) | true | No |
| (not set) | false | Yes |
| true | true | No |
| true | false | Yes |
| false | true | No |
| false | false | No |
Apply
The apply resource resolves and executes a child manifest within the parent manifest’s execution context. Child manifests share the parent’s session, enabling resource ordering and subscribe relationships across manifest boundaries.
Note
The apply resource is manifest-only. It has no CLI or API equivalent. Only local file paths are supported; URL-based manifest sources may be added in future.
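For example, composing two child manifests where the second receives extra data. The paths and data keys here are placeholders:

```yaml
- apply:
    - base/manifest.yaml: {}
    - app/manifest.yaml:
        data:
          environment: production
```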
This executes two child manifests in order. The second receives additional data that its templates can reference.
Ensure values
| Value | Description |
|---|---|
| present | Resolve and execute the child manifest |
Only present is valid. The ensure property defaults to present if not specified.
Properties
| Property | Description |
|---|---|
| name | File path to the child manifest (relative to parent manifest directory) |
| noop | Execute child in noop mode (can only strengthen, never weaken) |
| health_check_only | Execute child in health check mode (can only strengthen, never weaken) |
| allow_apply | Allow the child manifest to contain its own apply resources (default: true) |
| data | Data map passed to the child manifest, merged with resolved data |
Path resolution
The name property specifies a file path relative to the directory containing the parent manifest. Absolute paths are used as-is.
```yaml
# Given /opt/ccm/manifest.yaml contains:
- apply:
    - sub/manifest.yaml: {}
# sub/manifest.yaml resolves to /opt/ccm/sub/manifest.yaml
```
For nested apply resources, each level resolves paths relative to its own manifest’s directory. After a child manifest completes, the working directory reverts to the parent’s directory.
Noop and health check modes propagate downward through apply resources. A child can enter noop or health check mode when its parent has not, but a child can never weaken a mode that the parent has active.
| Parent mode | Resource property | Effective child mode |
|---|---|---|
| noop | noop: false | noop |
| noop | noop: true | noop |
| normal | noop: true | noop |
| normal | noop: false | normal |
Health check mode follows the same pattern. If either the parent or the resource enables health check mode, the child executes in health check mode.
Running ccm apply manifest.yaml --noop forces noop on all child manifests regardless of their noop property.
Passing data to child manifests
The data property provides key-value data to the child manifest. This data merges into the child’s resolved data after its own Hiera resolution completes.
```yaml
- apply:
    - app/manifest.yaml:
        data:
          port: 8080
          log_level: info
```
Templates in the child manifest can reference these values using standard template syntax, such as {{ lookup("data.port") }}. External data from CLI --data flags persists through the merge and takes precedence.
Restricting nested apply resources
The allow_apply property controls whether a child manifest may contain its own apply resources. Setting allow_apply to false limits the trust boundary when including manifests authored by others.
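A sketch combining a restricted child with an exec that reacts to its outcome. The manifest path, script, and the apply#<name> subscribe target format are assumptions for illustration:

```yaml
- apply:
    - third-party/manifest.yaml:
        allow_apply: false   # child may not contain its own apply resources
- exec:
    - notify-deploy:
        command: /usr/local/bin/notify-deploy.sh   # placeholder script
        refreshonly: true
        subscribe:
          - apply#third-party/manifest.yaml        # assumed subscribe target syntax
```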
The exec resource runs only if the child manifest made changes.
Shared session
The child manifest executes within the parent’s session. Resource events from child manifests are recorded in the same session as the parent, and subscribe relationships work across manifest boundaries.
If any child resource fails, the apply resource reports the failure. If the enclosing manifest sets fail_on_error: true, execution of subsequent resources in that manifest stops at that point.
Archive
The archive resource downloads and extracts archives from HTTP/HTTPS URLs. It supports tar.gz, tgz, tar, and zip formats.
Note
The archive file path (name) must have the same archive type extension as the URL. For example, if the URL ends in .tar.gz, the name must also end in .tar.gz.
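A typical archive resource might look like this; the URL and file paths are placeholders:

```yaml
- archive:
    - /tmp/app.tar.gz:
        url: https://example.net/releases/app.tar.gz   # placeholder URL
        extract_parent: /opt/app
        creates: /opt/app/bin/app
        cleanup: true
```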
This downloads the archive, extracts it to /opt/app, and removes the archive file after extraction. Future runs skip the download if /opt/app/bin/app exists.
Ensure values
| Value | Description |
|---|---|
| present | The archive must be downloaded |
| absent | The archive file must not exist |
Properties
| Property | Description |
|---|---|
| name | Absolute path where the archive will be saved |
| url | HTTP/HTTPS URL to download the archive from |
| checksum | Expected SHA256 checksum of the downloaded file |
| extract_parent | Directory to extract the archive contents into |
| creates | File path; if this file exists, the archive is not downloaded or extracted |
| cleanup | Remove the archive file after successful extraction (requires extract_parent and creates) |
| owner | Owner of the downloaded archive file (username) |
| group | Group of the downloaded archive file (group name) |
| username | Username for HTTP Basic Authentication |
| password | Password for HTTP Basic Authentication |
| headers | Additional HTTP headers to send with the request (map of header name to value) |
| provider | Force a specific provider (http only) |
Authentication
The archive resource supports two authentication methods: HTTP Basic Authentication via the username and password properties, and custom request headers (for example, token-based headers) via the headers property.
Idempotency
The archive resource is idempotent through multiple mechanisms:
Checksum verification: If a checksum is provided and the existing file matches, no download occurs.
Creates file: If creates is specified and that file exists, neither download nor extraction occurs.
File existence: If the archive file exists with matching checksum and owner/group, no changes are made.
For best idempotency, always specify either checksum or creates (or both).
Cleanup behavior
When cleanup: true is set:
The archive file is deleted after successful extraction
The extract_parent property is required
The creates property is required to track extraction state across runs
Supported archive formats
| Extension | Extraction Tool |
|---|---|
| .tar.gz, .tgz | tar -xzf |
| .tar | tar -xf |
| .zip | unzip |
Note
The extraction tools (tar, unzip) must be available in the system PATH.
Exec
The exec resource executes commands to bring the system into the desired state. It is idempotent when used with the creates, onlyif, or unless properties, or refreshonly mode.
Warning
Specify commands with their full path, or use the path property to set the search path.
The shell provider passes the entire command string to /bin/sh -c, so shell quoting rules apply. The posix provider parses arguments using shell-like quoting but does not invoke a shell.
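To make the distinction concrete, a command using shell operators needs the shell provider. A sketch; the command and marker file are placeholders:

```yaml
- exec:
    - refresh-index:
        # && is interpreted by /bin/sh; the posix provider does not invoke a shell
        command: /usr/bin/apt-get update && /usr/bin/touch /var/tmp/index-refreshed
        provider: shell
        creates: /var/tmp/index-refreshed
```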
Properties
| Property | Description |
|---|---|
| name | The command to execute (used as the resource identifier) |
| command | Alternative command to run instead of name |
| cwd | Working directory for command execution |
| environment (array) | Environment variables in KEY=VALUE format |
| path | Search path for executables as a colon-separated list (e.g., /usr/bin:/bin) |
| returns (array) | Exit codes indicating success (default: [0]) |
| timeout | Maximum execution time (e.g., 30s, 5m); command is killed if exceeded |
| creates | File path; if this file exists, the command does not run |
| onlyif | Guard command; the exec runs only if this command exits 0 |
| unless | Guard command; the exec runs only if this command exits non-zero |
| refreshonly (boolean) | Only run when notified by a subscribed resource |
| subscribe (array) | Resources to subscribe to for refresh notifications (type#name or type#alias) |
| logoutput (boolean) | Log the command output |
| provider | Force a specific provider (posix or shell) |
Guard commands
The onlyif and unless properties act as guard commands that control whether the exec runs. They are evaluated before execution and share the exec’s cwd, environment, and path settings. Guard commands run even in noop mode to accurately report what would happen.
When creates is also set, it takes precedence: if the creates file exists, the command is skipped regardless of guard results. Subscribe-triggered refreshes override all guards.
```yaml
- exec:
    - install-app:
        command: /usr/local/bin/install-app.sh
        # Runs only if the package file exists
        onlyif: test -f /tmp/app-package.tar.gz
    - configure-firewall:
        command: /usr/sbin/iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
        # Runs only if the iptables rule does not already exist
        unless: /usr/sbin/iptables -C INPUT -p tcp --dport 8080 -j ACCEPT
```

The same guards on the CLI:

```shell
# Runs only if the package file exists
ccm ensure exec /usr/local/bin/install-app.sh --exec-if "test -f /tmp/app-package.tar.gz"

# Runs only if the iptables rule does not already exist
ccm ensure exec "/usr/sbin/iptables -A INPUT -p tcp --dport 8080 -j ACCEPT" \
  --exec-unless "/usr/sbin/iptables -C INPUT -p tcp --dport 8080 -j ACCEPT"
```
File
The file resource manages files and directories.
This creates /etc/motd with the given content, parsed through the template engine, and sets ownership and permissions.
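For instance, a sketch of such a resource; the fact path used in the content is a placeholder:

```yaml
- file:
    - /etc/motd:
        ensure: present
        # fact path is an illustrative placeholder
        content: "Welcome to {{ lookup('facts.hostname') }}\n"
        owner: root
        group: root
        mode: "0644"
```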
Ensure values
| Value | Description |
|---|---|
| present | The file must exist |
| absent | The file must not exist |
| directory | The path must be a directory |
Properties
| Property | Description |
|---|---|
| name | Absolute path to the file |
| ensure | Desired state (present, absent, directory) |
| content | File contents, parsed through the template engine |
| source | Copy contents from another local file |
| owner | File owner (username) |
| group | File group (group name) |
| mode | File permissions in octal notation (e.g., "0644"). For directories, the execute bit is added automatically to any permission triad that has read or write bits (e.g., "0644" becomes "0755") |
| provider | Force a specific provider (posix only) |
Package
The package resource manages system packages. Specify whether the package should be present, absent, at the latest version, or at a specific version.
Warning
Use real package names, not virtual names, aliases, or group names.
The APT provider preserves existing configuration files during package installation and upgrades. When a package is upgraded and the maintainer has provided a new version of a configuration file, the existing file is kept (--force-confold behavior).
Packages in a partially installed or config-files state (removed but configuration remains) are treated as absent. Reinstalling such packages will preserve the existing configuration files.
Note
The provider will not run apt update before installing a package. Use an exec resource to update the package index if necessary.
The provider runs non-interactively and suppresses prompts from apt-listbugs and apt-listchanges.
Scaffold
The scaffold resource renders files from a source template directory to a target directory. Templates have access to facts and Hiera data, enabling dynamic configuration generation from directory structures.
Warning
Target paths must be absolute and canonical (no . or .. components).
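For example, a scaffold resource might look like this (a sketch):

```yaml
- scaffold:
    - /etc/app:
        source: templates/app
        engine: jet
        purge: true
```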
This renders templates from the templates/app directory into /etc/app using the Jet template engine, removing any files in the target not present in the source.
Note
This is implemented using the github.com/choria-io/scaffold Go library. You can use the library in other projects, or use the included scaffold CLI tool directly.
Ensure values
| Value | Description |
|---|---|
| present | Target directory must exist with rendered template files |
| absent | Managed files must be removed; target directory removed if empty |
Properties
| Property | Description |
|---|---|
| name | Absolute path to the target directory |
| source | Source template directory path (relative to working directory or absolute) |
| engine | Template engine: go or jet (default: jet) |
| skip_empty | Do not create empty files in rendered output |
| left_delimiter | Custom left template delimiter |
| right_delimiter | Custom right template delimiter |
| purge | Remove files in target not present in source |
| data | Custom data map that replaces Hiera data for template rendering |
| post | Post-processing commands: glob pattern to command mapping |
| provider | Force a specific provider (choria only) |
Template engines
Two template engines are supported:
| Engine | Library | Default Delimiters | Description |
|---|---|---|---|
| go | Go text/template | {{ / }} | Standard Go templates |
| jet | Jet templating | [[ / ]] | Jet template language |
The engine defaults to jet if not specified. Delimiters can be customized via left_delimiter and right_delimiter.
The post property defines commands to run on rendered files. Each entry is a map where the key is a glob pattern matched against the file’s basename and the value is a command to execute. Use {} in the command as a placeholder for the file’s full path; if omitted, the path is appended as the last argument.
Post-processing runs immediately after each file is rendered. Files skipped due to skip_empty are not post-processed.
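A sketch of post-processing entries; the glob patterns and commands are placeholders:

```yaml
- scaffold:
    - /etc/app:
        source: templates/app
        post:
          - "*.sh": /bin/chmod +x {}       # {} is replaced with the rendered file's full path
          - "*.conf": /usr/bin/app-check   # no {}: the path is appended as the last argument
```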
Purge behavior
When purge: true is set, files in the target directory that are not present in the source template directory are deleted during rendering. In noop mode, these deletions are logged but not performed.
When purge is disabled (the default), files not present in the source are tracked but not removed. They do not affect idempotency checks for ensure: present, meaning the resource is considered stable even if extra files exist in the target.
Removal behavior
When ensure: absent, only managed files (changed and stable) are removed. Files not belonging to the scaffold (purged files) are left untouched. After removing managed files and empty subdirectories, the target directory itself is removed on a best-effort basis; it is only deleted if empty. If unrelated files remain, the directory is preserved and no error is raised.
Idempotency
The scaffold resource determines idempotency by rendering templates in noop mode and comparing results against the target directory.
For ensure: present:
Changed files: Files that would be created or modified. Any changed files make the resource unstable.
Stable files: Files whose content matches the rendered output. At least one stable file must exist for the resource to be considered stable.
Purged files: Files in the target not present in the source. These only affect stability when purge is enabled.
For ensure: absent, the status check filters Changed and Stable lists to only include files that actually exist on disk. This means after a successful removal, the scaffold is considered absent even if the target directory still exists with unrelated files. Purged files never affect the absent stability check.
Source resolution
The source property is resolved relative to the manager’s working directory when it is a relative path. URL sources (with a scheme) are passed through unchanged. This allows manifests bundled with template directories to use relative paths.
Template environment
Templates receive the full template environment, which provides access to:
facts - System facts for the managed node
data - Hiera-resolved configuration data, or custom data when the data property is set
Template helper functions
Custom data
The data property allows supplying a custom data map that completely replaces the Hiera-resolved data for template rendering. This is useful when a scaffold needs data that differs from or is unrelated to the global Hiera data.
When data is set, templates see only the custom data through data — the Hiera data is not merged, it is replaced entirely. Facts remain available regardless.
String values in the data map support template expressions that are resolved before rendering:
Non-string values (integers, booleans, lists, maps) are preserved as-is without template resolution.
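A sketch of a custom data map; the keys and the fact path are placeholders:

```yaml
- scaffold:
    - /etc/app:
        source: templates/app
        data:
          # string value: template expression resolved before rendering
          hostname: "${ lookup('facts.hostname') }"
          # non-string value: preserved as-is, no template resolution
          workers: 4
```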
Creating scaffolds
A scaffold source is a directory tree where every file is a template. The directory structure is mirrored directly into the target, so the source layout becomes the output layout.
The Jet engine is the default because its [[ / ]] delimiters avoid conflicts with configuration files that use curly braces (YAML, JSON, systemd units). Use the Go engine when you need access to Sprig functions.
Partials
Files inside a _partials directory are reusable template fragments. They are rendered on demand using the render function but are excluded from the output.
This is useful for shared headers, repeated configuration blocks, or any content used across multiple files.
Two functions are available in both template engines:
render evaluates another template file from the source directory and returns its output as a string. The partial is rendered using the same engine and data as the calling template.
[[ render("_partials/database.conf", .) ]]
{{ render "_partials/database.conf" . }}
write creates an additional file in the target directory from within a template. This is useful for dynamically generating files based on data — for example, creating one configuration file per service.
[[ write("extra.conf", "generated content") ]]
{{ write "extra.conf" "generated content" }}
Sprig functions
When using the Go template engine, all Sprig template functions are available. These provide string manipulation, math, date formatting, list operations, and more:
# Go engine example with Sprig functions
hostname: {{ .facts.hostname | upper }}
packages: {{ join ", " .data.packages }}
generated: {{ now | date "2006-01-02" }}
Example scaffold
A complete scaffold for an application configuration:
log_level = info
log_file = /var/log/myapp/web01.log
[server]
bind = 0.0.0.0
port = 8080
workers = 4
Service
The service resource manages system services. Services have two independent properties: whether they are running and whether they are enabled to start at boot.
Warning
Use real service names, not virtual names or aliases.
Services can subscribe to other resources and restart when those resources change.
If ensure is not specified, it defaults to running.
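For example, a service that restarts whenever its configuration file changes; the service and file names are placeholders:

```yaml
- service:
    - httpd:
        ensure: running
        enable: true
        subscribe:
          - file#/etc/httpd/conf/httpd.conf   # restart when this file resource changes
```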
Properties
| Property | Description |
|---|---|
| name | Service name |
| ensure | Desired state (running or stopped; default: running) |
| enable (boolean) | Enable the service to start at boot |
| subscribe (array) | Resources to watch; restart the service when they change (type#name or type#alias) |
| provider | Force a specific provider (systemd only) |
Hierarchical Data
The Choria Hierarchical Data Resolver is a small data resolver inspired by Hiera. It evaluates a YAML or JSON document alongside a set of facts to produce a final data map.
The resolver supports first and deep merge strategies and relies on expression-based string interpolation for hierarchy entries. It is optimized for single files that hold both hierarchy and data, rather than the multi-file approach common in Hiera.
- Type-preserving lookups (returns typed data, not just strings)
- Command-line tool with built-in system facts
- Go library for embedding
Alternate variable syntax
Since version 0.0.25, you can use ${ lookup(...) } as well as {{ lookup(...) }}.
Usage
An annotated example:
```yaml
hierarchy:
  # Lookup and override order - facts are resolved in expressions
  # Use GJSON path syntax for nested facts: ${ lookup('facts.host.info.hostname') }
  order:
    - env:${ lookup('facts.env') }
    - role:${ lookup('facts.role') }
    - host:${ lookup('facts.hostname') }
  # "deep" merges all matches; "first" stops at first match
  merge: deep

# Base data - hierarchy results are merged into this
data:
  log_level: INFO
  packages:
    - ca-certificates
  web:
    listen_port: 80
    tls: false

# Override sections keyed by hierarchy order entries
overrides:
  env:prod:
    log_level: WARN
  role:web:
    packages:
      - nginx
    web:
      listen_port: 443
      tls: true
  host:web01:
    log_level: TRACE
```
The templating here is identical to that in the Template documentation, except only the lookup() function is available (no file access functions).
Default Hierarchy
If no hierarchy section is provided, the resolver uses a default hierarchy of ["default"].
CLI example
The ccm hiera command resolves hierarchy files with facts. It is designed to be a generally usable tool, with flexible options for providing facts.
Facts can come from multiple sources, which are merged together.
System facts
Use -S or --system-facts:
# View system facts
$ ccm hiera facts -S
# Resolve using system facts
$ ccm hiera parse data.json -S
Environment variables as facts
Use -E or --env-facts:
ccm hiera parse data.json -E
Facts file
Use --facts FILE:
ccm hiera parse data.json --facts facts.yaml
Command-line facts
Pass key=value pairs as positional arguments:
ccm hiera parse data.json env=prod role=web
All fact sources can be combined. Command-line facts take highest precedence.
Data in NATS
NATS is a lightweight messaging system that supports Key-Value stores. Hierarchy data can be stored in NATS and used with ccm ensure and ccm hiera commands.
To use NATS as a hierarchy store, configure a NATS context for authentication:
Annotations
YAML comments in the data: section can carry annotation directives that validate resolved values. Annotations are always active; there is no opt-in flag.
Two directives are supported:
| Directive | Description |
|---|---|
| @require | Value must not be nil or empty string. false and 0 are valid |
| @validate <expr> | Value is checked with the given validation expression |
@required is accepted as an alias for @require.
Example
```yaml
data:
  # The username to run the process as
  # @require
  user: bob

  # @validate isShellSafe(value)
  command: "/usr/bin/thing"

  # @require
  # @validate isIPv4(value)
  listen_address: 10.0.0.1

  # No annotations, no validation
  port: 8080
  debug: false
```
Multiple annotations per key are supported. @require is checked first; when it fails on a nil value, @validate is skipped for that key.
Validation expressions
The @validate directive accepts Expr Language expressions evaluated by the choria-io/validator library. The value being validated is available as value in the expression.
Available expressions:
| Expression | Description |
|---|---|
| isIP(value) or is_ip(value) | Valid IPv4 or IPv6 address |
| isIPv4(value) or is_ipv4(value) | Valid IPv4 address |
| isIPv6(value) or is_ipv6(value) | Valid IPv6 address |
| isInt(value) or is_int(value) | Integer value |
| isFloat(value) or is_float(value) | Floating-point value |
| isDuration(value) or is_duration(value) | Valid duration using fisk.ParseDuration |
| isRegex(value, "^[a-z]+$") or is_regex(...) | Value matches the given regular expression |
| isShellSafe(value) or is_shellsafe(value) | Does not contain shell-unsafe characters |
| isHostname(value) or is_hostname(value) | Valid hostname per RFC 1123 |
| isFQDN(value) or is_fqdn(value) | Valid fully qualified domain name per RFC 1123 |
Multiple expressions can be combined using Expr operators: isIPv4(value) || isIPv6(value).
Non-string values are converted to their string representation before validation.
Behavior
Annotations are extracted from the data: section only. Comments in overrides: are ignored.
Validation runs against the final merged data. A base value of user: "" with @require passes when an override provides a non-empty value.
@validate on map values is skipped. Maps are validated by annotating their nested keys individually. @validate on array values is skipped.
Unrecognized directives starting with @ produce a warning, catching typos like @requiired.
JSON data sources do not support annotations because JSON has no comment syntax.
Templates
Applications need data to vary their behavior and configure their environments. Configuration management tools are no different.
Data can be used for:
Configuring resource names that differ between operating systems (e.g., httpd vs apache2)
Setting different configuration values depending on environment, role, or other dimensions
Deciding whether environments should have something installed (e.g., development vs production)
CCM supports various data sources:
System Facts - Operating system, networking, and disk configuration
Custom Facts - From facts.{yaml,json} and facts.d/*.{yaml,json} in system and user config directories
Environment - Variables from the shell environment and ./.env files
Hiera Data - Hierarchical data with overrides based on facts
Accessing data
Expressions like {{ lookup('facts.host.info.platformFamily') }} or ${ lookup('facts.host.info.platformFamily') } use the Expr Language.
Available variables
In templates, you have direct access to:
| Variable | Description |
|---|---|
| Facts | System facts (e.g., Facts.host.info.platformFamily) |
| Data | Resolved Hiera data (e.g., Data.package_name) |
| Environ | Environment variables (e.g., Environ.HOME) |
Available functions
| Function | Description |
|---|---|
| lookup(key, default) | Lookup data using GJSON Path Syntax. Example: lookup("facts.host.info.os", "linux") |
| readFile(path), file(path) | Read a file into a string. Relative paths are resolved from the working directory; absolute paths are used as-is |
| template(f) | Parse f using templates. If f ends in .templ, reads the file first; if it ends in .jet, calls the jet() function |
| jet(f), jet(f, "[[", "]]") | Parse f using Jet templates with optional custom delimiters. If f ends in .jet, reads the file first |
GJSON path examples
The lookup() function uses GJSON path syntax for nested access:
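For example, in a manifest value:

```
${ lookup("data.package_name", "httpd") }
```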
This fetches package_name from the data and defaults to httpd if not found.
Facts
CCM includes a built-in fact resolver that gathers system information. To see available facts:
$ ccm facts # All facts as JSON
$ ccm facts host # Query specific path
$ ccm facts --yaml # Output as YAML
Access facts in expressions using ${ Facts.host.info.platformFamily } or ${ lookup('facts.host.info.platformFamily') }.
Custom facts
Custom facts are loaded from the system (/etc/choria/ccm/) and user (~/.config/choria/ccm/) configuration directories. Within each directory, facts are loaded in this order:
facts.json
facts.yaml
facts.d/*.{json,yaml} - files sorted by filename
Later sources override earlier ones, and user directory facts override system directory facts. This makes facts.d/ useful for modular or drop-in facts from other tools.
Files in facts.d/ are processed in lexicographic filename order, so 01-base.json is loaded before 02-override.json. Only files with .json or .yaml extensions are read; all other files (including .yml) are ignored.
Security
Facts directories are subject to the following security constraints:
Absolute paths only - configuration directories must be absolute paths; relative paths are rejected
Path cleaning - paths are normalized to remove traversal components (e.g., /../)
Symlinks ignored - symlinked files, symlinked facts.d/ directories, and symlinked entries within facts.d/ are all skipped
Hiera data for CLI
Hiera data is resolved using the Choria Hierarchical Data Resolver. By default, data is read from ./.hiera, or you can specify a file with --hiera.
Note
This applies to ccm ensure commands. The ccm apply command uses manifests that contain their own Hiera data.
Hiera data sources
Hiera data can be loaded from:
Local file: ./.hiera or path specified with --hiera
Key-Value store: --hiera kv://BUCKET/key (requires --context for NATS)
HTTP(S): --hiera https://example.com/data.yaml (supports Basic Auth via URL credentials)
Merge strategies
The hierarchy.merge setting controls how overrides are applied:
first (default): Stops at the first matching override
deep: Merges all matching overrides
Running ccm ensure package '${ lookup("data.package_name") }' installs httpd on RHEL-based systems and apache2 on Debian-based systems.
Note
See the Hiera section for details on configuring Hiera data in NATS.
Environment
The shell environment and variables defined in ./.env can be accessed in two ways:
```yaml
# Direct access
home_dir: "${ Environ.HOME }"

# Via lookup (with default value)
my_var: "${ lookup('environ.MY_VAR', 'default') }"
```
The .env file uses standard KEY=value format, one variable per line.
Shell Usage
CCM is designed as a CLI-first tool. Each resource type has its own subcommand under ccm ensure, with required inputs as arguments and optional settings as flags.
The ccm ensure commands are idempotent, making them safe to run multiple times in shell scripts.
Use ccm --help and ccm <command> --help to explore available commands and options.
Managing a single resource
Managing a single resource is straightforward.
ccm ensure package zsh 5.8
This ensures the package zsh is installed at version 5.8.
To view the current state of a resource:
ccm status package zsh
Managing multiple resources
When managing multiple resources in a script, create a session first. The session records the outcome of each resource, enabling features like refreshing a service when a file changes.
CCM gathers system facts that can be used in templates and conditions:
# Show all facts as JSON
$ ccm facts
# Show facts as YAML
$ ccm facts --yaml
# Query specific facts using gjson syntax
$ ccm facts host.info.platformFamily
Resolving Hiera data
The ccm hiera command helps debug and test Hiera data resolution:
# Resolve a Hiera file with system facts
$ ccm hiera parse data.yaml -S
# Resolve with custom facts
$ ccm hiera parse data.yaml os=linux env=production
# Query a specific key from the result
$ ccm hiera parse data.yaml --query packages
JSON API
CCM provides a STDIN/STDOUT API for managing resources programmatically. This enables integration with external languages. Build DSLs in Ruby, Perl, Python, or any language that can execute processes and handle JSON or YAML.
Overview
The API uses a simple request/response pattern:
Send a request to ccm ensure api pipe via STDIN
Receive a response on STDOUT
Both JSON and YAML formats are supported for requests. The response format defaults to JSON but can be set to YAML using --yaml.
Command
ccm ensure api pipe [--yaml] [--noop] [--facts <file>] [--data <file>]
| Flag | Description |
|---|---|
| --yaml | Output response in YAML format instead of JSON |
| --noop | Dry-run mode; report what would change without making changes |
| --facts <file> | Load additional facts from a YAML file |
| --data <file> | Load Hiera-style data from a YAML file |
Request format
Requests must include a protocol identifier, resource type, and properties.
For detailed information about each resource type and its properties, see the Resource Documentation
CLI Plugins
CCM supports extending the CLI with custom commands using App Builder. This allows you to create organization-specific workflows that integrate with the ccm command.
Plugin locations
CCM searches for plugins in two directories:
Location                              Purpose
/etc/choria/ccm/plugins/              System-wide plugins
$XDG_CONFIG_HOME/choria/ccm/plugins/  User plugins (typically ~/.config/choria/ccm/plugins/)
Plugins in the user directory override system plugins with the same name.
Plugin file format
Plugin files must be named <command>-plugin.yaml. The filename determines the command name:
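For example, a file named deploy-plugin.yaml would add a ccm deploy command. The field names below follow typical App Builder definitions but should be treated as a sketch; consult the App Builder documentation for the exact schema.

```yaml
# Hypothetical deploy-plugin.yaml defining a `ccm deploy` command.
name: deploy
description: Organization specific deployment workflow
commands:
  - name: web
    description: Deploys the web tier manifest
    type: exec
    command: ccm apply /etc/choria/ccm/manifests/web.yaml
```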
For complete App Builder documentation including all command types, templating features, and advanced options, see the App Builder documentation.
Since version 0.13.0, App Builder includes a transform and a command that can invoke CCM manifests. Combined with flags, arguments, and form wizards, this makes it possible to build custom UIs that manage your infrastructure.
YAML Manifests
A manifest is a YAML file that combines data, hierarchy configuration, and resources in a single file.
Manifests support template expressions but not procedural logic. Think of them as declarative configuration similar to multi-resource shell scripts.
A visual editor for manifests is available at CCM Studio.
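A minimal sketch of such a manifest follows; the data and resources layout shown here is an assumption, so see the Resource Documentation for exact properties.

```yaml
# Hypothetical manifest.yaml -- layout is an assumption.
data:
  motd: Welcome to Acme Inc

resources:
  - package:
      - httpd:
          ensure: latest
```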
Apply the manifest with ccm apply. The first run makes changes; subsequent runs are stable:
$ ccm apply manifest.yaml
INFO Creating new session record resources=1
WARN package#httpd changed ensure=latest runtime=3.560699287s provider=dnf
$ ccm apply manifest.yaml
INFO Creating new session record resources=1
INFO package#httpd stable ensure=latest runtime=293.448824ms provider=dnf
To preview the fully resolved manifest without applying it:
The first two files inherit the default values. The /app/bin/app file overrides just the mode. The /etc/motd file is a separate resource block, so defaults do not apply.
Templating
Manifests support template expressions like ${ lookup("key") } for adjusting values. These expressions cannot generate new resources; they only modify values in valid YAML.
Available variables
Templates have access to:
Variable  Description
Facts     System facts
Data      Resolved Hiera data
Environ   Environment variables
Generating resources with Jet templates
To dynamically generate resources from data, use Jet Templates.
If the required resource fails, the dependent resource is skipped.
Dry run (noop mode)
Preview changes without applying them:
ccm apply manifest.yaml --noop
Note
Noop mode cannot always detect cascading effects. If one resource change would affect a later resource, that dependency may not be reflected in the dry run.
Health check only mode
Run only health checks without applying resources:
ccm apply manifest.yaml --monitor-only
This is useful for verifying system state without making changes.
Manifests in NATS object store
Manifests can be stored in NATS Object Stores, avoiding the need to distribute files locally.
$ tar -C /tmp/manifest/ -cvzf /tmp/manifest.tgz .
$ nats obj put CCM manifest.tgz --context ccm
Apply the manifest:
$ ccm apply obj://CCM/manifest.tgz --context ccm
INFO Using manifest from Object Store in temporary directory bucket=CCM file=manifest.tgz
INFO file#/etc/motd stable ensure=present runtime=0s provider=posix
Manifests on web servers
Store gzipped tar archives on a web server and apply them directly:
$ ccm apply https://example.net/manifest.tar.gz
INFO Executing manifest manifest=https://example.net/manifest.tar.gz resources=1
INFO file#/etc/motd stable ensure=present runtime=0s provider=posix
Facts from these sources are merged with system facts, with command-line facts taking precedence.
Agent
Some configurations benefit from running YAML Manifests continuously. For example, dotfiles might be left unmanaged (allowing local modifications), while Docker should always be up to date.
The CCM Agent runs manifests continuously, loading them from local files, Object Storage, or HTTP(S) URLs, with Key-Value data overlaid.
The Agent also supports Registration, a service discovery system where resources that reach a stable state publish their details to NATS. Other nodes can query the registry in templates to build configurations dynamically.
Run modes
The agent supports two modes of operation that combine to be efficient and fast-reacting:
Full manifest apply: Manages the complete state of every resource
Health check mode: Runs only Monitoring checks, which can trigger a full manifest apply as remediation
By enabling both modes, you can run health checks very frequently (even at 10- or 20-second intervals) while keeping full Configuration Management runs less frequent (every few hours).
Enabling both modes is optional but recommended. Adding health checks to key resources is also recommended.
Supported manifest and data sources
Manifest sources
Manifests can be loaded from:
Local file: /path/to/manifest.yaml
Object Storage: obj://bucket/key.tar.gz
HTTP(S): https://example.com/manifest.tar.gz (supports Basic Auth via URL credentials)
Remote sources (Object Storage and HTTP) must be .tar.gz archives containing a manifest.yaml file along with any templates and file sources.
External data sources
For Hiera data resolution, the agent supports:
Local file: file:///path/to/data.yaml
Key-Value store: kv://bucket/key
HTTP(S): https://example.com/data.yaml
Logical flow
The agent continuously runs and manages manifests as follows:
At startup, the agent fetches data and gathers facts
Starts a worker for each manifest source
Each worker starts watchers to download and manage the manifest (polling every 30 seconds for remote sources)
Triggers workers at the configured interval for a full apply
Each run updates facts (minimum 2-minute interval) and data
Applies each manifest serially
Triggers workers at the configured health check interval
Health check runs do not update facts or data
Runs health checks for each manifest serially
If any health checks are critical (not warning), the agent triggers a full apply for that worker
In the background, object stores and HTTP sources are watched for changes. Updates trigger immediate apply runs with exponential backoff retry on failures.
Prometheus metrics
When monitor_port is configured, the agent exposes Prometheus metrics on /metrics. These metrics can be used to monitor agent health, track resource states and events, and observe health check statuses.
Agent metrics
Metric                                           Type     Labels     Description
choria_ccm_agent_apply_duration_seconds          Summary  manifest   Time taken to apply manifests
choria_ccm_agent_healthcheck_duration_seconds    Summary  manifests  Time taken for health check runs
choria_ccm_agent_healthcheck_remediations_count  Counter  manifest   Health checks that triggered remediation
choria_ccm_agent_data_resolve_duration_seconds   Summary  -          Time taken to resolve external data
choria_ccm_agent_data_resolve_error_count        Counter  url        Data resolution failures
choria_ccm_agent_facts_resolve_duration_seconds  Summary  -          Time taken to resolve facts
choria_ccm_agent_facts_resolve_error_count       Counter  -          Facts resolution failures
choria_ccm_agent_manifest_fetch_count            Counter  manifest   Remote manifest fetches
choria_ccm_agent_manifest_fetch_error_count      Counter  manifest   Remote manifest fetch failures
Resource metrics
Metric                                      Type     Labels                Description
choria_ccm_manifest_apply_duration_seconds  Summary  source                Time taken to apply an entire manifest
choria_ccm_resource_apply_duration_seconds  Summary  type, provider, name  Time taken to apply a resource
choria_ccm_resource_state_total_count       Counter  type, name            Total resources processed
choria_ccm_resource_state_stable_count      Counter  type, name            Resources in stable state
choria_ccm_resource_state_changed_count     Counter  type, name            Resources that changed
choria_ccm_resource_state_refreshed_count   Counter  type, name            Resources that were refreshed
choria_ccm_resource_state_failed_count      Counter  type, name            Resources that failed
choria_ccm_resource_state_error_count       Counter  type, name            Resources with errors
choria_ccm_resource_state_skipped_count     Counter  type, name            Resources that were skipped
choria_ccm_resource_state_noop_count        Counter  type, name            Resources in noop mode
Health check metrics
Metric                                   Type     Labels                      Description
choria_ccm_healthcheck_duration_seconds  Summary  type, name, check           Time taken for health checks
choria_ccm_healthcheck_status_count      Counter  type, name, status, check   Health check results by status
Facts metrics
Metric                                    Type     Labels  Description
choria_ccm_facts_gather_duration_seconds  Summary  -       Time taken to gather system facts
Configuration
The agent is included in the ccm binary. To use it, create a configuration file and enable the systemd service.
The configuration file is located at /etc/choria/ccm/config.yaml:
```yaml
# CCM Agent Configuration Example

# Time between scheduled manifest apply runs.
# Must be at least 30s. Defaults to 5m.
interval: 5m

# Time between health check runs.
# When set, health checks run independently of apply runs and can trigger
# remediation applies when critical issues are detected.
# Omit to disable periodic health checks.
health_check_interval: 1m

# List of manifest sources to apply. Each source creates a separate worker.
# Supported formats:
#   - Local file: /path/to/manifest.yaml
#   - Object store: obj://bucket/key.tar.gz
#   - HTTP(S): https://example.com/manifest.tar.gz
#   - HTTP with Basic Auth: https://user:pass@example.com/manifest.tar.gz
# Remote sources must be .tar.gz archives containing a manifest.yaml file.
manifests:
  - /etc/choria/ccm/manifests/base.yaml
  - obj://ccm-manifests/app.tar.gz
  - https://config.example.com/manifests/web.tar.gz

# Logging level: debug, info, warn, error
log_level: info

# NATS context for authentication. Defaults to 'CCM'.
nats_context: CCM

# Optional URL for external Hiera data resolution.
# Supported formats: file://, kv://, http(s)://
# The resolved data is merged into the manifest data context.
external_data_url: kv://ccm-data/common

# Directory for caching remote manifest sources.
# Defaults to /etc/choria/ccm/source.
cache_dir: /etc/choria/ccm/source

# Port for Prometheus metrics endpoint (/metrics).
# Set to 0 or omit to disable.
monitor_port: 9100

# Registration destination for service discovery.
# Valid values: "nats" (fire-and-forget) or "jetstream" (reliable with rollup).
# Omit to disable registration. See the Registration section for details.
# registration: jetstream
```
After configuring, start the service:
ccm ensure service ccm-agent running --enable
Registration
The registration system publishes service discovery entries to NATS when managed resources reach a stable state. Resources that pass all health checks and are not in a failed state announce themselves to a shared registry. Other nodes discover these services dynamically through template lookups.
Supported Version
Added in version 0.0.20
Use cases
Dynamic load balancer configuration - Web server resources register themselves on successful deploy, and a file resource on the load balancer node uses template lookups to generate an upstream configuration
Service mesh discovery - Database and cache resources register their addresses and ports, allowing application nodes to build connection strings from live registry data
Canary deployments - New service instances register with lower priority values, allowing consumers to prefer established instances while gradually shifting traffic
Resource configuration
Any resource type can publish registration entries by adding register_when_stable to its properties. Each entry describes a service endpoint to advertise.
This manifest ensures Nginx is running, verifies it responds to health checks, and then registers the node as a web service in the production cluster.
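A sketch of such a manifest fragment follows. The service resource layout is an assumption; the register_when_stable entry uses the documented entry properties.

```yaml
# Hypothetical manifest fragment -- resource layout is an assumption.
- service:
    - nginx:
        ensure: running
        register_when_stable:
          - cluster: production
            service: web
            protocol: http
            address: 10.0.0.5
            port: 80
            priority: 100
```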
Entry properties
Property             Template Key  Description
cluster (required)   Cluster       Logical cluster name, must match [a-zA-Z][a-zA-Z\d_-]*
service (required)   Service       Service name, must match [a-zA-Z][a-zA-Z\d_-]*
protocol (required)  Protocol      Protocol identifier, must match [a-zA-Z][a-zA-Z\d_-]*
address (required)   Address       IP address or hostname of the service endpoint
port (integer)       Port          Port number, between 1 and 65535
priority (required)  Priority      Priority value between 1 and 255, lower values indicate higher priority
ttl                  TTL           Time-to-live for the entry: a duration (e.g., 10m, 1h) or never to disable expiry. Sets the Nats-TTL header
annotations (map)    Annotations   Arbitrary key-value metadata published with the entry
The cluster, address, port, and annotations fields support template expressions for dynamic values resolved at apply time.
How it works
Registration entries are published after a resource is applied. The following conditions must all be met:
The resource has one or more register_when_stable entries configured
The resource is not in noop mode
The resource apply did not fail
All health checks (if any) passed with OK status
Each entry is published to a structured NATS subject and may be persisted in a NATS stream.
Prerequisites
NATS stream
When using the jetstream destination, a JetStream stream must exist before registration entries can be published. Use ccm registration init to create or update the stream:
ccm registration init --replicas 3 --max-age 5m
This creates a stream named REGISTRATION (or updates it if it already exists) with the correct subject filter, rollup, and TTL settings. Use --registration (-R) to specify a different stream name and --context to select a NATS context.
The --max-age value should exceed the agent’s apply or health check interval to prevent entries from expiring between runs. It also controls how long deletion markers for expired entries are retained in the stream.
Agent configuration
Add the registration field to the agent configuration file at /etc/choria/ccm/config.yaml:
Valid values for registration are nats (core NATS, fire-and-forget) and jetstream (reliable delivery). Omitting the field disables registration.
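For example:

```yaml
# /etc/choria/ccm/config.yaml
registration: jetstream   # or "nats" for fire-and-forget delivery
```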
Template lookups
Other resources can query the registration registry using the registrations() function in templates. This enables dynamic configuration based on what services are currently registered.
The function takes four string arguments and returns an array of matching registration entries. Each entry exposes the struct fields listed in the Template Key column of the Entry Properties table (e.g., Address, Port, Priority).
Any argument can be "*" to wildcard that position.
Note
The registrations() function queries JetStream and requires a NATS connection and a configured registration stream.
The registrations() function is available in all three template engines.
registrations('production', 'http', 'web', '*')
[[ range _, entry := registrations("production", "http", "web", "*") ]]
server [[ entry.Address ]]:[[ entry.Port ]] weight [[ 256 - entry.Priority ]]
[[ end ]]
{{ range $entry := registrations "production" "http" "web" "*" }}
server {{ $entry.Address }}:{{ $entry.Port }}
{{ end }}
Format transformers
The result of registrations() supports transformation methods that convert entries into formats consumed by other systems. These methods are callable from all three template engines.
Prometheus file service discovery
The PrometheusFileSD() method converts registration entries into the Prometheus file-based service discovery JSON format. Entries are grouped by cluster, service, and protocol into target groups.
Entries without a port are excluded from the output.
Entries whose service or protocol is prometheus are included by default and excluded only if the prometheus.io/scrape annotation is explicitly set to something other than "true". All other entries are included only when the prometheus.io/scrape annotation is explicitly set to "true".
Labels on each target group include cluster, service, protocol, and annotations that have non-empty values, do not start with __, and whose keys match [a-zA-Z_][a-zA-Z0-9_]*. Other annotations are silently skipped.
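Prometheus file-based service discovery expects an array of target groups, each with targets and shared labels. Output from PrometheusFileSD() might look like this; the addresses and the version label value are illustrative.

```json
[
  {
    "targets": ["10.0.0.5:9100", "10.0.0.6:9100"],
    "labels": {
      "cluster": "production",
      "service": "node_exporter",
      "protocol": "prometheus",
      "version": "1.2.3"
    }
  }
]
```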
The ccm apply and ccm ensure file commands can query the registration registry by passing the --registration flag with the name of the JetStream stream holding registration data.
Apply a manifest that uses registrations() in its templates:
The create subcommand publishes a single registration entry to the JetStream stream. This is useful for manually registering services or for integration with external provisioning tools.
ccm registration create \
--cluster production \
--service web \
--protocol http \
--address 10.0.0.5 \
--port 8080
All entry properties are specified as flags:
Flag             Required  Default  Description
--cluster        Yes       -        Logical cluster name
--service        Yes       -        Service name
--protocol       Yes       -        Protocol identifier
--address        Yes       -        IP address of the service endpoint
--port           Yes       -        Port number (1-65535)
--priority       No        100      Priority value (1-255, lower is higher)
--ttl            No        30s      Time-to-live for the entry
-A/--annotation  No        -        Annotations as key=value, repeatable
Add annotations by repeating the -A flag:
ccm registration create \
--cluster production \
--service web \
--protocol http \
--address 10.0.0.5 \
--port 8080 \
--priority 10 \
--ttl 5m \
-A version=1.2.3 \
-A hostname=web-05
Removing entries
The rm subcommand (alias delete) purges a registration entry from the JetStream stream. All identifying fields must be specified to match the entry to remove.
ccm registration rm \
--cluster production \
--service web \
--protocol http \
--address 10.0.0.5 \
--port 8080
The command prompts for confirmation before removing the entry. Use --force to skip the confirmation prompt:
ccm registration rm \
--cluster production \
--service web \
--protocol http \
--address 10.0.0.5 \
--port 8080 \
--force
Flag        Required  Default  Description
--cluster   Yes       -        Logical cluster name
--service   Yes       -        Service name
--protocol  Yes       -        Protocol identifier
--address   Yes       -        IP address of the service endpoint
--port      Yes       -        Port number (1-65535)
--force     No        false    Skip confirmation prompt
Querying and watching
The ccm registration command provides tools for inspecting and monitoring registration data from the command line.
Both subcommands accept the same positional arguments for filtering: cluster, protocol, service, and address. All default to * (wildcard). The --context flag (env: NATS_CONTEXT) controls NATS authentication and defaults to CCM. The --registration (-R) flag sets the JetStream stream name and defaults to REGISTRATION.
Query
The query subcommand performs a point-in-time lookup of registered entries.
ccm registration query
This returns all entries across all clusters, protocols, services, and addresses. Filter by positional arguments:
ccm registration query production http web
Output is a human-readable listing grouped by service and cluster. Machine-readable formats are available with --json or --yaml:
ccm registration query production --json
ccm registration query production http web "*" --yaml
Results are sorted by cluster, then protocol, then service.
Watch
The watch subcommand subscribes to the registration stream and displays changes in real time. It runs until interrupted.
ccm registration watch
New and updated entries are logged as info-level messages. Entries removed by TTL expiry, deletion, or purge are logged as warnings including the removal reason.
Filter the watch to specific entries using the same positional arguments:
ccm registration watch production http web
The --json flag outputs each event as a JSON object for integration with other tools.
Full example
This example shows two nodes working together. A web server registers itself, and a load balancer discovers all web servers to generate its configuration.
Each time the agent runs on the load balancer, it queries the registry for all web services and regenerates the HAProxy configuration. The exec resource reloads HAProxy when the configuration file changes.
Monitoring
By default, CCM verifies resource health using the resource’s native state. For example, if a service should be running and systemd reports it as running, CCM considers it healthy.
For deeper validation, all resources support custom health checks. These checks run after a resource is managed and can verify that the resource is functioning correctly, not just present.
Each health check must specify either command or goss_rules – they are mutually exclusive. The format is auto-detected based on which field is set (nagios for command, goss for goss_rules), but can be overridden explicitly.
Nagios format
Health checks using command follow Nagios plugin conventions for exit codes:
The CLI supports a single health check per resource. For multiple health checks, use a manifest.
This example verifies that the web server responds with content containing “Acme Inc”. If the check fails, it retries up to 5 times with 1 second between attempts.
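A sketch of such a check follows. The property names (command, retries, retry_interval) are assumptions based on the behavior described above; the check_http flags are standard monitoring-plugins options.

```yaml
# Hypothetical health check fragment -- property names are assumptions.
health_check:
  command: /usr/lib64/nagios/plugins/check_http -H localhost -s "Acme Inc"
  retries: 5
  retry_interval: 1s
```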
Goss format
Health checks using goss_rules embed Goss validation rules directly in the manifest. This validates system state, including running services, listening ports, file contents, and HTTP responses, without external check scripts.
The check result is OK when all Goss rules pass, or CRITICAL when any rule fails. See the Goss documentation for the full list of supported resource types and matchers.
This validates that the httpd service is running and enabled, port 80 is listening, and the web server responds with a 200 status containing “Acme Inc”.
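A sketch of such a check follows. The health_check wrapper key is an assumption; the rules themselves use standard Goss syntax for the service, port, and http resource types.

```yaml
# Hypothetical health check fragment embedding standard Goss rules.
health_check:
  goss_rules:
    service:
      httpd:
        running: true
        enabled: true
    port:
      tcp:80:
        listening: true
    http:
      http://localhost:
        status: 200
        body:
          - Acme Inc
```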
Templates in Goss rules
Goss rules are processed through CCM’s own template engine before evaluation. This means you can use the standard {{ }} expression syntax with access to Facts, Data, and Environ, as well as the lookup(), template(), and jet() functions. See the Data section for full details on template syntax.
CCM resolves templates before passing rules to Goss. Goss’s own template variables are not used.
Agent integration
When running the Agent with health_check_interval configured, health checks run independently of full manifest applies. If any health check returns a CRITICAL status, the agent triggers a remediation apply for that manifest.
Design Documents
Design documents provide detailed implementation guidance for CCM’s resource types, providers, and internal components. They are intended for developers contributing to CCM or those seeking to understand specific implementation details.
For end-user documentation on how to use resources, see Resources.
Note
These design documents are largely written with AI assistance and reviewed before publication.
Contents
Each design document covers:
Purpose and scope: What the component does and its responsibilities
Architecture: How the component fits into CCM’s overall design
Implementation details: Key data structures, interfaces, and algorithms
Provider contracts: Requirements for implementing new providers
Testing considerations: How to test the component
Available Documents
Archive Type: Archive resource for downloading and extracting archives
Apply Type: Apply resource for composing manifests from reusable parts
Multiple actions are joined with ". " (e.g., "Would have downloaded. Would have extracted").
Desired State Validation
After applying changes (in non-noop mode), the type verifies the archive reached the desired state by calling Status() again and checking all conditions. If validation fails, ErrDesiredStateFailed is returned.
HTTP Provider
This document describes the implementation details of the HTTP archive provider for downloading and extracting archives from HTTP/HTTPS URLs.
Provider Selection
The HTTP provider is selected when:
The URL scheme is http or https
The archive file extension is supported (.tar.gz, .tgz, .tar, .zip)
The required extraction tool (tar or unzip) is available in PATH
The IsManageable() function checks these conditions and returns a priority of 1 if all are met.
Operations
Download
Process:
Parse the URL and add Basic Auth credentials if username/password provided
Create HTTP request with custom headers (if specified)
Execute GET request via util.HttpGetResponse()
Verify HTTP 200 status code
Create temporary file in the same directory as the target
Timing: Checksum is verified after download completes but before the atomic rename. This ensures:
Corrupted downloads are never placed at the target path
Temp file is cleaned up on mismatch
Clear error message with both expected and actual checksums
Security Considerations
Credential Handling
Credentials in URL are redacted in log messages via util.RedactUrlCredentials()
Basic Auth header is set by Go’s http.Request.SetBasicAuth(), not manually constructed
Archive Extraction
Extraction uses system tar/unzip commands
No path traversal protection beyond what the tools provide
ExtractParent must be an absolute path (validated in model)
Temporary Files
Created with os.CreateTemp() using pattern <archive-name>-*
Deferred removal ensures cleanup on all exit paths
Ownership set before content written
Platform Support
The provider is Unix-only due to:
Dependency on util.GetFileOwner() which uses syscall for UID/GID resolution
Dependency on util.ChownFile() for ownership management
Timeouts
Operation           Timeout                                 Configurable
HTTP Download       1 minute (default in HttpGetResponse)   No
Archive Extraction  1 minute                                No
Large archives may require increased timeouts in future versions.
Apply Type
This document describes the design of the apply resource type for composing manifests from smaller reusable manifests.
Overview
The apply resource resolves and executes a child manifest within the parent manifest’s execution context. The child manifest shares the parent’s manager and session, allowing resource ordering and subscribe relationships across manifest boundaries.
Key behaviors:
Noop strengthening: A parent in noop mode forces all children into noop mode, regardless of the child’s noop property
Health check strengthening: Same semantics as noop; health check mode can only be strengthened, never weakened
Recursion depth limiting: Nested apply resources are capped at a configurable maximum depth (default 10) to prevent infinite loops
Transitive trust control: The allow_apply property prevents a child manifest from containing its own apply resources
Provider Interface
Apply providers must implement the ApplyProvider interface:
Exceeding the maximum depth returns an error before iterating any child resources.
Transitive Trust
The allow_apply property controls whether a child manifest may contain its own apply resources. When allow_apply is false, the child manifest is scanned for apply resources after resolution but before execution. If any are found, an error is returned.
This provides a mechanism to limit the trust boundary when including manifests authored by others.
allow_apply value  Child contains apply resources  Result
true (default)     Yes                             Allowed
true (default)     No                              Allowed
false              Yes                             Error
false              No                              Allowed
Data Handling
The data property provides key-value data to the child manifest. This data is passed through the WithOverridingResolvedData option and merged into the resolved data after the child manifest’s own data resolution.
External data (CLI overrides) always persists through the merge. The parent’s original data is restored after child execution via the state save/restore mechanism.
Subscribe Behavior
Apply resources support the standard subscribe property. Subscribe targets use the apply#name format:
The provider only strengthens noop mode, never weakens it. If the parent manager is already in noop mode, the child inherits that regardless of its own noop property. If the parent is not in noop mode and the resource sets noop: true, the provider enables noop on the manager before resolution.
Parent noop  Resource noop  Action
true         false          No change, parent noop already active
true         true           No change, parent noop already active
false        true           Enable noop on manager
false        false          No change
Health check mode follows the same strengthening pattern. The effective health check mode is true if either the parent or the resource sets it.
Execute Options:
The provider builds these options to control child manifest behavior:
Option                      Condition             Purpose
WithSkipSession()           Always                Reuse parent session instead of creating a new one
WithCurrentDepth(n)         Always                Track recursion depth for nested apply resources
WithOverridingResolvedData  data property is set  Merge resource data into the child's resolved data
WithDenyApplyResources()    allow_apply is false  Prevent child from containing apply resources
State Capture and Restore
The provider saves three pieces of manager state before manifest resolution and restores them after execution via defer. This ensures restoration runs even if resolution or execution fails.
Field              Capture                 Restore
Noop mode          mgr.NoopMode()          mgr.SetNoopMode(saved)
Working directory  mgr.WorkingDirectory()  mgr.SetWorkingDirectory(saved)
Data               mgr.Data()              mgr.SetData(saved)
State capture happens before any resolve or mutation calls. This ordering is critical because ResolveManifestUrl mutates the manager’s working directory and data during resolution.
Restoration ensures that subsequent resources in the parent manifest see the original manager state. Without it, a child manifest’s working directory and data changes would leak into sibling resources.
Path Resolution
The resource name property specifies a file path relative to the parent manifest’s directory. During resolution, ResolveManifestFilePath joins relative paths with the manager’s current working directory before opening the file.
For nested apply resources, each level sets the working directory to its own manifest’s parent directory. The state restore ensures the working directory returns to the correct value after each child completes.
/opt/ccm/manifest.yaml              WD = /opt/ccm/
  apply: sub/manifest.yaml          resolves to /opt/ccm/sub/manifest.yaml
                                    WD = /opt/ccm/sub/
    apply: lib/manifest.yaml        resolves to /opt/ccm/sub/lib/manifest.yaml
                                    WD = /opt/ccm/sub/lib/
    (restore WD to /opt/ccm/sub/)
  (restore WD to /opt/ccm/)
Child Resource Inspection
After execution, the provider iterates over child resources to count outcomes using the shared session:
Outcome  Detection method      Effect
Failed   mgr.IsResourceFailed  Increment fail count
Changed  mgr.ShouldRefresh     Increment change count
Skipped  Neither               Remainder
The provider builds an ApplyState with the total resource count and reports the outcome:
Child result             Provider behavior
All resources succeeded  Log informational message, return state
Some resources changed   Log warning with counts, return state
Any resource failed      Log error, return error with failure count
Logging
The provider creates a child user logger with a manifest key set to the resource name. All child resource log output includes this key, providing attribution for which parent apply resource triggered the execution.
Exec Type
This document describes the design of the exec resource type for executing commands.
Overview
The exec resource executes commands with idempotency controls:
Creates: Skip execution if a file exists
OnlyIf / Unless: Guard commands that gate execution based on exit code
Refresh Only: Only execute when triggered by a subscribed resource
Exit Codes: Validate success via configurable return codes
Provider Interface
Exec providers must implement the ExecProvider interface:
Subscribe takes precedence over all other idempotency checks - if a subscribed resource changed, the command executes regardless of creates file existence or guard command results.
Exit Code Validation
By default, exit code 0 indicates success. The returns property customizes acceptable codes:
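A sketch of a resource accepting extra exit codes follows; the fragment mirrors the exec manifest layout shown later in this document, and the command path is illustrative.

```yaml
# Hypothetical exec fragment; exit codes 0 and 2 both count as success.
- exec:
    - run-sync:
        command: /usr/local/bin/sync-data
        returns: [0, 2]
```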
In noop mode, the type:
Queries current state normally (checks creates file)
Evaluates guard commands (onlyif/unless) - these run even in noop mode
Evaluates subscribe triggers
Logs what actions would be taken
Sets appropriate NoopMessage:
“Would have executed”
“Would have executed via subscribe”
Reports Changed: true if execution would occur
Does not call provider Execute method
Desired State Validation
After execution (in non-noop mode), the type verifies success:
```go
func (t *Type) isDesiredState(properties, status) bool {
	// Creates file check takes precedence
	if properties.Creates != "" && status.CreatesSatisfied {
		return true
	}

	// Guard checks only apply before execution (ExitCode is nil)
	if status.ExitCode == nil {
		if properties.OnlyIf != "" && !status.OnlyIfSatisfied {
			return true // onlyif guard failed, don't run
		}
		if properties.Unless != "" && status.UnlessSatisfied {
			return true // unless guard succeeded, don't run
		}
	}

	// Refresh-only without execution is stable
	if status.ExitCode == nil && properties.RefreshOnly {
		return true
	}

	// Check exit code against acceptable returns
	returns := []int{0}
	if len(properties.Returns) > 0 {
		returns = properties.Returns
	}
	if status.ExitCode != nil {
		return slices.Contains(returns, *status.ExitCode)
	}

	return false
}
```
Guard checks are gated on ExitCode == nil because after execution, the exit code determines success. The post-execution isDesiredState() call must not re-evaluate guards, which would produce incorrect results since guard state is only set on initialStatus.
If the exit code is not in the acceptable returns list, an ErrDesiredStateFailed error is returned.
Command vs Name
The command property is optional. If not specified, the name is used as the command:
```yaml
# These are equivalent:
- exec:
    - /usr/bin/myapp --config /etc/myapp.conf:

- exec:
    - run-myapp:
        command: /usr/bin/myapp --config /etc/myapp.conf
```
Using a descriptive name with explicit command is recommended for clarity.
Environment and Path
Commands can be configured with custom environment:
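A sketch of the environment and path properties; the command and values here are illustrative, not from the CCM documentation:

```yaml
- exec:
    - /usr/bin/deploy:
        environment:
          - DEPLOY_ENV=production
          - CONFIG_DIR=/etc/deploy
        path:
          - /usr/local/bin
          - /usr/bin
```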
This document describes the implementation details of the Posix exec provider for executing commands without a shell.
Provider Selection
The Posix provider is the default exec provider. It is always available and returns priority 1 for all exec resources unless a different provider is explicitly requested via the provider property.
To use the shell provider instead, specify provider: shell in the resource properties.
Comparison with Shell Provider
| Feature | Posix | Shell |
|---|---|---|
| Shell invocation | No | Yes (/bin/sh -c) |
| Pipes (\|) | Not supported | Supported |
| Redirections (>, <) | Not supported | Supported |
| Shell builtins (cd, export) | Not supported | Supported |
| Command substitution ($(...)) | Not supported | Supported |
| Glob expansion | Not supported | Supported |
| Argument parsing | shellquote.Split() | Passed as single string |
| Security | Lower attack surface | Shell injection possible |
When to use Posix (default):
Simple commands with arguments
When shell features are not needed
For better security (no shell injection risk)
When to use Shell:
Commands with pipes, redirections, or shell builtins
Complex command strings
When shell expansion is required
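The difference between the two providers can be sketched with Go's standard os/exec package; the helper names below are illustrative, not CCM's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runPosix mimics the posix provider: the executable is invoked directly
// and arguments are passed verbatim, so no shell expansion happens.
func runPosix(name string, args ...string) string {
	out, _ := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out))
}

// runShell mimics the shell provider: the whole string goes to /bin/sh -c,
// so pipes, variables and globs are interpreted by the shell.
func runShell(command string) string {
	out, _ := exec.Command("/bin/sh", "-c", command).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(runPosix("/bin/echo", "$USER"))      // prints the literal string $USER
	fmt.Println(runShell("echo hello | tr a-z A-Z")) // pipe is interpreted, prints HELLO
}
```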
Operations
Execute
Process:
Determine command source (Command property or Name if Command is empty)
Parse command string into words using shellquote.Split()
Extract command (first word) and arguments (remaining words)
Execute via CommandRunner.ExecuteWithOptions()
Optionally log output line-by-line if LogOutput is enabled
Command Parsing:
The command string is parsed using github.com/kballard/go-shellquote, which handles:
| Syntax | Example | Result |
|---|---|---|
| Simple words | `echo hello world` | `["echo", "hello", "world"]` |
| Single quotes | `echo 'hello world'` | `["echo", "hello world"]` |
| Double quotes | `echo "hello world"` | `["echo", "hello world"]` |
| Escaped spaces | `echo hello\ world` | `["echo", "hello world"]` |
| Mixed quoting | `echo "it's a test"` | `["echo", "it's a test"]` |
Execution Options:
| Option | Source | Description |
|---|---|---|
| Command | First word after parsing | Executable path or name |
| Args | Remaining words | Command arguments |
| Cwd | properties.Cwd | Working directory |
| Environment | properties.Environment | Additional env vars (KEY=VALUE format) |
| Path | properties.Path | Search path for executables |
| Timeout | properties.ParsedTimeout | Maximum execution time |
Output Logging:
When LogOutput: true is set and a user logger is provided:
The model validates exec properties before execution:
| Property | Validation |
|---|---|
| name | Must be parseable by shellquote (balanced quotes) |
| timeout | Must be valid duration format (e.g., 30s, 5m) |
| subscribe | Each entry must be type#name format |
| path | Each directory must be absolute (start with /) |
| environment | Each entry must be KEY=VALUE format with non-empty key and value |
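These rules can be sketched with stdlib checks; the function name and exact error wording are illustrative, not the real CCM implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
	"time"
)

// validateExec sketches the timeout, path, and environment rules
// from the validation table above.
func validateExec(timeout string, path []string, environment []string) error {
	if timeout != "" {
		if _, err := time.ParseDuration(timeout); err != nil {
			return fmt.Errorf("timeout must be a valid duration: %w", err)
		}
	}
	for _, dir := range path {
		if !filepath.IsAbs(dir) {
			return fmt.Errorf("path entries must be absolute: %q", dir)
		}
	}
	for _, env := range environment {
		k, v, ok := strings.Cut(env, "=")
		if !ok || k == "" || v == "" {
			return fmt.Errorf("environment entries must be KEY=VALUE: %q", env)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateExec("30s", []string{"/usr/bin"}, []string{"KEY=VALUE"})) // <nil>
	fmt.Println(validateExec("soon", nil, nil) != nil)                            // true: bad duration
}
```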
Platform Support
The Posix provider works on all platforms supported by Go’s os/exec package. It does not use any platform-specific system calls directly.
The command runner (model.CommandRunner) handles the actual process execution, which may have platform-specific implementations.
Security Considerations
No Shell Injection
Unlike the shell provider, the posix provider does not invoke a shell. Arguments are passed directly to the executable, preventing shell injection attacks:
```yaml
# Safe with posix provider - $USER is passed literally, not expanded
- exec:
    - /bin/echo $USER:
        provider: posix # Default

# Potentially dangerous with shell provider - $USER is expanded
- exec:
    - /bin/echo $USER:
        provider: shell
```
Path Validation
The path property only accepts absolute directory paths, preventing path traversal via relative paths.
Environment Validation
Environment variables must have non-empty keys and values, preventing injection of empty or malformed entries.
Shell Provider
This document describes the implementation details of the Shell exec provider for executing commands via /bin/sh.
Provider Selection
The Shell provider is selected when provider: shell is explicitly specified in the resource properties. It has a lower priority (99) than the Posix provider (1), so it is never automatically selected.
Availability: The provider checks for the existence of /bin/sh via util.FileExists(). If /bin/sh does not exist, the provider is not available.
Comparison with Posix Provider
| Feature | Shell | Posix |
|---|---|---|
| Shell invocation | Yes (/bin/sh -c) | No |
| Pipes (\|) | Supported | Not supported |
| Redirections (>, <, >>) | Supported | Not supported |
| Shell builtins (cd, export, source) | Supported | Not supported |
| Glob expansion (*.txt, ?) | Supported | Not supported |
| Command substitution ($(...), `...`) | Supported | Not supported |
| Variable expansion ($VAR, ${VAR}) | Supported | Not supported |
| Logical operators (&&, \|\|) | Supported | Not supported |
| Argument parsing | Passed as single string | shellquote.Split() |
| Security | Shell injection possible | Lower attack surface |
When to use Shell:
Commands with pipes: cat file.txt | grep pattern | sort
Commands with redirections: echo "data" > /tmp/file
Commands with shell builtins: cd /tmp && pwd
Commands with variable expansion: echo $HOME
Complex one-liners with logical operators
When to use Posix (default):
Simple commands with arguments
When shell features are not needed
For better security (no shell injection risk)
Operations
Execute
Process:
Determine command source (Command property or Name if Command is empty)
Validate command is not empty
Execute via CommandRunner.ExecuteWithOptions() with /bin/sh -c "<command>"
Optionally log output line-by-line if LogOutput is enabled
Execution Method:
The entire command string is passed to the shell as a single argument:
/bin/sh -c "<entire command string>"
This allows the shell to interpret all shell syntax, including:
Pipes and redirections
Variable expansion
Glob patterns
Command substitution
Logical operators
Execution Options:
| Option | Value | Description |
|---|---|---|
| Command | /bin/sh | Shell executable path |
| Args | ["-c", "<command>"] | Shell flag and command string |
| Cwd | properties.Cwd | Working directory |
| Environment | properties.Environment | Additional env vars (KEY=VALUE format) |
| Path | properties.Path | Search path for executables |
| Timeout | properties.ParsedTimeout | Maximum execution time |
Output Logging:
When LogOutput: true is set and a user logger is provided:
The shell provider uses the same idempotency mechanisms as the posix provider:
Creates File
If creates is specified and the file exists, the command does not run:
```yaml
- exec:
    - extract-archive:
        command: cd /opt && tar -xzf /tmp/app.tar.gz
        provider: shell
        creates: /opt/app/bin/app
```
Guard Commands
If onlyif is specified, the command only runs when the guard exits 0. If unless is specified, the command only runs when the guard exits non-zero. Guard commands are executed via /bin/sh -c and can use shell features:
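For example, a guard using grep might gate a command like this; the resource name, command, and paths are illustrative:

```yaml
# Hypothetical example: rebuild a cache unless a marker line is present
- exec:
    - rebuild-cache:
        command: generate-cache > /var/cache/app/data
        provider: shell
        unless: grep -q 'complete' /var/cache/app/data
```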
This allows manifests bundled with their source files to use relative paths.
Noop Mode
In noop mode, the file type:
Queries current state normally
Computes content checksums
Logs what actions would be taken
Sets appropriate NoopMessage:
“Would have created the file”
“Would have created directory”
“Would have removed the file”
Reports Changed: true if changes would occur
Does not call provider Store/CreateDirectory methods
Does not remove files
Desired State Validation
After applying changes (in non-noop mode), the type verifies the file reached the desired state by calling Status() again and checking all attributes match. If validation fails, ErrDesiredStateFailed is returned.
Subsections of File Type
Posix Provider
This document describes the implementation details of the Posix file provider for managing files and directories on Unix-like systems.
Provider Selection
The Posix provider is the default and only file provider. It is always available and returns priority 1 for all file resources.
Operations
Store (Create/Update File)
Process:
Verify parent directory exists
Parse file mode from octal string
Open source file if source property is set
Create temporary file in the same directory as target
Set file permissions on temp file
Write content (from source file or contents property)
This allows manifests to use relative paths for source files bundled with the manifest.
Platform Support
The Posix provider uses Unix-specific system calls:
| Operation | System Call |
|---|---|
| Get file owner/group | syscall.Stat_t (UID/GID from stat) |
| Set ownership | os.Chown() → chown(2) |
| Set permissions | os.Chmod() → chmod(2) |
The provider has separate implementations for Unix and Windows (file_unix.go, file_windows.go in internal/util), with Windows returning errors for ownership operations.
Security Considerations
Atomic Writes
Files are written atomically via temp file + rename. This prevents:
Partial file reads during write
Corruption if process is interrupted
Race conditions with concurrent readers
Permission Ordering
Permissions and ownership are set on the temp file before rename:
Chmod - Set permissions
Write content
Chown - Set ownership
Rename to target
This ensures the file never exists at the target path with incorrect permissions.
Path Validation
File paths must be absolute and clean (no . or .. components):
```go
if filepath.Clean(p.Name) != p.Name {
	return fmt.Errorf("file path must be absolute")
}
```
Required Properties
Owner, group, and mode are required properties and cannot be empty, preventing accidental creation of files with default/inherited permissions.
Package Type
This document describes the design of the package resource type for managing software packages.
Overview
The package resource manages software packages with two aspects:
Existence: Whether the package is installed or absent
Version: The specific version installed (when applicable)
Provider Interface
Package providers must implement the PackageProvider interface:
┌─────────────────────────────────────────┐
│ Get current state via Status() │
└─────────────────┬───────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Is ensure = "latest"? │
└─────────────────┬───────────────────────┘
Yes │ No
▼ │
┌─────────────────────┐ │
│ Is package absent? │ │
└─────────────┬───────┘ │
Yes │ No │
▼ ▼ │
┌────────┐ ┌────────┐
│Install │ │Upgrade │
│latest │ │latest │
└────────┘ └────────┘
│
▼
┌─────────────────────────┐
│ Is desired state met? │
└─────────────┬───────────┘
Yes │ No
▼ │
┌───────────┐ │
│ No change │ ▼
└───────────┘ (Phase 2)
Phase 2: Handle Ensure Values
┌─────────────────────────┐
│ What is desired ensure? │
└─────────────┬───────────┘
│
┌───────────────────────┼───────────────────────┐
│ absent │ present │ <version>
▼ ▼ ▼
┌────────────┐ ┌───────────────┐ ┌───────────────┐
│ Uninstall │ │ Is absent? │ │ Is absent? │
└────────────┘ └───────┬───────┘ └───────┬───────┘
Yes │ No Yes │ No
▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────────┐
│Install │ │No │ │Install │ │Compare │
│ │ │change │ │version │ │versions │
└────────┘ └────────┘ └────────┘ └─────┬──────┘
│
┌────────────────┼────────────────┐
│ current < │ current = │ current >
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Upgrade │ │ No change │ │ Downgrade │
└───────────┘ └───────────┘ └───────────┘
Version Comparison
The VersionCmp method compares two version strings:
| Return Value | Meaning |
|---|---|
| -1 | versionA < versionB (upgrade needed) |
| 0 | versionA == versionB (no change) |
| 1 | versionA > versionB (downgrade needed) |
Version comparison is delegated to the provider, allowing platform-specific version parsing (e.g., RPM epoch handling, Debian revision suffixes).
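As a rough illustration of the -1/0/1 contract, here is a naive dotted-numeric comparison; real providers delegate to the platform (RPM epochs, Debian revisions) and are considerably more subtle:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionCmp compares dotted numeric versions segment by segment,
// treating missing segments as zero. Illustrative only.
func versionCmp(a, b string) int {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		var av, bv int
		if i < len(as) {
			av, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bv, _ = strconv.Atoi(bs[i])
		}
		switch {
		case av < bv:
			return -1
		case av > bv:
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(versionCmp("1.2.0", "1.10.0")) // -1: upgrade needed
	fmt.Println(versionCmp("2.0", "2.0.0"))    // 0: no change
	fmt.Println(versionCmp("3.1", "3.0.9"))    // 1: downgrade needed
}
```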
Idempotency
The package resource is idempotent through state comparison:
Decision Table
| Desired | Current State | Action |
|---|---|---|
| `ensure: present` | installed (any version) | None |
| `ensure: present` | absent | Install |
| `ensure: absent` | absent | None |
| `ensure: absent` | installed | Uninstall |
| `ensure: latest` | absent | Install latest |
| `ensure: latest` | installed | Upgrade (always runs) |
| `ensure: <version>` | same version | None |
| `ensure: <version>` | older version | Upgrade |
| `ensure: <version>` | newer version | Downgrade |
| `ensure: <version>` | absent | Install |
Special Case: ensure: latest
When ensure: latest is used:
The package manager determines what “latest” means
Upgrade is always called when the package exists (package manager is idempotent)
The type cannot verify if “latest” was achieved (package managers may report stale data)
Desired state validation only checks that the package is not absent
Package Name Validation
Package names are validated to prevent injection attacks:
Allowed Characters:
Alphanumeric (a-z, A-Z, 0-9)
Period (.), underscore (_), plus (+)
Colon (:), tilde (~), hyphen (-)
Rejected:
Shell metacharacters (;, |, &, $, etc.)
Whitespace
Quotes and backticks
Path separators
Version strings (when ensure is a version) are also validated for dangerous characters.
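The allow-list above maps naturally onto a single regular expression; this sketch is illustrative, not the actual CCM validation code:

```go
package main

import (
	"fmt"
	"regexp"
)

// validName permits only alphanumerics plus . _ + : ~ - so shell
// metacharacters, whitespace, quotes and path separators are rejected.
var validName = regexp.MustCompile(`^[a-zA-Z0-9._+:~-]+$`)

func main() {
	fmt.Println(validName.MatchString("nginx-1.18+deb11")) // true
	fmt.Println(validName.MatchString("bash; rm -rf /"))   // false: metacharacters
	fmt.Println(validName.MatchString("pkg`whoami`"))      // false: backticks
}
```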
Noop Mode
In noop mode, the package type:
Queries current state normally
Computes version comparison
Logs what actions would be taken
Sets appropriate NoopMessage:
“Would have installed latest”
“Would have upgraded to latest”
“Would have installed version X”
“Would have upgraded to X”
“Would have downgraded to X”
“Would have uninstalled”
Reports Changed: true if changes would occur
Does not call provider Install/Upgrade/Downgrade/Uninstall methods
Desired State Validation
After applying changes (in non-noop mode), the type verifies the package reached the desired state:
```go
func (t *Type) isDesiredState(properties, state) bool {
	switch properties.Ensure {
	case "present":
		// Any installed version is acceptable
		return state.Ensure != "absent"
	case "absent":
		return state.Ensure == "absent"
	case "latest":
		// Cannot verify "latest", just check not absent
		return state.Ensure != "absent"
	default:
		// Specific version must match
		return VersionCmp(state.Ensure, properties.Ensure, false) == 0
	}
}
```
If the desired state is not reached, an ErrDesiredStateFailed error is returned.
Subsections of Package Type
APT Provider
This document describes the implementation details of the APT package provider for Debian-based systems.
Environment
All commands are executed with the following environment variables to ensure non-interactive operation:
| Variable | Value | Purpose |
|---|---|---|
| DEBIAN_FRONTEND | noninteractive | Prevents dpkg from prompting for user input |
| APT_LISTBUGS_FRONTEND | none | Suppresses apt-listbugs prompts |
| APT_LISTCHANGES_FRONTEND | none | Suppresses apt-listchanges prompts |
Concurrency
A global package lock (model.PackageGlobalLock) is held during all command executions to prevent concurrent apt/dpkg operations within the same process. This prevents lock contention on /var/lib/dpkg/lock.
Helper methods: LessThan, GreaterThan, Equal, etc.
DNF Provider
This document describes the implementation details of the DNF package provider for RHEL/Fedora-based systems.
Concurrency
A global package lock (model.PackageGlobalLock) is held during all command executions to prevent concurrent dnf/rpm operations within the same process. This prevents lock contention on the RPM database.
Target directory must exist with rendered template files
absent
Managed files must be removed from the target
Template Engines
Two template engines are supported:
| Engine | Library | Default Delimiters | Description |
|---|---|---|---|
| go | Go text/template | {{ / }} | Standard Go templates |
| jet | Jet templating | [[ / ]] | Jet template language |
The engine defaults to jet if not specified. Delimiters can be customized via left_delimiter and right_delimiter properties.
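With the go engine, custom delimiters behave like Go's text/template Delims option; this standalone sketch (helper name is illustrative) shows how << / >> delimiters would render:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render parses text with << / >> as delimiters, mirroring the
// left_delimiter / right_delimiter properties described above.
func render(text string, data any) (string, error) {
	tmpl, err := template.New("cfg").Delims("<<", ">>").Parse(text)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render("port = <<.Port>>", struct{ Port int }{8080})
	fmt.Println(out) // port = 8080
}
```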
Properties
| Property | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | Source template directory path or URL |
| engine | string | No | Template engine: go or jet (default: jet) |
| skip_empty | bool | No | Skip empty files in rendered output |
| left_delimiter | string | No | Custom left template delimiter |
| right_delimiter | string | No | Custom right template delimiter |
| purge | bool | No | Remove files in target not present in source |
| data | map[string]any | No | Custom data that replaces Hiera data for template rendering |
| post | []map[string]string | No | Post-processing: glob pattern to command mapping |
```yaml
# Render configuration templates using Jet engine
- scaffold:
    - /etc/app:
        ensure: present
        source: templates/app
        engine: jet
        purge: true

# Render with Go templates and custom delimiters
- scaffold:
    - /etc/myservice:
        ensure: present
        source: templates/myservice
        engine: go
        left_delimiter: "<<"
        right_delimiter: ">>"

# With post-processing commands
- scaffold:
    - /opt/app:
        ensure: present
        source: templates/app
        post:
          - "*.go": "go fmt {}"

# With custom data replacing Hiera data
- scaffold:
    - /etc/app:
        ensure: present
        source: templates/app
        engine: jet
        data:
          app_name: myapp
          version: "{{ Facts.version }}"
          port: 8080
```
Custom Data
The data property allows supplying custom data that completely replaces the Hiera-resolved data for template rendering. When data is set and non-empty, templates receive only the custom data via data instead of the Hiera-resolved data from the manifest.
This is useful when a scaffold resource needs data that differs from or is unrelated to the global Hiera data, or when you want to provide a self-contained data set for a specific scaffold.
Behavior
When data is not set or empty: templates receive the Hiera-resolved data from the manager as normal.
When data is set and non-empty: env.Data is replaced with the custom data before calling Status() and Scaffold(). The custom data is used consistently throughout the entire apply cycle.
facts remain available regardless of whether custom data is provided.
Template Resolution in Data Values
String values in the data map support template expressions that are resolved during property template resolution. Both keys and values can contain templates:
Non-string values (integers, booleans, lists, maps) are preserved as-is without template resolution.
Apply Logic
┌─────────────────────────────────────────┐
│ Get template environment from manager │
└─────────────────┬───────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Custom data set? Override env.Data │
└─────────────────┬───────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Get current state via Status() │
└─────────────────┬───────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Is current state desired state? │
└─────────────────┬───────────────────────┘
Yes │ No
▼ │
┌───────────┐ │
│ No change │ │
└───────────┘ │
▼
┌─────────────────────────┐
│ What is desired ensure? │
└─────────────┬───────────┘
│
┌───────────────┴───────────────┐
│ absent │ present
▼ ▼
┌───────────┐ ┌───────────┐
│ Noop? │ │ Noop? │
└─────┬─────┘ └─────┬─────┘
Yes │ No Yes │ No
▼ │ ▼ │
┌────────────┐│ ┌────────────┐│
│ Set noop ││ │ Set noop ││
│ message ││ │ message ││
└────────────┘│ └────────────┘│
▼ ▼
┌───────────────┐ ┌─────────────────────┐
│ Remove all │ │ Scaffold │
│ managed files │ │ (render templates) │
│ and empty dirs│ │ │
└───────────────┘ └─────────────────────┘
Idempotency
The scaffold resource determines idempotency by rendering templates in noop mode and comparing results against the target directory.
State Checks
Ensure absent: Target must not exist, or no managed files remain on disk (Changed and Stable lists empty). Purged files (files not belonging to the scaffold) do not affect this check.
Ensure present: The Changed list must be empty, and the Purged list must be empty when purge is enabled (all files are stable). When purge is disabled, purged files do not affect stability.
Decision Table
For ensure: absent, purged files never affect stability since they don’t belong to the scaffold. For ensure: present, purged files only affect stability when purge is enabled.
When ensure: absent, the Status method filters Changed and Stable lists to only include files that actually exist on disk, so the state reflects reality after removal rather than what the scaffold would create.
| Desired | Target Exists | Changed Files | Purged Files | Purge Enabled | Stable? |
|---|---|---|---|---|---|
| absent | No | N/A | N/A | N/A | Yes |
| absent | Yes | None | Any | N/A | Yes (no managed files on disk) |
| absent | Yes | Some | Any | N/A | No (managed files remain) |
| present | Yes | None | None | Any | Yes |
| present | Yes | None | Some | No | Yes (purged files ignored) |
| present | Yes | None | Some | Yes | No (purge needed) |
| present | Yes | Some | Any | Any | No (render needed) |
| present | No | N/A | N/A | Any | No (target missing) |
Source Resolution
The source property is resolved relative to the manager’s working directory when it is a relative path:
This allows manifests bundled with template directories to use relative paths. URL sources (with a scheme) are passed through unchanged.
Path Validation
Target paths (the resource name) must be:
Absolute (start with /)
Canonical (no . or .. components, filepath.Clean(path) == path)
Post-Processing
The post property defines commands to run on rendered files. Each entry is a map where the key is a glob pattern matched against the file’s basename and the value is a command to execute. Use {} as a placeholder for the file’s full path; if omitted, the path is appended as the last argument.
Post-processing runs immediately after each file is rendered. Validation ensures neither keys nor values are empty.
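The glob matching and `{}` placeholder rule can be sketched with the standard library; the helper name is illustrative, not CCM's actual implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// postCommand matches the glob against the file's basename, then either
// substitutes {} with the full path or appends the path as the last
// argument when no placeholder is present.
func postCommand(pattern, command, path string) (string, bool) {
	ok, err := filepath.Match(pattern, filepath.Base(path))
	if err != nil || !ok {
		return "", false
	}
	if strings.Contains(command, "{}") {
		return strings.ReplaceAll(command, "{}", path), true
	}
	return command + " " + path, true
}

func main() {
	cmd, _ := postCommand("*.go", "go fmt {}", "/opt/app/main.go")
	fmt.Println(cmd) // go fmt /opt/app/main.go
	_, matched := postCommand("*.go", "go fmt {}", "/opt/app/config.yaml")
	fmt.Println(matched) // false: basename does not match the glob
}
```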
Noop Mode
In noop mode, the scaffold type queries the current state via Status() and reports what would change without modifying the filesystem. Neither Scaffold() nor Remove() are called.
For ensure: present, the affected count is the number of changed files plus purged files (when purge is enabled). For ensure: absent, the affected count is the number of changed and stable files plus purged files (when purge is enabled).
| Desired | Affected Count | Message |
|---|---|---|
| present | Changed + Purged (if purge enabled) | Would have changed N scaffold files |
| absent | Changed + Stable + Purged (if purge enabled) | Would have removed N scaffold files |
Changed is set to true only when the affected count is greater than zero. When the resource is already in the desired state, Changed is false and NoopMessage is empty.
Desired State Validation
After applying changes (in non-noop mode), the type verifies the scaffold reached the desired state by checking the changed and purged file lists. If validation fails, ErrDesiredStateFailed is returned.
Subsections of Scaffold Type
Choria Provider
This document describes the implementation details of the Choria scaffold provider for rendering template directories using the choria-io/scaffold library.
Provider Selection
The Choria provider is the default and only scaffold provider. It is always available and returns priority 1 for all scaffold resources.
Operations
Scaffold (Render Templates)
Process:
Check if target directory exists
Configure scaffold with source, target, engine, delimiters, post-processing, and skip_empty settings
Create scaffold instance using the appropriate engine (scaffold.New() for Go, scaffold.NewJet() for Jet)
Call Render() (real mode) or RenderNoop() (noop mode)
Categorize results into changed, stable, and purged file lists
Scaffold Configuration:
| Config Field | Source Property | Description |
|---|---|---|
| TargetDirectory | Name | Target directory for rendered files |
| SourceDirectory | Source | Source template directory |
| MergeTargetDirectory | (always true) | Merge into existing target directory |
| Post | Post | Post-processing commands |
| SkipEmpty | SkipEmpty | Skip empty rendered files |
| CustomLeftDelimiter | LeftDelimiter | Custom template left delimiter |
| CustomRightDelimiter | RightDelimiter | Custom template right delimiter |
Engine Selection:
| Engine | Constructor | Default Delimiters |
|---|---|---|
| go | scaffold.New() | {{ / }} |
| jet | scaffold.NewJet() | [[ / ]] |
Result Categorization:
| Scaffold Action | Metadata List | Description |
|---|---|---|
| FileActionEqual | Stable | File content unchanged |
| FileActionAdd | Changed | New file created |
| FileActionUpdate | Changed | Existing file modified |
| FileActionRemove | Purged | File removed from target |
File paths in the metadata lists are absolute paths, constructed by joining the target directory with the relative path from the scaffold result.
Purge Behavior:
When purge is enabled and a file has FileActionRemove, the provider deletes the file from disk during Scaffold(). In noop mode, the removal is logged but not performed. When purge is disabled, purged files are only tracked in metadata and not removed.
Status
Process:
Perform a dry-run render (noop mode) to determine what the scaffold would do
When ensure is absent, filter Changed and Stable lists to only include files that actually exist on disk
The noop render reports what would happen if the scaffold were applied. For ensure: present, this is the desired output — it shows what needs to change. For ensure: absent, the raw render output is misleading after removal (it would show files to be added), so the lists are filtered to reflect what managed files actually remain on disk.
State Detection:
| Target Directory | Ensure Value | Metadata |
|---|---|---|
| Exists | present | Changed, stable, and purged file lists from render |
| Exists | absent | Changed and stable filtered to files on disk, purged from render |
| Does not exist | Any | Empty metadata, TargetExists: false |
Remove
Process:
Collect managed files from the state’s Changed and Stable lists (purged files are not removed as they don’t belong to the scaffold)
Stop when no more empty directories can be removed
Best-effort removal of the target directory (only succeeds if empty)
File Removal Order:
Files are collected from two metadata lists:
Changed - Files that were added or modified
Stable - Files that were unchanged
Purged files are not removed because they are unrelated to the scaffold and may belong to other processes.
Directory Cleanup:
For each removed file:
Track its parent directory
Repeat:
For each tracked directory:
Skip if it is the target directory itself
Skip if not empty
Remove the directory
Track its parent directory
Until no more directories removed
Best-effort: remove the target directory (fails silently if not empty)
The target directory is removed if empty after all managed files and subdirectories are cleaned up. If unrelated files remain (purged files), the directory is preserved.
Error Handling:
| Condition | Behavior |
|---|---|
| Non-absolute file path | Return error immediately |
| File removal fails | Log error, continue with remaining files |
| Directory removal fails | Log error, continue with remaining directories |
| File does not exist | Silently skip (os.IsNotExist check) |
| Target directory removal fails | Log at debug level, no error returned |
Template Environment
Templates receive the full templates.Env environment, which provides access to:
facts - System facts for the managed node
data - Hiera-resolved configuration data, or custom data when the resource’s data property is set
Template helper functions
When the scaffold resource has a data property set, env.Data is replaced with the custom data before the provider’s Status() and Scaffold() methods are called. The provider receives the already-resolved environment and does not need to handle this override itself.
This allows templates to generate host-specific configurations based on facts and hierarchical data.
Logging
The provider wraps the CCM logger in a scaffold-compatible interface:
This adapter translates the scaffold library’s Debugf/Infof calls to CCM’s structured logging.
Platform Support
The Choria provider is platform-independent. It uses the choria-io/scaffold library for template rendering, which operates on standard filesystem operations. No platform-specific system calls are used.
Service Type
This document describes the design of the service resource type for managing system services.
Overview
The service resource manages system services with two independent dimensions:
Running state: Whether the service is currently running or stopped
Enabled state: Whether the service starts automatically at boot
These are managed independently, allowing combinations like “running but disabled” or “stopped but enabled”.
Provider Interface
Service providers must implement the ServiceProvider interface:
The Status method returns a ServiceState containing:
```go
type ServiceState struct {
	CommonResourceState
	Metadata *ServiceMetadata
}

type ServiceMetadata struct {
	Name     string // Service name
	Provider string // Provider name (e.g., "systemd")
	Enabled  bool   // Whether service starts at boot
	Running  bool   // Whether service is currently running
}
```
The Ensure field in CommonResourceState is set to:
If the desired state is not reached, an ErrDesiredStateFailed error is returned.
Service Name Validation
Service names are validated to prevent injection attacks:
Allowed Characters:
Alphanumeric (a-z, A-Z, 0-9)
Period (.), underscore (_), plus (+)
Colon (:), tilde (~), hyphen (-)
Rejected:
Shell metacharacters (;, |, &, etc.)
Whitespace
Path separators
Noop Mode
In noop mode, the service type:
Queries current state normally
Logs what actions would be taken
Sets appropriate NoopMessage (e.g., “Would have started”, “Would have enabled”)
Reports Changed: true if changes would occur
Does not call provider Start/Stop/Restart/Enable/Disable methods
Subsections of Service Type
Systemd Provider
This document describes the implementation details of the Systemd service provider for managing system services via systemctl.
Provider Selection
The Systemd provider is selected when systemctl is found in the system PATH. The provider checks for the executable using util.ExecutableInPath("systemctl").
Availability Check:
Searches PATH for systemctl
Returns priority 1 if found
Returns unavailable if not found
Concurrency
A global service lock (model.ServiceGlobalLock) is held during all systemctl command executions to prevent concurrent systemd operations within the same process. This prevents race conditions when multiple service resources are managed simultaneously.
The provider performs a systemctl daemon-reload once per provider instance before any service operations. This ensures systemd picks up any unit file changes made by other resources (e.g., file resources managing unit files).
Integration points - Factory functions and registry
CLI commands - User-facing command line interface
JSON schemas - Validation for manifests and API requests
Documentation - User and design documentation
CCM Studio - Web-based manifest designer
File Checklist
| File | Action | Purpose |
|---|---|---|
| model/resource_<type>.go | Create | Properties, state, metadata structs |
| model/resource_<type>_test.go | Create | Property validation tests |
| model/resource.go | Modify | Add case to factory function |
| resources/<type>/<type>.go | Create | Provider interface definition |
| resources/<type>/type.go | Create | Resource type implementation |
| resources/<type>/type_test.go | Create | Resource type tests |
| resources/<type>/provider_mock_test.go | Generate | Mock provider for tests |
| resources/<type>/<provider>/factory.go | Create | Provider factory |
| resources/<type>/<provider>/<provider>.go | Create | Provider implementation |
| resources/<type>/<provider>/<provider>_test.go | Create | Provider tests |
| resources/resources.go | Modify | Add case to NewResourceFromProperties |
| cmd/ensure_<type>.go | Create | CLI command handler |
| cmd/ensure.go | Modify | Register CLI command |
| internal/fs/schemas/manifest.json | Modify | Add resource schema definitions |
| internal/fs/schemas/resource_ensure_request.json | Modify | Add API request schema |
| docs/content/resources/<type>.md | Create | User documentation |
| docs/content/design/<type>/_index.md | Create | Design documentation |
| docs/content/design/<type>/<provider>.md | Create | Provider documentation |
Step 1: Model Definitions
Create model/resource_<type>.go with the following components.
Constants
```go
const (
	// ResourceStatus<Type>Protocol is the protocol identifier for <type> resource state
	ResourceStatus<Type>Protocol = "io.choria.ccm.v1.resource.<type>.state"

	// <Type>TypeName is the type name for <type> resources
	<Type>TypeName = "<type>"
)
```
Properties Struct
The properties struct must satisfy model.ResourceProperties:
```go
type <Type>ResourceProperties struct {
	CommonResourceProperties `yaml:",inline"`

	// All string fields are automatically template-resolved by default.
	// Use struct tags to control resolution behavior.
	Url      string `json:"url" yaml:"url"`
	Checksum string `json:"checksum,omitempty" yaml:"checksum,omitempty"`

	// Fields that must not be template-resolved
	Delimiter string `json:"delimiter,omitempty" yaml:"delimiter,omitempty" template:"-"`

	// Fields deferred until after control evaluation
	Content string `json:"content,omitempty" yaml:"content,omitempty" template:"deferred"`

	// ...
}
```
Key points:
- Embed `CommonResourceProperties` with the `yaml:",inline"` tag
- Use JSON and YAML struct tags for serialization
- In `Validate()`, call `p.CommonResourceProperties.Validate()` first, then add type-specific validation
- Template resolution is handled automatically via reflection; see the Template Resolution section for details
- `ResolveDeferredTemplates()` is called after control evaluation (`if`/`unless`). Override it only if you have `template:"deferred"` fields that need post-processing (e.g. `filepath.Clean`). The default no-op from `CommonResourceProperties` is sufficient for most types. See the file resource for an example where `Contents` and `Source` are deferred.
State Struct
The state struct must satisfy model.ResourceState:
```go
type <Type>Metadata struct {
	Name     string `json:"name" yaml:"name"`
	Provider string `json:"provider,omitempty" yaml:"provider,omitempty"`

	// Add fields describing current system state
}

type <Type>State struct {
	CommonResourceState

	Metadata *<Type>Metadata `json:"metadata,omitempty"`
}
```
Embedding `*base.Base` provides implementations for `Apply()`, `Healthcheck()`, `Type()`, `Name()`, `Properties()`, and `NewTransactionEvent()`; the type must implement the remaining methods itself. See `resources/archive/type.go` for a complete constructor example.
ApplyResource Method
The ApplyResource method (part of base.EmbeddedResource) contains the core logic. It should follow this pattern:
1. Get the initial state via `provider.Status()`
2. Check whether the resource is already in the desired state (implement an `isDesiredState()` helper)
3. If stable, call `t.FinalizeState()` and return early
4. Apply changes, respecting `t.mgr.NoopMode()`
5. Get the final state and verify the desired state was achieved
6. Call `t.FinalizeState()` with appropriate flags
See resources/archive/type.go:ApplyResource() for a complete example.
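The steps above can be sketched as follows; every type and method name in this sketch is a simplified stand-in for the real CCM interfaces, not the actual API:

```go
package main

import (
	"errors"
	"fmt"
)

type state struct{ running bool }

type provider struct{ current state }

func (p *provider) Status() state { return p.current }
func (p *provider) Start() error  { p.current.running = true; return nil }

type txn struct {
	p       *provider
	noop    bool
	changed bool
	message string
}

func (t *txn) isDesiredState(s state) bool { return s.running }

// FinalizeState stands in for recording the final state on the transaction.
func (t *txn) FinalizeState(s state, changed bool) { t.changed = changed }

func (t *txn) ApplyResource() error {
	// 1. Get the initial state via the provider
	initial := t.p.Status()

	// 2-3. Return early when already in the desired state
	if t.isDesiredState(initial) {
		t.FinalizeState(initial, false)
		return nil
	}

	// 4. Apply changes, respecting noop mode
	if t.noop {
		t.message = "Would have started the service"
	} else if err := t.p.Start(); err != nil {
		return err
	}

	// 5. Verify the desired state was achieved (skipped in noop mode)
	final := t.p.Status()
	if !t.noop && !t.isDesiredState(final) {
		return errors.New("desired state not achieved")
	}

	// 6. Record the outcome
	t.FinalizeState(final, !t.noop)
	return nil
}

func main() {
	t := &txn{p: &provider{}}
	if err := t.ApplyResource(); err != nil {
		panic(err)
	}
	fmt.Println("changed:", t.changed)
}
```

Running the same transaction twice demonstrates idempotency: the second run hits the early-return path and reports no change.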
Provider Selection Methods
The SelectProvider() method should use registry.FindSuitableProvider() to select an appropriate provider. See resources/archive/type.go for the standard implementation pattern.
Within `ApplyResource`, respect noop mode when applying changes:

```go
if !noop {
	// Make actual changes
	t.log.Info("Applying changes")
	err = p.SomeAction(ctx, properties)
} else {
	t.log.Info("Skipping changes as noop")
	noopMessage = "Would have applied changes"
}
```
Error Handling
Use sentinel errors from model/errors.go:
```go
var (
	ErrResourceInvalid    = errors.New("resource invalid")
	ErrProviderNotFound   = errors.New("provider not found")
	ErrNoSuitableProvider = errors.New("no suitable provider")
	ErrDesiredStateFailed = errors.New("desired state not achieved")
)
```
Wrap errors with context:
```go
err := os.Remove(path)
if err != nil {
	return fmt.Errorf("could not remove file: %w", err)
}
```
Template Resolution
Template resolution uses a reflection-based struct walker (templates.ResolveStructTemplates) that automatically resolves {{ expression }} placeholders in all string-typed fields. The walker recurses into all composite types including slices, maps, nested structs, and pointer fields.
By default, all fields are template-resolved. You control behavior with the template struct tag:
| Tag | Behavior |
| --- | --- |
| (none) | Resolved during `ResolveTemplates()` (phase 1) |
| `template:"-"` | Never resolved. Use for enum values, literal delimiters, resource references, or fields evaluated separately (like control expressions) |
| `template:"deferred"` | Skipped in phase 1, resolved during `ResolveDeferredTemplates()` (phase 2, after control evaluation) |
| `template:"resolve_keys"` | For map fields, also resolve map keys (rebuilds the map). By default only map values are resolved |
Fields tagged json:"-" are automatically skipped (these are internal computed fields like ParsedTimeout).
Supported types (resolved recursively):

- `string` and named string types (e.g. `type MyType string`)
- `[]string`, `[]any`
- `map[string]string`, `map[string]any`, `map[string][]string`, and other map variants with string keys
- `[]map[string]string`, `[]map[string]any`
- Nested and embedded structs, `*struct` pointers
- `any` / `interface{}` fields holding any of the above
- Arbitrary nesting depth

Types that are not resolved: `bool`, `int`, `float`, `time.Duration`, `[]byte` / `yaml.RawMessage`, nil pointers.
Implementation pattern: most resource types need only the default resolution behavior inherited from CommonResourceProperties.
The resolveRegistrations call (inherited from CommonResourceProperties) handles RegisterWhenStable entries which need special typed resolution for the Port field.
Deferred resolution is used for fields whose template evaluation may fail when the resource would be skipped by a control (`if`/`unless`). Tag these fields with `template:"deferred"` and override `ResolveDeferredTemplates()`.
This method is called by base.Base after control evaluation passes, so templates are only evaluated for resources that will actually be applied. Because deferred resolution happens at apply time rather than during manifest parsing, templates using functions like file() can access content created by earlier resources in the same run. The default no-op implementation inherited from CommonResourceProperties is sufficient for types that have no template:"deferred" fields.
Provider Selection
Providers declare manageability via IsManageable on the factory (see model.ProviderFactory in Step 3). Multiple providers can match; the one with highest priority is selected.
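The selection rule can be sketched as follows; the `ProviderFactory` shape and the `findSuitableProvider` helper are illustrative assumptions, not the real `registry.FindSuitableProvider` signature:

```go
package main

import "fmt"

// ProviderFactory is a simplified stand-in for model.ProviderFactory.
type ProviderFactory struct {
	Name         string
	Priority     int
	IsManageable func() bool
}

// findSuitableProvider returns, among factories whose IsManageable check
// passes, the one with the highest priority.
func findSuitableProvider(factories []ProviderFactory) (ProviderFactory, error) {
	var best *ProviderFactory
	for i := range factories {
		f := &factories[i]
		if !f.IsManageable() {
			continue
		}
		if best == nil || f.Priority > best.Priority {
			best = f
		}
	}
	if best == nil {
		return ProviderFactory{}, fmt.Errorf("no suitable provider")
	}
	return *best, nil
}

func main() {
	yes := func() bool { return true }
	no := func() bool { return false }
	p, _ := findSuitableProvider([]ProviderFactory{
		{Name: "dnf", Priority: 10, IsManageable: no},
		{Name: "apt", Priority: 5, IsManageable: yes},
		{Name: "snap", Priority: 1, IsManageable: yes},
	})
	fmt.Println(p.Name) // apt: highest priority among manageable providers
}
```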
Documentation
Create user documentation in `docs/content/resources/<type>.md` covering:

- Overview and use cases
- Ensure states table
- Properties table with descriptions
- Usage examples (manifest, CLI, API)

Create design documentation in `docs/content/design/<type>/_index.md` covering:

- Provider interface specification
- State checking logic
- Apply logic flowchart

Create provider documentation in `docs/content/design/<type>/<provider>.md` covering:

- Provider selection criteria
- Platform requirements
- Implementation details
CCM Studio
CCM Studio is a web-based manifest designer. After adding a new resource type, update CCM Studio to support it:
> [!info] Note
> CCM Studio is a closed-source project. The maintainers will complete this step.
- Add the new resource type to the resource palette
- Create property editors for type-specific fields
- Add validation matching the JSON schema definitions
- Update any resource type documentation or help text
Docs Style Guide
This guide describes the writing conventions used throughout the CCM documentation. Follow these rules when adding or editing pages.
All sections apply to every documentation page. The Page structure section applies only to resource reference pages under resources/.
Voice and tone
- Write in plain, direct North American English.
- Use the present tense and active voice: “The service resource manages system services,” not “System services are managed by the service resource.”
- Address the reader implicitly. Do not use “you” or “we”. State facts and give instructions: “Specify commands with their full path,” not “You should specify commands with their full path.”
- Keep sentences short. One idea per sentence.
- Do not editorialize or use filler (“Note that,” “It is important to,” “Simply”).
- Do not use emojis.
- Do not use em dashes. Use commas, periods, or semicolons instead.
Page structure
Every resource page follows this order:
1. Front matter: TOML (`+++`) with `title`, `description`, `toc = true`, and `weight`.
2. Opening paragraph: One or two sentences stating what the resource does.
3. Callout: A warning or note about common pitfalls, using `> [!info]` syntax.
4. Primary example: A tabbed block (Manifest / CLI / API Request) showing typical usage.
5. Brief explanation: One or two sentences describing what the example does.
6. Ensure values: Table of valid ensure states.
7. Properties: Table of all properties with short descriptions.
8. Additional sections: Provider notes, idempotency, authentication, behavioral details as needed.
Not every example needs all three tabs. Secondary examples deeper in a page may show only the most relevant format.
YAML
- Use realistic but minimal values.
- Quote version strings and octal modes: `"5.9"`, `"0644"`.
CLI
- Use `nohighlight` as the fence language.
- Use backslash continuations for long commands.
- Add a brief comment above the command when context is needed.
JSON
- Use `json` as the fence language.
- Always include the `protocol` and `type` fields in API examples.
Callouts
Use the > [!info] blockquote syntax for warnings and notes:
> [!info] Warning
> Use absolute file paths and primary group names.
> [!info] Note
> The provider will not run `apt update` before installing a package.
Use Warning for constraints the reader must follow to avoid errors. Use Note for supplementary information. A custom label may replace Warning or Note when it adds clarity, such as > [!info] Default Hierarchy.
Descriptions and explanations
- After a tabbed example block, add one or two sentences explaining what the example does and why.
- Describe behavior, not implementation: “The command runs only if /tmp/hello does not exist,” not “The code checks whether the file exists and skips execution if found.”
- When describing how multiple options interact, use a truth table.
Terminology
- Use “resource,” “provider,” “property,” “manifest” consistently.
- Refer to ensure states and property names in backticks: `present`, `name`, `ensure`.
- Reference other resources using the `type#name` notation in backticks: `package#httpd`.
- When cross-referencing other documentation pages, use relative Hugo links.
General formatting
- No trailing whitespace.
- One blank line between sections.
- No blank line between a heading and its first paragraph.
- Wrap inline code, file paths, command names, property names, and values in backticks.
- Do not use bold or italic for emphasis in reference content. Reserve bold for definition list terms within prose.