# Template Operator

This documentation and template serve as a reference for implementing a module operator for integration with Lifecycle Manager. It utilizes the kubebuilder framework with some modifications to implement Kubernetes APIs for Custom Resource Definitions (CRDs). Additionally, it hides Kubernetes boilerplate code to develop fast and efficient control loops in Go.

Before going in-depth, make sure you are familiar with Kyma modularization.
This guide serves as a comprehensive step-by-step tutorial on properly creating a module from scratch using the operator that installs the Kubernetes YAML resources.
NOTE: While other approaches are encouraged, there are no dedicated guides available yet. These will follow with sufficient requests and the adoption of Kyma modularization.
Every Kyma module using the operator follows five basic principles:
- Is declared as available for use in a release channel through the ModuleTemplate custom resource (CR) in Control Plane
- Is declared as the desired state within the Kyma CR in the runtime or Control Plane
- Is installed or managed in the runtime by Lifecycle Manager through the Manifest CR in Control Plane
- Owns at least one CRD that defines the contract towards a runtime administrator and configures its behavior
- Operates on at most one runtime at any given time
Release channels let the customers try new modules and features early and decide when to apply the updates. For more information, see the release channels documentation in the modularization overview.
The following rules apply to the channel naming:
- Lowercase letters from a to z.
- The total length is between 3 and 32 characters.
If you are planning to migrate a pre-existing module within Kyma, read the transition plan for existing modules.
WARNING: For all use cases in the guide, you need a cluster for end-to-end testing outside your envtest integration test suite. It's HIGHLY RECOMMENDED that you follow this guide for a smooth development process. This is a good alternative if you do not want to use the entire Control Plane infrastructure and still want to test your operators properly.
- A provisioned Kubernetes cluster and OCI registry
- kubectl
- kubebuilder
Use one of the following options to install kubebuilder:
```bash
brew install kubebuilder
```
```bash
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
```
- modulectl
- An OCI registry to host OCI images
- You can use a local registry provided by k3d or use the Google Container Registry (GCR).
- Initialize the kubebuilder project. Make sure the domain is set to `kyma-project.io`. Execute the following command in the `test-operator` folder:

  ```bash
  kubebuilder init --domain kyma-project.io --repo github.com/kyma-project/test-operator --project-name=test-operator --plugins=go/v4-alpha
  ```
- Create the API group version and kind for the intended CR(s). Make sure `group` is set to `operator`:

  ```bash
  kubebuilder create api --group operator --version v1alpha1 --kind Sample --resource --controller --make
  ```

- Run `make manifests` to generate the respective CRDs.
- Set up a basic kubebuilder operator with appropriate scaffolding.
If the module operator is deployed under the same namespace with other operators, differentiate your resources by adding common labels.
- Add `commonLabels` to the default `kustomization.yaml`. See the reference implementation.
- Include all resources (for example, `manager.yaml`) that contain label selectors by using `commonLabels`.

Further reading: Kustomize Built-In commonLabels
- Refer to the State requirements and similarly include them in your `Status` sub-resource. This `Status` sub-resource must contain all valid `State` (`.status.state`) values to be compliant with the Kyma ecosystem.
```go
package v1alpha1

// Status defines the observed state of Module CR.
type Status struct {
	// State signifies current state of Module CR.
	// Value can be one of ("Ready", "Processing", "Error", "Deleting").
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:Enum=Processing;Deleting;Ready;Error
	State State `json:"state"`
}
```
Include the `State` values in your `Status` sub-resource, either through an inline reference or direct inclusion. These values carry defined semantics in the Kyma ecosystem, so use them accordingly.
- Optionally, you can add additional fields to your `Status` sub-resource. For instance, `Conditions` are added to the Sample CR in the API definition. This also includes the required `State` values, using an inline reference. See the following Sample CR reference implementation.

  ```go
  package v1alpha1

  // Sample is the Schema for the samples API
  type Sample struct {
      metav1.TypeMeta   `json:",inline"`
      metav1.ObjectMeta `json:"metadata,omitempty"`

      Spec   SampleSpec   `json:"spec,omitempty"`
      Status SampleStatus `json:"status,omitempty"`
  }

  type SampleStatus struct {
      Status `json:",inline"`

      // Conditions contain a set of conditionals to determine the State of Status.
      // If all Conditions are met, State is expected to be in StateReady.
      Conditions []metav1.Condition `json:"conditions,omitempty"`

      // add other fields to status subresource here
  }
  ```
- Run `make generate manifests` to generate boilerplate code and manifests.
WARNING: This sample implementation is only for reference. You can copy parts of it but do not add this repository as a dependency to your project.
- Implement `State` handling to represent the corresponding state of the reconciled resource by following the kubebuilder guidelines on how to implement controllers.
- Refer to the Sample CR controller implementation for setting the appropriate `State` and `Conditions` values in your `Status` sub-resource.
The Sample CR is reconciled to install or uninstall a list of rendered resources from a YAML file on the file system.
```go
r.setStatusForObjectInstance(ctx, objectInstance, status.
	WithState(v1alpha1.StateReady).
	WithInstallConditionStatus(metav1.ConditionTrue, objectInstance.GetGeneration()))
```
- The reference controller implementations listed above use Server-Side Apply instead of conventional methods to process resources on the target cluster. You can leverage parts of this logic to implement your own controller logic. Check out functions inside these controllers for state management and other implementation details.
- Connect to your cluster and ensure kubectl is pointing to the desired cluster.
- Install the CRDs with `make install`.

  WARNING: This installs a CRD on your cluster, so create your cluster before running the `install` command. See Prerequisites for details on the cluster setup.

- Local setup: install your module CR on a cluster and execute `make run` to start your operator locally.

WARNING: Note that while `make run` fully runs your controller against the cluster, it is not feasible to compare it to a productive operator. This is mainly because it runs with a client configured with privileges derived from your `KUBECONFIG` environment variable. For in-cluster configuration, see the Guide on RBAC Management.
Ensure you have appropriate authorizations assigned to your controller binary before running it inside a cluster (not locally with `make run`). The Sample CR controller implementation includes RBAC generation (via kubebuilder) for all resources across all API groups. Adjust it according to the chart manifest resources and reconciliation types.

In the earlier stages of your operator development, the RBACs can accommodate all resource types; adjust them later according to your requirements.
```go
package controllers

// TODO: dynamically create RBACs! Remove line below.
//+kubebuilder:rbac:groups="*",resources="*",verbs="*"
```

REMEMBER: Run `make manifests` after this adjustment for it to take effect.
WARNING: This step requires the working OCI registry. See Prerequisites.
- Include the static module data in your Dockerfile:

  ```dockerfile
  FROM gcr.io/distroless/static:nonroot
  WORKDIR /
  COPY module-data/ module-data/
  COPY --from=builder /workspace/manager .
  USER 65532:65532
  ENTRYPOINT ["/manager"]
  ```
The sample module data in this repository includes a YAML manifest in the `module-data/yaml` directory. Reference the YAML manifest directory with the `spec.resourceFilePath` attribute of the Sample CR. The example CRs in the `config/samples` directory already reference the mentioned directory.

Feel free to organize the static data differently. The included `module-data` directory serves just as an example.
You may also decide not to include any static data at all. In that case, you must provide the controller with the YAML data at runtime using other techniques, such as Kubernetes volume mounting.
- If necessary, build and push your module operator binary by adjusting `IMG` and running the inbuilt kubebuilder commands. Assuming your operator image has the following base settings:

  - it is hosted at `op-kcp-registry.localhost:8888/unsigned/operator-images`
  - the controller image name is `sample-operator`
  - the controller image has version `0.0.1`

  you can run the following command:

  ```bash
  make docker-build docker-push IMG="op-kcp-registry.localhost:8888/unsigned/operator-images/sample-operator:0.0.1"
  ```

  This builds the controller image and then pushes it as the image defined in `IMG` based on the kubebuilder targets.
WARNING: This step requires the working OCI Registry, cluster, and Kyma CLI. See Prerequisites.
- Generate the CRDs and resources for the module from the `default` kustomization into a manifest file using the following command:

  ```bash
  make build-manifests
  ```

  You can use this file as a manifest for the module configuration in the next step.
Furthermore, make sure the settings from Prepare and Build Module Operator Image for single-cluster mode, and the following module settings are applied:

- the module is hosted at `op-kcp-registry.localhost:8888/unsigned`
- for a k3d registry, the `insecure` flag (`http` instead of `https` for registry communication) is enabled
- modulectl in `$PATH` under `modulectl` is used
- the default sample under `config/samples/operator.kyma-project.io_v1alpha1_sample.yaml` has been adjusted to be a valid CR

  WARNING: The settings above reflect your default configuration for a module. To change them, adjust them manually to a different configuration. You can also define multiple files in `config/samples`, but you must specify the correct file during the bundling.

- `.gitignore` has been adjusted and the following ignores have been added:

  ```
  # generated dummy charts
  charts

  # template generated by kyma create module
  template.yaml
  ```
- To configure the module, adjust the file `module-config.yaml`, located at the root of the repository.

  The following fields are available for the configuration of the module:

  - `name`: (Required) The name of the module.
  - `version`: (Required) The version of the module.
  - `manifest`: (Required) Reference to the manifest; must be a URL or a local file path.
  - `repository`: (Required) Reference to the repository; must be a URL.
  - `documentation`: (Required) Reference to the documentation; must be a URL.
  - `icons`: (Required) Icons used for UI.
  - `defaultCR`: (Optional) Reference to a YAML file containing the default CR for the module; must be a URL or a local file path.
  - `mandatory`: (Optional) Default=`false`; indicates whether the module is mandatory to be installed on all clusters.
  - `security`: (Optional) Reference to a YAML file containing the security scanners config; must be a local file path.
  - `labels`: (Optional) Additional labels for the generated ModuleTemplate CR.
  - `annotations`: (Optional) Additional annotations for the generated ModuleTemplate CR.
  - `manager`: (Optional) Module resource that indicates the installation readiness of the module, typically the manager deployment of the module.
  - `associatedResources`: (Optional) Resources that should be cleaned up with the module deletion.
  - `resources`: (Optional) Additional resources of the module that may be fetched.
  - `requiresDowntime`: (Optional) Default=`false`; indicates whether the module requires downtime to support maintenance windows during module upgrades.
  - `namespace`: (Optional) Default=`kcp-system`; the namespace where the ModuleTemplate is deployed.
  - `internal`: (Optional) Default=`false`; indicates whether the module is internal.
  - `beta`: (Optional) Default=`false`; indicates whether the module is beta.

    **CAUTION:** This field was deprecated at the end of July 2024 and will be deleted in the next [Lifecycle Manager](https://github.com/kyma-project/lifecycle-manager/tree/main/) API versions.
An example configuration:

```yaml
name: kyma-project.io/module/template-operator
version: v1.0.0
channel: regular
manifest: template-operator.yaml
```
- Run the following command to create the module configured in `module-config.yaml` and push your module operator image to the specified registry:

  ```bash
  modulectl create --insecure --registry op-kcp-registry.localhost:8888/unsigned --module-config-file module-config.yaml
  ```

  WARNING: For external registries (for example, Google Container/Artifact Registry), never use `insecure`. Instead, specify credentials. You can find more details in the CLI help documentation.
- Verify that the module creation succeeded and observe the generated `template.yaml` file. It will contain the ModuleTemplate CR and the descriptor of the component under `spec.descriptor.component`.

  ```yaml
  component:
    componentReferences: []
    labels:
    - name: security.kyma-project.io/scan
      value: enabled
      version: v1
    name: kyma-project.io/module/template-operator
    provider: '{"name":"kyma-project.io","labels":[{"name":"kyma-project.io/built-by","value":"modulectl","version":"v1"}]}'
    repositoryContexts:
    - baseUrl: http://op-kcp-registry.localhost:8888/unsigned
      componentNameMapping: urlPath
      type: ociRegistry
    resources:
    - access:
        imageReference: europe-docker.pkg.dev/kyma-project/prod/template-operator:1.0.2
        type: ociArtifact
      labels:
      - name: scan.security.kyma-project.io/type
        value: third-party-image
        version: v1
      name: template-operator
      relation: external
      type: ociArtifact
      version: 1.0.2
    - access:
        localReference: sha256:4d17ea40fc5f5b1451cb2f23491510df10f79e629fcf5617fed0234dc766de59
        mediaType: application/x-yaml
        referenceName: metadata
        type: localBlob
      digest:
        hashAlgorithm: SHA-256
        normalisationAlgorithm: genericBlobDigest/v1
        value: 4d17ea40fc5f5b1451cb2f23491510df10f79e629fcf5617fed0234dc766de59
      name: metadata
      relation: local
      type: plainText
      version: 1.0.2
    - access:
        localReference: sha256:9dc9ee3a3adfa21d5971044f7acc6700c058e26d8e6c5d81e87e47f2a1d17b24
        mediaType: application/x-tar
        referenceName: raw-manifest
        type: localBlob
      digest:
        hashAlgorithm: SHA-256
        normalisationAlgorithm: genericBlobDigest/v1
        value: 9dc9ee3a3adfa21d5971044f7acc6700c058e26d8e6c5d81e87e47f2a1d17b24
      name: raw-manifest
      relation: local
      type: directoryTree
      version: 1.0.2
    - access:
        localReference: sha256:abfaffddba9c4121d17e77f972c4b4f6d363cb7199394ba3df3e34915bedd8ac
        mediaType: application/x-tar
        referenceName: default-cr
        type: localBlob
      digest:
        hashAlgorithm: SHA-256
        normalisationAlgorithm: genericBlobDigest/v1
        value: abfaffddba9c4121d17e77f972c4b4f6d363cb7199394ba3df3e34915bedd8ac
      name: default-cr
      relation: local
      type: directoryTree
      version: 1.0.2
    sources:
    - access:
        commit: 5ee2e6397de7245e031b81dc26a50ef9006f0483
        repoUrl: https://github.com/kyma-project/template-operator
        type: gitHub
      labels:
      - name: git.kyma-project.io/ref
        value: HEAD
        version: v1
      - name: scan.security.kyma-project.io/rc-tag
        value: ""
        version: v1
      - name: scan.security.kyma-project.io/language
        value: golang-mod
        version: v1
      - name: scan.security.kyma-project.io/dev-branch
        value: ""
        version: v1
      - name: scan.security.kyma-project.io/subprojects
        value: ""
        version: v1
      - name: scan.security.kyma-project.io/exclude
        value: '**/test/**,**/*_test.go'
        version: v1
      name: module-sources
      type: Github
      version: 1.0.2
    version: 1.0.2
  ```
The CLI created various layers that are referenced in the `blobs` directory. For more information on the layer structure, check the module creation with `modulectl create --help`.
You can extend the operator further by using automated dashboard generation for Grafana.
Use the following command to generate two Grafana dashboard files with the controller-related metrics in the `/grafana` folder:

```bash
kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha
```

To import the Grafana dashboards, read the official Grafana guide. This feature is supported by the kubebuilder Grafana plugin.
The operator ecosystem around Kyma is complex, and it might become troublesome to debug issues in case your module is not installed correctly. For this reason, here are some best practices on how to debug modules developed using this guide.
- Verify that the Kyma installation state is `Ready` by verifying all conditions:

  ```bash
  JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.reason}:{@.status};{end}{end}' \
    && kubectl get kyma -o jsonpath="$JSONPATH" -n kcp-system
  ```

- Verify that the Manifest installation state is ready by verifying all conditions:

  ```bash
  JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
    && kubectl get manifest -o jsonpath="$JSONPATH" -n kcp-system
  ```
- Depending on your issue, observe the deployment logs from either Lifecycle Manager or Module Manager. Make sure that no errors have occurred.
Usually, the issue is related to either RBAC configuration (for troubleshooting minimum privileges for the controllers, see our dedicated RBAC section), misconfigured image, module registry or ModuleTemplate.
As a last resort, make sure that you are running within a single-cluster or a dual-cluster setup, watch out for any steps with a WARNING
specified and retry with a freshly provisioned cluster.
For cluster provisioning, make sure to follow the recommendations for clusters mentioned in our Prerequisites for this guide.
Lastly, if you are still unsure, open an issue with a description and steps to reproduce. We will be happy to help you with a solution.
For global usage of your module, the generated `template.yaml` from Build and Push your Module to the Registry must be registered in our Control Plane. This relates to Phase 2 of the module transition plan. Please be patient until we provide you with a stable guide on integrating your `template.yaml` properly, with an automated test flow, into the central Control Plane offering.
See the Contributing Rules.
See the Code of Conduct document.
See the license file.