From 816ebe99c207ddba1e53ad47164687750d109796 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Tue, 28 Mar 2023 15:43:35 +0200 Subject: [PATCH 01/11] refactored usage guide --- docs/modules/kafka/pages/dependencies.adoc | 9 - .../pages/getting_started/first_steps.adoc | 2 +- docs/modules/kafka/pages/index.adoc | 19 +- .../configuration-environment-overrides.adoc | 63 ++++ .../pages/{ => usage-guide}/discovery.adoc | 1 + .../kafka/pages/usage-guide/index.adoc | 2 + .../kafka/pages/usage-guide/logging.adoc | 18 + .../kafka/pages/usage-guide/monitoring.adoc | 4 + .../pages/usage-guide/pod-placement.adoc | 22 ++ .../kafka/pages/usage-guide/security.adoc | 161 ++++++++ .../pages/usage-guide/storage-resources.adoc | 44 +++ docs/modules/kafka/pages/usage.adoc | 348 ------------------ docs/modules/kafka/partials/nav.adoc | 12 +- 13 files changed, 336 insertions(+), 369 deletions(-) delete mode 100644 docs/modules/kafka/pages/dependencies.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc rename docs/modules/kafka/pages/{ => usage-guide}/discovery.adoc (97%) create mode 100644 docs/modules/kafka/pages/usage-guide/index.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/logging.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/monitoring.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/pod-placement.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/security.adoc create mode 100644 docs/modules/kafka/pages/usage-guide/storage-resources.adoc delete mode 100644 docs/modules/kafka/pages/usage.adoc diff --git a/docs/modules/kafka/pages/dependencies.adoc b/docs/modules/kafka/pages/dependencies.adoc deleted file mode 100644 index 26b66bb2..00000000 --- a/docs/modules/kafka/pages/dependencies.adoc +++ /dev/null @@ -1,9 +0,0 @@ -= Dependencies - -== ZooKeeper - -Kafka currently requires ZooKeeper for coordination purposes. 
- -NOTE: This will change https://cwiki.apache.org/confluence/display/KAFKA/KIP-500[in the future]. - -Which means a reference to an existing ZooKeeper ensemble must be provided. diff --git a/docs/modules/kafka/pages/getting_started/first_steps.adoc b/docs/modules/kafka/pages/getting_started/first_steps.adoc index 4d03179b..df303a48 100644 --- a/docs/modules/kafka/pages/getting_started/first_steps.adoc +++ b/docs/modules/kafka/pages/getting_started/first_steps.adoc @@ -30,7 +30,7 @@ include::example$getting_started/getting_started.sh[tag=install-zookeeper] Create a file `kafka-znode.yaml` with the following content: -[source,bash] +[source,yaml] ---- include::example$getting_started/kafka-znode.yaml[] ---- diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 400daf66..5a5e1ff1 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -4,15 +4,20 @@ This is an operator for Kubernetes that can manage https://kafka.apache.org/[Apa WARNING: This operator only works with images from the https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fkafka[Stackable] repository += Dependencies + +== ZooKeeper + +Kafka currently requires ZooKeeper for coordination purposes. + +NOTE: This will change https://cwiki.apache.org/confluence/display/KAFKA/KIP-500[in the future]. + +Which means a reference to an existing ZooKeeper ensemble must be provided. 
+
+
+
 == Supported Versions
 
 The Stackable Operator for Apache Kafka currently supports the following versions of Kafka:
 
 include::partial$supported-versions.adoc[]
-
-== Getting the Docker image
-
-[source]
-----
-docker pull docker.stackable.tech/stackable/kafka:
-----
diff --git a/docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc b/docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc
new file mode 100644
index 00000000..7f3fcecf
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/configuration-environment-overrides.adoc
@@ -0,0 +1,63 @@
+= Configuration & Environment Overrides
+
+The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).
+
+IMPORTANT: Overriding certain properties which are set by the operator (such as the ports) can interfere with the operator and lead to problems.
+
+== Configuration Properties
+
+For a role or role group, at the same level as `config`, you can specify `configOverrides` for the `server.properties`. For example, if you want to set `auto.create.topics.enable` to disable automatic topic creation, it can be configured in the `KafkaCluster` resource like so:
+
+[source,yaml]
+----
+brokers:
+  roleGroups:
+    default:
+      configOverrides:
+        server.properties:
+          auto.create.topics.enable: "false"
+      replicas: 1
+----
+
+Just as for the `config`, it is possible to specify this at role level as well:
+
+[source,yaml]
+----
+brokers:
+  configOverrides:
+    server.properties:
+      auto.create.topics.enable: "false"
+  roleGroups:
+    default:
+      replicas: 1
+----
+
+All override property values must be strings.
+
+For a full list of configuration options, refer to the Apache Kafka https://kafka.apache.org/documentation/#configuration[Configuration Reference].
+ +== Environment Variables + +In a similar fashion, environment variables can be (over)written. For example per role group: + +[source,yaml] +---- +servers: + roleGroups: + default: + envOverrides: + MY_ENV_VAR: "MY_VALUE" + replicas: 1 +---- + +or per role: + +[source,yaml] +---- +servers: + envOverrides: + MY_ENV_VAR: "MY_VALUE" + roleGroups: + default: + replicas: 1 +---- diff --git a/docs/modules/kafka/pages/discovery.adoc b/docs/modules/kafka/pages/usage-guide/discovery.adoc similarity index 97% rename from docs/modules/kafka/pages/discovery.adoc rename to docs/modules/kafka/pages/usage-guide/discovery.adoc index cbf22694..23224f22 100644 --- a/docs/modules/kafka/pages/discovery.adoc +++ b/docs/modules/kafka/pages/usage-guide/discovery.adoc @@ -3,6 +3,7 @@ :brokerPort: 9092 = Discovery +:page-aliases: discovery.adoc The Stackable Operator for Apache Kafka publishes a discovery https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmap-v1-core[`ConfigMap`], which exposes a client configuration bundle that allows access to the Apache Kafka cluster. 
diff --git a/docs/modules/kafka/pages/usage-guide/index.adoc b/docs/modules/kafka/pages/usage-guide/index.adoc
new file mode 100644
index 00000000..4d220d38
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/index.adoc
@@ -0,0 +1,2 @@
+= Usage guide
+:page-aliases: usage.adoc
\ No newline at end of file
diff --git a/docs/modules/kafka/pages/usage-guide/logging.adoc b/docs/modules/kafka/pages/usage-guide/logging.adoc
new file mode 100644
index 00000000..a19ccd95
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/logging.adoc
@@ -0,0 +1,18 @@
+= Log aggregation
+
+The logs can be forwarded to a Vector log aggregator by providing a discovery
+ConfigMap for the aggregator and by enabling the log agent:
+
+[source,yaml]
+----
+spec:
+  clusterConfig:
+    vectorAggregatorConfigMapName: vector-aggregator-discovery
+  brokers:
+    config:
+      logging:
+        enableVectorAgent: true
+----
+
+Further information on how to configure logging can be found in
+xref:home:concepts:logging.adoc[].
diff --git a/docs/modules/kafka/pages/usage-guide/monitoring.adoc b/docs/modules/kafka/pages/usage-guide/monitoring.adoc
new file mode 100644
index 00000000..6be233ea
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/monitoring.adoc
@@ -0,0 +1,4 @@
+= Monitoring
+
+The managed Kafka instances are automatically configured to export Prometheus metrics. See
+xref:home:operators:monitoring.adoc[] for more details.
diff --git a/docs/modules/kafka/pages/usage-guide/pod-placement.adoc b/docs/modules/kafka/pages/usage-guide/pod-placement.adoc
new file mode 100644
index 00000000..a21b902f
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/pod-placement.adoc
@@ -0,0 +1,22 @@
+= Pod Placement
+
+You can configure Pod placement for Kafka brokers as described in xref:concepts:pod_placement.adoc[].
+
+By default, the operator configures the following Pod placement constraints:
+
+[source,yaml]
+----
+affinity:
+  podAntiAffinity:
+    preferredDuringSchedulingIgnoredDuringExecution:
+    - podAffinityTerm:
+        labelSelector:
+          matchLabels:
+            app.kubernetes.io/component: broker
+            app.kubernetes.io/instance: cluster-name
+            app.kubernetes.io/name: kafka
+        topologyKey: kubernetes.io/hostname
+      weight: 70
+----
+
+In the example above, `cluster-name` is the name of the Kafka custom resource that owns this Pod.
diff --git a/docs/modules/kafka/pages/usage-guide/security.adoc b/docs/modules/kafka/pages/usage-guide/security.adoc
new file mode 100644
index 00000000..710a0d26
--- /dev/null
+++ b/docs/modules/kafka/pages/usage-guide/security.adoc
@@ -0,0 +1,161 @@
+= Security
+
+== Encryption
+
+The internal and client communication can be encrypted using TLS. This requires the xref:secret-operator::index.adoc[Secret Operator] to be present in order to provide certificates. The certificates used can be changed in a top-level config.
+
+[source,yaml]
+----
+---
+apiVersion: kafka.stackable.tech/v1alpha1
+kind: KafkaCluster
+metadata:
+  name: simple-kafka
+spec:
+  image:
+    productVersion: 3.3.1
+    stackableVersion: "23.4.0-rc2"
+  clusterConfig:
+    zookeeperConfigMapName: simple-kafka-znode
+    tls:
+      serverSecretClass: tls # <1>
+      internalSecretClass: kafka-internal-tls # <2>
+  brokers:
+    roleGroups:
+      default:
+        replicas: 3
+----
+<1> The `spec.clusterConfig.tls.serverSecretClass` refers to the client-to-server encryption. Defaults to the `tls` secret. Can be deactivated by setting `serverSecretClass` to `null`.
+<2> The `spec.clusterConfig.tls.internalSecretClass` refers to the broker-to-broker internal encryption. Defaults to `tls` if not explicitly set. May be disabled by setting `internalSecretClass` to `null`.
+ +The `tls` secret is deployed from the xref:secret-operator::index.adoc[Secret Operator] and looks like this: + +[source,yaml] +---- +--- +apiVersion: secrets.stackable.tech/v1alpha1 +kind: SecretClass +metadata: + name: tls +spec: + backend: + autoTls: + ca: + secret: + name: secret-provisioner-tls-ca + namespace: default + autoGenerate: true +---- + +You can create your own secrets and reference them e.g. in the `spec.clusterConfig.tls.serverSecretClass` or `spec.clusterConfig.tls.internalSecretClass` to use different certificates. + +== Authentication + +The internal or broker-to-broker communication is authenticated via TLS. In order to enforce TLS authentication for client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the xref:commons-operator::index.adoc[Commons Operator]. + +[source,yaml] +---- +--- +apiVersion: authentication.stackable.tech/v1alpha1 +kind: AuthenticationClass +metadata: + name: kafka-client-tls # <2> +spec: + provider: + tls: + clientCertSecretClass: kafka-client-auth-secret # <3> +--- +apiVersion: secrets.stackable.tech/v1alpha1 +kind: SecretClass +metadata: + name: kafka-client-auth-secret # <4> +spec: + backend: + autoTls: + ca: + secret: + name: secret-provisioner-tls-kafka-client-ca + namespace: default + autoGenerate: true +--- +apiVersion: kafka.stackable.tech/v1alpha1 +kind: KafkaCluster +metadata: + name: simple-kafka +spec: + image: + productVersion: 3.3.1 + stackableVersion: "23.4.0-rc2" + clusterConfig: + authentication: + - authenticationClass: kafka-client-tls # <1> + zookeeperConfigMapName: simple-kafka-znode + brokers: + roleGroups: + default: + replicas: 3 +---- +<1> The `clusterConfig.authentication.authenticationClass` can be set to use TLS for authentication. This is optional. +<2> The referenced `AuthenticationClass` that references a `SecretClass` to provide certificates. +<3> The reference to a `SecretClass`. 
+<4> The `SecretClass` that is referenced by the `AuthenticationClass` in order to provide certificates. + + +== [[authorization]]Authorization + +If you wish to include integration with xref:opa::index.adoc[Open Policy Agent] and already have an OPA cluster, then you can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package. The package is optional and will default to the `metadata.name` field: + +[source,yaml] +---- +--- +apiVersion: kafka.stackable.tech/v1alpha1 +kind: KafkaCluster +metadata: + name: simple-kafka +spec: + image: + productVersion: 3.3.1 + stackableVersion: "23.4.0-rc2" + clusterConfig: + authorization: + opa: + configMapName: simple-opa + package: kafka + zookeeperConfigMapName: simple-kafka-znode + brokers: + roleGroups: + default: + replicas: 1 +---- + +You can change some opa cache properties by overriding: + +[source,yaml] +---- +--- +apiVersion: kafka.stackable.tech/v1alpha1 +kind: KafkaCluster +metadata: + name: simple-kafka +spec: + image: + productVersion: 3.3.1 + stackableVersion: "23.4.0-rc2" + clusterConfig: + authorization: + opa: + configMapName: simple-opa + package: kafka + zookeeperConfigMapName: simple-kafka-znode + brokers: + configOverrides: + server.properties: + opa.authorizer.cache.initial.capacity: "100" + opa.authorizer.cache.maximum.size: "100" + opa.authorizer.cache.expire.after.seconds: "10" + roleGroups: + default: + replicas: 1 +---- + +A full list of settings and their respective defaults can be found https://github.com/anderseknert/opa-kafka-plugin[here]. 
diff --git a/docs/modules/kafka/pages/usage-guide/storage-resources.adoc b/docs/modules/kafka/pages/usage-guide/storage-resources.adoc new file mode 100644 index 00000000..d215dd5f --- /dev/null +++ b/docs/modules/kafka/pages/usage-guide/storage-resources.adoc @@ -0,0 +1,44 @@ += Storage and resource configuration + +== Storage for data volumes + +You can mount volumes where data is stored by specifying https://kubernetes.io/docs/concepts/storage/persistent-volumes[PersistentVolumeClaims] for each individual role group: + +[source,yaml] +---- +brokers: + roleGroups: + default: + config: + resources: + storage: + data: + capacity: 2Gi +---- + +In the above example, all Kafka brokers in the default group will store data (the location of the property `log.dirs`) on a `2Gi` volume. + +By default, in case nothing is configured in the custom resource for a certain role group, each Pod will have a `1Gi` large local volume mount for the data location. + +== Resource Requests + +include::home:concepts:stackable_resource_requests.adoc[] + +If no resource requests are configured explicitly, the Kafka operator uses the following defaults: + +[source,yaml] +---- +brokers: + roleGroups: + default: + config: + resources: + memory: + limit: '2Gi' + cpu: + min: '500m' + max: '4' + storage: + log_dirs: + capacity: 1Gi +---- diff --git a/docs/modules/kafka/pages/usage.adoc b/docs/modules/kafka/pages/usage.adoc deleted file mode 100644 index a1e33987..00000000 --- a/docs/modules/kafka/pages/usage.adoc +++ /dev/null @@ -1,348 +0,0 @@ -= Usage - -If you are not installing the operator using Helm then after installation the CRD for this operator must be created: - - kubectl apply -f /etc/stackable/kafka-operator/crd/kafkacluster.crd.yaml - -To create an Apache Kafka cluster named `simple-kafka` assuming that you already have a Zookeeper cluster named `simple-zk`: - -[source,yaml] ----- ---- -apiVersion: zookeeper.stackable.tech/v1alpha1 -kind: ZookeeperZnode -metadata: - name: 
simple-kafka-znode -spec: - clusterRef: - name: simple-zk - namespace: default ---- -apiVersion: kafka.stackable.tech/v1alpha1 -kind: KafkaCluster -metadata: - name: simple-kafka -spec: - image: - productVersion: 3.3.1 - stackableVersion: "23.4.0-rc2" - clusterConfig: - zookeeperConfigMapName: simple-kafka-znode - brokers: - roleGroups: - default: - replicas: 1 ----- -If you wish to include integration with xref:opa::index.adoc[Open Policy Agent] and already have an OPA cluster, then you can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package. The package is optional and will default to the `metadata.name` field: - -[source,yaml] ----- ---- -apiVersion: kafka.stackable.tech/v1alpha1 -kind: KafkaCluster -metadata: - name: simple-kafka -spec: - image: - productVersion: 3.3.1 - stackableVersion: "23.4.0-rc2" - clusterConfig: - authorization: - opa: - configMapName: simple-opa - package: kafka - zookeeperConfigMapName: simple-kafka-znode - brokers: - roleGroups: - default: - replicas: 1 ----- - -You can change some opa cache properties by overriding: - -[source,yaml] ----- ---- -apiVersion: kafka.stackable.tech/v1alpha1 -kind: KafkaCluster -metadata: - name: simple-kafka -spec: - image: - productVersion: 3.3.1 - stackableVersion: "23.4.0-rc2" - clusterConfig: - authorization: - opa: - configMapName: simple-opa - package: kafka - zookeeperConfigMapName: simple-kafka-znode - brokers: - configOverrides: - server.properties: - opa.authorizer.cache.initial.capacity: "100" - opa.authorizer.cache.maximum.size: "100" - opa.authorizer.cache.expire.after.seconds: "10" - roleGroups: - default: - replicas: 1 ----- - -A full list of settings and their respective defaults can be found https://github.com/anderseknert/opa-kafka-plugin[here]. - -== Monitoring - -The managed Kafka instances are automatically configured to export Prometheus metrics. See -xref:home:operators:monitoring.adoc[] for more details. 
- -== Log aggregation - -The logs can be forwarded to a Vector log aggregator by providing a discovery -ConfigMap for the aggregator and by enabling the log agent: - -[source,yaml] ----- -spec: - clusterConfig: - vectorAggregatorConfigMapName: vector-aggregator-discovery - brokers: - config: - logging: - enableVectorAgent: true ----- - -Further information on how to configure logging, can be found in -xref:home:concepts:logging.adoc[]. - -== Encryption - -The internal and client communication can be encrypted TLS. This requires the xref:secret-operator::index.adoc[Secret Operator] to be present in order to provide certificates. The utilized certificates can be changed in a top-level config. - -[source,yaml] ----- ---- -apiVersion: kafka.stackable.tech/v1alpha1 -kind: KafkaCluster -metadata: - name: simple-kafka -spec: - image: - productVersion: 3.3.1 - stackableVersion: "23.4.0-rc2" - clusterConfig: - zookeeperConfigMapName: simple-kafka-znode - tls: - serverSecretClass: tls # <1> - internalSecretClass: kafka-internal-tls # <2> - brokers: - roleGroups: - default: - replicas: 3 ----- -<1> The `spec.clusterConfig.tls.serverSecretClass` refers to the client-to-server encryption. Defaults to the `tls` secret. Can be deactivated by setting `serverSecretClass` to `null`. -<2> The `spec.clusterConfig.tls.internalSecretClass` refers to the broker-to-broker internal encryption. This must be explicitly set or defaults to `tls`. May be disabled by setting `internalSecretClass` to `null`. - -The `tls` secret is deployed from the xref:secret-operator::index.adoc[Secret Operator] and looks like this: - -[source,yaml] ----- ---- -apiVersion: secrets.stackable.tech/v1alpha1 -kind: SecretClass -metadata: - name: tls -spec: - backend: - autoTls: - ca: - secret: - name: secret-provisioner-tls-ca - namespace: default - autoGenerate: true ----- - -You can create your own secrets and reference them e.g. 
in the `spec.clusterConfig.tls.serverSecretClass` or `spec.clusterConfig.tls.internalSecretClass` to use different certificates. - -== Authentication - -The internal or broker-to-broker communication is authenticated via TLS. In order to enforce TLS authentication for client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the xref:commons-operator::index.adoc[Commons Operator]. - -[source,yaml] ----- ---- -apiVersion: authentication.stackable.tech/v1alpha1 -kind: AuthenticationClass -metadata: - name: kafka-client-tls # <2> -spec: - provider: - tls: - clientCertSecretClass: kafka-client-auth-secret # <3> ---- -apiVersion: secrets.stackable.tech/v1alpha1 -kind: SecretClass -metadata: - name: kafka-client-auth-secret # <4> -spec: - backend: - autoTls: - ca: - secret: - name: secret-provisioner-tls-kafka-client-ca - namespace: default - autoGenerate: true ---- -apiVersion: kafka.stackable.tech/v1alpha1 -kind: KafkaCluster -metadata: - name: simple-kafka -spec: - image: - productVersion: 3.3.1 - stackableVersion: "23.4.0-rc2" - clusterConfig: - authentication: - - authenticationClass: kafka-client-tls # <1> - zookeeperConfigMapName: simple-kafka-znode - brokers: - roleGroups: - default: - replicas: 3 ----- -<1> The `clusterConfig.authentication.authenticationClass` can be set to use TLS for authentication. This is optional. -<2> The referenced `AuthenticationClass` that references a `SecretClass` to provide certificates. -<3> The reference to a `SecretClass`. -<4> The `SecretClass` that is referenced by the `AuthenticationClass` in order to provide certificates. - -== Configuration & Environment Overrides - -The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role). 
- -IMPORTANT: Overriding certain properties which are set by operator (such as the ports) can interfere with the operator and can lead to problems. - -=== Configuration Properties - -For a role or role group, at the same level of `config`, you can specify: `configOverrides` for the `server.properties`. For example, if you want to set the `auto.create.topics.enable` to disable automatic topic creation, it can be configured in the `KafkaCluster` resource like so: - -[source,yaml] ----- -brokers: - roleGroups: - default: - configOverrides: - server.properties: - auto.create.topics.enable: "false" - replicas: 1 ----- - -Just as for the `config`, it is possible to specify this at role level as well: - -[source,yaml] ----- -brokers: - configOverrides: - server.properties: - auto.create.topics.enable: "false" - roleGroups: - default: - replicas: 1 ----- - -All override property values must be strings. - -For a full list of configuration options we refer to the Apache Kafka https://kafka.apache.org/documentation/#configuration[Configuration Reference]. - -=== Environment Variables - -In a similar fashion, environment variables can be (over)written. For example per role group: - -[source,yaml] ----- -servers: - roleGroups: - default: - envOverrides: - MY_ENV_VAR: "MY_VALUE" - replicas: 1 ----- - -or per role: - -[source,yaml] ----- -servers: - envOverrides: - MY_ENV_VAR: "MY_VALUE" - roleGroups: - default: - replicas: 1 ----- - -=== Storage for data volumes - -You can mount volumes where data is stored by specifying https://kubernetes.io/docs/concepts/storage/persistent-volumes[PersistentVolumeClaims] for each individual role group: - -[source,yaml] ----- -brokers: - roleGroups: - default: - config: - resources: - storage: - data: - capacity: 2Gi ----- - -In the above example, all Kafka brokers in the default group will store data (the location of the property `log.dirs`) on a `2Gi` volume. 
- -By default, in case nothing is configured in the custom resource for a certain role group, each Pod will have a `1Gi` large local volume mount for the data location. - -=== Resource Requests - -// The "nightly" version is needed because the "include" directive searches for -// files in the "stable" version by default. -// TODO: remove the "nightly" version after the next platform release (current: 22.09) -include::nightly@home:concepts:stackable_resource_requests.adoc[] - -If no resource requests are configured explicitly, the Kafka operator uses the following defaults: - -[source,yaml] ----- -brokers: - roleGroups: - default: - config: - resources: - memory: - limit: '2Gi' - cpu: - min: '500m' - max: '4' - storage: - log_dirs: - capacity: 1Gi ----- - -=== Pod Placement - -You can configure Pod placement for Kafka brokers as described in xref:concepts:pod_placement.adoc[]. - -By default, the operator configures the following Pod placement constraints: - -[source,yaml] ----- -affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - podAffinityTerm: - labelSelector: - matchLabels: - app.kubernetes.io/component: broker - app.kubernetes.io/instance: cluster-name - app.kubernetes.io/name: kafka - topologyKey: kubernetes.io/hostname - weight: 70 ----- - -In the example above `cluster-name` is the name of the Kafka custom resource that owns this Pod. 
diff --git a/docs/modules/kafka/partials/nav.adoc b/docs/modules/kafka/partials/nav.adoc index c71dd8d1..db070e7a 100644 --- a/docs/modules/kafka/partials/nav.adoc +++ b/docs/modules/kafka/partials/nav.adoc @@ -1,8 +1,12 @@ * xref:kafka:getting_started/index.adoc[] ** xref:kafka:getting_started/installation.adoc[] ** xref:kafka:getting_started/first_steps.adoc[] -* xref:kafka:dependencies.adoc[] +* xref:kafka:usage-guide/index.adoc[] +** xref:kafka:usage-guide/pod-placement.adoc[] +** xref:kafka:usage-guide/storage-resources.adoc[] +** xref:kafka:usage-guide/security.adoc[] +** xref:kafka:usage-guide/discovery.adoc[] +** xref:kafka:usage-guide/monitoring.adoc[] +** xref:kafka:usage-guide/logging.adoc[] +** xref:kafka:usage-guide/configuration-environment-overrides.adoc[] * xref:kafka:configuration.adoc[] -* xref:kafka:usage.adoc[] -* Concepts -** xref:kafka:discovery.adoc[] From e9f4b2b95539efaa3244e2e43e144ad6883d0e97 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Tue, 28 Mar 2023 17:42:58 +0200 Subject: [PATCH 02/11] Added diagram --- .../kafka/images/kafka_overview.drawio.svg | 4 +++ docs/modules/kafka/pages/index.adoc | 35 ++++++++++++++++--- .../kafka/pages/usage-guide/discovery.adoc | 10 +++--- 3 files changed, 39 insertions(+), 10 deletions(-) create mode 100644 docs/modules/kafka/images/kafka_overview.drawio.svg diff --git a/docs/modules/kafka/images/kafka_overview.drawio.svg b/docs/modules/kafka/images/kafka_overview.drawio.svg new file mode 100644 index 00000000..77b4d323 --- /dev/null +++ b/docs/modules/kafka/images/kafka_overview.drawio.svg @@ -0,0 +1,4 @@ + + + +
Pod
<name>-broker-<rg1>-1
Pod...
Kafka
Operator
Kafka...
Pod
<name>-broker-<rg1>-0
Pod...
ConfigMap
<name>-broker-<rg1>
ConfigMap...
KafkaCluster
<name>
KafkaCluster...
create
create
read
read
Legend
Legend
Operator
Operator
Resource
Resource
Custom
Resource
Custom...
role group
<rg1>
role group...
Service
<name>-broker-<rg2>
Service...
Pod
<name>-broker-<rg2>-0
Pod...
Service
<name>
Service...
role
broker
role...
references
references
role group
<rg2>
role group...
Service
<name>-broker-<rg1>-1
Service...
Service
<name>-broker-<rg1>-0
Service...
Service
<name>-broker-<rg2>-0
Service...
StatefulSet
<name>-broker-<rg2>
StatefulSet...
ConfigMap
<name>-broker-<rg2>
ConfigMap...
StatefulSet
<name>-broker-<rg1>
StatefulSet...
Service
<name>-broker-<rg1>
Service...
Text is not SVG - cannot display
\ No newline at end of file diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 5a5e1ff1..7b5a80f4 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -1,19 +1,44 @@ = Stackable Operator for Apache Kafka +:description: The Stackable Operator for Apache Superset is a Kubernetes operator that can manage Apache Kafka clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Kafka versions. +:keywords: Stackable Operator, Apache Kafka, Kubernetes, operator, SQL, engineer, broker, big data, CRD, StatefulSet, ConfigMap, Service, Druid, ZooKeeper, NiFi, S3, demo, version This is an operator for Kubernetes that can manage https://kafka.apache.org/[Apache Kafka] clusters. -WARNING: This operator only works with images from the https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fkafka[Stackable] repository +== Getting started -= Dependencies +xref:kafka:getting_started/index.adoc[] -== ZooKeeper +== Resources -Kafka currently requires ZooKeeper for coordination purposes. +The _KafkaCluster_ custom resource defines all your Kafka cluster configuration. It defines a single `broker` xref:concepts:roles-and-role-groups.adoc[role]. + +image::kafka_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the operator.] + +For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role the Operator creates a StatefulSet and a ConfigMap to be used by the Pods of the StatefulSet. Multiple Services are created, one at role level, one per role group as well as one for every individual Pod. + +== Dependencies + +Kafka currently requires Apache ZooKeeper for coordination purposes. NOTE: This will change https://cwiki.apache.org/confluence/display/KAFKA/KIP-500[in the future]. -Which means a reference to an existing ZooKeeper ensemble must be provided. 
+
+
+== Connections to other products
+
+Input from NiFi
+
+Output to Druid
+
+== [[demos]]Demos
+
+xref:stackablectl::index.adoc[] supports installing xref:stackablectl::demos/index.adoc[] with a single command. The demos are complete data pipelines which showcase multiple components of the Stackable platform working together and which you can try out interactively. Both demos below inject data into Kafka using NiFi and use Druid as a destination for the Kafka topics.
+
+=== Waterlevel Demo
+
+The xref:stackablectl::demos/nifi-kafka-druid-water-level-data.adoc[] demo uses data from https://www.pegelonline.wsv.de/webservice/ueberblick[PEGELONLINE] to visualize water levels in rivers and coastal regions of Germany from historic and real-time data.
+
+=== Earthquake Demo
+The xref:stackablectl::demos/nifi-kafka-druid-earthquake-data.adoc[] demo ingests https://earthquake.usgs.gov/[earthquake data] into a similar pipeline as is used in the waterlevel demo.
 
 == Supported Versions
diff --git a/docs/modules/kafka/pages/usage-guide/discovery.adoc b/docs/modules/kafka/pages/usage-guide/discovery.adoc
index 23224f22..eb30da5b 100644
--- a/docs/modules/kafka/pages/usage-guide/discovery.adoc
+++ b/docs/modules/kafka/pages/usage-guide/discovery.adoc
@@ -5,7 +5,7 @@
 = Discovery
 :page-aliases: discovery.adoc
 
-The Stackable Operator for Apache Kafka publishes a discovery https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmap-v1-core[`ConfigMap`], which exposes a client configuration bundle that allows access to the Apache Kafka cluster.
+The Stackable Operator for Apache Kafka publishes a xref:concepts:service_discovery.adoc[service discovery ConfigMap] which exposes a client configuration bundle that allows access to the Apache Kafka cluster. The bundle includes a connection string to access the Kafka broker service. This string may be used by other operators or tools to configure their products with access to Kafka.
This is limited to internal cluster access. @@ -23,14 +23,14 @@ metadata: spec: [...] ---- -<1> The name of the Kafka cluster, which is also the name of the created discovery `ConfigMap`. -<2> The namespace of the discovery `ConfigMap`. +<1> The name of the Kafka cluster, which is also the name of the created discovery ConfigMap. +<2> The namespace of the discovery ConfigMap. -The resulting discovery `ConfigMap` is `{namespace}/{clusterName}`. +The resulting discovery ConfigMap is `{namespace}/{clusterName}`. == Contents -The `{namespace}/{clusterName}` discovery `ConfigMap` contains the following fields where `{clusterName}` represents the name and `{namespace}` the namespace of the cluster: +The `{namespace}/{clusterName}` discovery ConfigMap contains the following fields where `{clusterName}` represents the name and `{namespace}` the namespace of the cluster: `KAFKA`:: ==== From 03a85fb0d9d2c8202bf614ad9e0f8b52af4003bf Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Wed, 29 Mar 2023 14:19:18 +0200 Subject: [PATCH 03/11] some update --- docs/modules/kafka/images/kafka_overview.drawio.svg | 2 +- docs/modules/kafka/pages/index.adoc | 11 ++++++++--- 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/docs/modules/kafka/images/kafka_overview.drawio.svg b/docs/modules/kafka/images/kafka_overview.drawio.svg index 77b4d323..ee495307 100644 --- a/docs/modules/kafka/images/kafka_overview.drawio.svg +++ b/docs/modules/kafka/images/kafka_overview.drawio.svg @@ -1,4 +1,4 @@ -
[SVG text labels from the previous kafka_overview.drawio.svg: KafkaCluster <name>, Kafka Operator, role broker with role groups <rg1> and <rg2>, per-role-group StatefulSets, ConfigMaps and Services, per-Pod Services, role-level Service <name>; legend: Operator, Resource, Custom Resource; create/read/references arrows; "Text is not SVG - cannot display"]
\ No newline at end of file +
[SVG text labels from the updated kafka_overview.drawio.svg: the same resources as before, plus a ConfigMap <name> labelled "discovery ConfigMap"; "Text is not SVG - cannot display"]
\ No newline at end of file diff --git a/docs/modules/kafka/pages/index.adoc index 7b5a80f4..f2b01552 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -2,11 +2,14 @@ :description: The Stackable Operator for Apache Kafka is a Kubernetes operator that can manage Apache Kafka clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Kafka versions. :keywords: Stackable Operator, Apache Kafka, Kubernetes, operator, SQL, engineer, broker, big data, CRD, StatefulSet, ConfigMap, Service, Druid, ZooKeeper, NiFi, S3, demo, version -This is an operator for Kubernetes that can manage https://kafka.apache.org/[Apache Kafka] clusters. +The Stackable Operator for Apache Kafka is an operator that can deploy and manage https://kafka.apache.org/[Apache Kafka] clusters on Kubernetes. +// what is Kafka? +Kafka is a distributed event streaming platform ... TODO + == Getting started -xref:kafka:getting_started/index.adoc[] +Follow the xref:kafka:getting_started/index.adoc[] which will guide you through installing the Stackable Kafka and ZooKeeper Operators, setting up ZooKeeper and Kafka, and testing your Kafka cluster using kcat. == Resources @@ -14,7 +17,9 @@ The _KafkaCluster_ custom resource defines all your Kafka cluster configuration. image::kafka_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the operator.] -For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role the Operator creates a StatefulSet and a ConfigMap to be used by the Pods of the StatefulSet. Multiple Services are created, one at role level, one per role group as well as one for every individual Pod. +For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role the Operator creates a StatefulSet.
Multiple Services are created, one at role level, one per role group as well as one for every individual Pod, to allow accessing the whole Kafka cluster, parts of it or even individual brokers. + +For every StatefulSet (role group) a ConfigMap is deployed containing a `log4j.properties` file for xref:usage-guide/logging.adoc[logging] configuration and a `server.properties` file containing the whole Kafka configuration which is derived from the KafkaCluster resource. == Dependencies From 1e4440d37865fb27d8f1c2ef7f630f6ef06a2ec7 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 10:55:34 +0200 Subject: [PATCH 04/11] some text changes --- docs/modules/kafka/pages/index.adoc | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index f2b01552..4093632b 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -4,8 +4,7 @@ The Stackable Operator for Apache Kafka is an operator that can deploy and manage https://kafka.apache.org/[Apache Kafka] clusters on Kubernetes. // what is Kafka? -Kafka is a distributed event streaming platform ... TODO - +Apache Kafka is a distributed streaming platform designed to handle large volumes of data in real time. It is commonly used for real-time data processing, data ingestion, event streaming, and messaging between applications. == Getting started @@ -21,19 +20,19 @@ For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in t For every StatefulSet (role group) a ConfigMap is deployed containing a `log4j.properties` file for xref:usage-guide/logging.adoc[logging] configuration and a `server.properties` file containing the whole Kafka configuration which is derived from the KafkaCluster resource. -== Dependencies +The Operator creates a xref:concepts:service_discovery.adoc[] for the whole KafkaCluster which references the Service for the whole cluster.
Other operators use this ConfigMap to connect to a Kafka cluster simply by name and it can also be used by custom third party applications to find the endpoint to connect to the kafka. == Dependencies -Kafka currently requires Apache ZooKeeper for coordination purposes. -NOTE: This will change https://cwiki.apache.org/confluence/display/KAFKA/KIP-500[in the future]. -Which means a reference to an existing ZooKeeper ensemble must be provided. +Kafka requires xref:zookeeper:index.adoc[Apache ZooKeeper] for coordination purposes (Although it will not be needed in the future as it will be replaced with a https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum[built-in solution]). == Connections to other products -Input from NiFi - -Output to Druid +Since Kafka often takes on a bridging role, many other products connect to it. In the <<demos>> below you will find example data pipelines that use xref:nifi:index.adoc[Apache NiFi with the Stackable Operator] to write to Kafka and xref:druid:index.adoc[Apache Druid with the Stackable Operator] to read from Kafka. But you can also connect xref:spark-k8s:index.adoc[Apache Spark] or custom Jobs written in various languages to it. == [[demos]]Demos -xref:stackablectl::index.adoc[] supports installing xref:stackablectl::demos/index.adoc[] with a single command. The demos are complete data pipelines which showcase multiple components of the Stackable platform working together and which you can try out interactively. Both demos below inject data into Kafka using NiFi and use Druid as a destination for the Kafka topics. +xref:stackablectl::index.adoc[] supports installing xref:stackablectl::demos/index.adoc[] with a single command. The demos are complete data pipelines which showcase multiple components of the Stackable platform working together and which you can try out interactively. Both demos below inject data into Kafka using NiFi and read from the Kafka topics using Druid.
=== Waterlevel Demo From 822bb39a3c0480554bf479e48ff4ba3885dbdbb6 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 11:09:07 +0200 Subject: [PATCH 05/11] updated changelog --- CHANGELOG.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 678931af..47095b16 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,10 +14,12 @@ All notable changes to this project will be documented in this file. - operator-rs: 0.30.1 -> 0.33.0 ([#545]). - Bumped stackable versions to "23.4.0-rc1" ([#545]). - Bumped kafka stackable versions to "23.4.0-rc2" ([#547]). +- Updated landing page and restructured usage guide ([#573]). [#545]: https://github.com/stackabletech/kafka-operator/pull/545 [#547]: https://github.com/stackabletech/kafka-operator/pull/547 [#557]: https://github.com/stackabletech/kafka-operator/pull/557 +[#573]: https://github.com/stackabletech/kafka-operator/pull/573 ## [23.1.0] - 2023-01-23 From 6f917604585897e7ab70066b3cff3e3c26d625f2 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:51:48 +0200 Subject: [PATCH 06/11] Update docs/modules/kafka/pages/index.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/index.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 4093632b..204d90da 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -12,7 +12,7 @@ Follow the xref:kafka:getting_started/index.adoc[] which will guide you through == Resources -The _KafkaCluster_ custom resource defines all your Kafka cluster configuration. It defines a single `broker` xref:concepts:roles-and-role-groups.adoc[role]. +The _KafkaCluster_ custom resource contains your Kafka cluster configuration. It defines a single `broker` xref:concepts:roles-and-role-groups.adoc[role].
image::kafka_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the operator.] From 92309fb6a772042c940cd071b3bc6f1fb650fb8b Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:52:10 +0200 Subject: [PATCH 07/11] Update docs/modules/kafka/pages/index.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/index.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 204d90da..e7020237 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -16,7 +16,7 @@ The _KafkaCluster_ custom resource contains your Kafka cluster configuration. It image::kafka_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the operator.] -For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role the Operator creates a StatefulSet. Multiple Services are created, one at role level, one per role group as well as one for every individual Pod, to allow accessing the whole Kafka cluster, parts of it or even individual brokers. +For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in the `broker` role the Operator creates a StatefulSet. Multiple Services are created - one at role level, one per role group as well as one for every individual Pod - to allow access to the entire Kafka cluster, parts of it or just individual brokers. For every StatefulSet (role group) a ConfigMap is deployed containing a `log4j.properties` file for xref:usage-guide/logging.adoc[logging] configuration and a `server.properties` file containing the whole Kafka configuration which is derived from the KafkaCluster resource. 
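To make the role and role-group structure described above concrete, here is a minimal KafkaCluster sketch. The field names and version are assumptions based on common Stackable CRD conventions, not taken from this patch series:

```yaml
# Hypothetical minimal KafkaCluster; names and version are illustrative.
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: "3.3.1"            # assumed version; check the supported list
  clusterConfig:
    zookeeperConfigMapName: simple-kafka-znode
  brokers:                             # the single `broker` role
    roleGroups:
      default:                         # one role group -> one StatefulSet, ConfigMap and Services
        replicas: 2
      secondary:                       # a second role group
        replicas: 1
```

Each role group would map to its own StatefulSet and ConfigMap, plus the per-group and per-Pod Services described above.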
From 8437547d0782878731ba1edc329966a483eb18fc Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:56:08 +0200 Subject: [PATCH 08/11] Update docs/modules/kafka/pages/index.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/index.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index e7020237..1cb05d2e 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -20,7 +20,7 @@ For every xref:concepts:roles-and-role-groups.adoc#_role_groups[role group] in t For every StatefulSet (role group) a ConfigMap is deployed containing a `log4j.properties` file for xref:usage-guide/logging.adoc[logging] configuration and a `server.properties` file containing the whole Kafka configuration which is derived from the KafkaCluster resource. -The Operator creates a xref:concepts:service_discovery.adoc[] for the whole KafkaCluster which references the Service for the whole cluster. Other operators use this ConfigMap to connect to a Kafka cluster simply by name and it can also be used by custom third party applications to find the endpoint to connect to the kafka. +The Operator creates a xref:concepts:service_discovery.adoc[] for the whole KafkaCluster which references the Service for the whole cluster. Other operators use this ConfigMap to connect to a Kafka cluster simply by name and it can also be used by custom third party applications to find the connection endpoint. 
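The discovery ConfigMap described in this patch might look roughly like the following; the `KAFKA` key comes from the discovery documentation above, while the address format and names are illustrative assumptions:

```yaml
# Hypothetical discovery ConfigMap published for a KafkaCluster named simple-kafka.
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-kafka        # same name as the KafkaCluster
  namespace: default
data:
  # Connection string for the broker Service; clients use this to bootstrap.
  KAFKA: simple-kafka-broker-default-0.default.svc.cluster.local:9092
```

Other operators or custom applications can mount or read this ConfigMap instead of hard-coding broker addresses.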
== Dependencies From 2278ffe63e580cb41ee64e0a3e1cb4680bd4b7ff Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:56:20 +0200 Subject: [PATCH 09/11] Update docs/modules/kafka/pages/index.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/index.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 1cb05d2e..17aa6a83 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -24,7 +24,7 @@ The Operator creates a xref:concepts:service_discovery.adoc[] for the whole Kafk == Dependencies -Kafka requires xref:zookeeper:index.adoc[Apache ZooKeeper] for coordination purposes (Although it will not be needed in the future as it will be replaced with a https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum[built-in solution]). +Kafka requires xref:zookeeper:index.adoc[Apache ZooKeeper] for coordination purposes (it will not be needed in the future as it will be replaced with a https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum[built-in solution]). 
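Because of this dependency, a Kafka cluster is typically given its own ZNode in an existing ZooKeeper ensemble. A sketch, assuming the ZookeeperZnode resource from the Stackable ZooKeeper operator (all names are illustrative):

```yaml
# Hypothetical ZNode giving the Kafka cluster its own chroot in ZooKeeper.
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-kafka-znode
spec:
  clusterRef:
    name: simple-zk       # name of an existing ZookeeperCluster
```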
== Connections to other products From 2dc8d8a157ba3c898cdd6fb8a9e006cd53bcd6a5 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:58:32 +0200 Subject: [PATCH 10/11] Update docs/modules/kafka/pages/index.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/index.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/index.adoc b/docs/modules/kafka/pages/index.adoc index 17aa6a83..6693b701 100644 --- a/docs/modules/kafka/pages/index.adoc +++ b/docs/modules/kafka/pages/index.adoc @@ -28,7 +28,7 @@ Kafka requires xref:zookeeper:index.adoc[Apache ZooKeeper] for coordination purp == Connections to other products -Since Kafka often takes on a bridging role, many other products connect to it. In the <<demos>> below you will find example data pipelines that use xref:nifi:index.adoc[Apache NiFi with the Stackable Operator] to write to Kafka and xref:druid:index.adoc[Apache Druid with the Stackable Operator] to read from Kafka. But you can also connect xref:spark-k8s:index.adoc[Apache Spark] or custom Jobs written in various languages to it. +Since Kafka often takes on a bridging role, many other products connect to it. In the <<demos>> below you will find example data pipelines that use xref:nifi:index.adoc[Apache NiFi with the Stackable Operator] to write to Kafka and xref:druid:index.adoc[Apache Druid with the Stackable Operator] to read from Kafka. But you can also connect using xref:spark-k8s:index.adoc[Apache Spark] or with custom Jobs written in various languages.
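As a sketch of how a custom job in another language might use the discovery information: the connection string from the discovery ConfigMap can be split into the bootstrap-server list that most Kafka client libraries expect. Both the helper and the sample value are hypothetical and only illustrate the idea:

```python
def parse_bootstrap_servers(kafka_value: str) -> list[str]:
    """Split a comma-separated KAFKA connection string into host:port entries."""
    return [entry.strip() for entry in kafka_value.split(",") if entry.strip()]

# A real job would read this value from the mounted discovery ConfigMap;
# the addresses below are purely illustrative.
servers = parse_bootstrap_servers(
    "simple-kafka-broker-default-0:9092, simple-kafka-broker-default-1:9092"
)
print(servers)  # ['simple-kafka-broker-default-0:9092', 'simple-kafka-broker-default-1:9092']
```

The resulting list can then be passed as the bootstrap servers parameter of whichever Kafka client library the job uses.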
== [[demos]]Demos From 403dcc004401f267613d863aaadf75dd3deea476 Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 30 Mar 2023 14:58:54 +0200 Subject: [PATCH 11/11] Update docs/modules/kafka/pages/usage-guide/storage-resources.adoc Co-authored-by: Andrew Kenworthy --- docs/modules/kafka/pages/usage-guide/storage-resources.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/modules/kafka/pages/usage-guide/storage-resources.adoc b/docs/modules/kafka/pages/usage-guide/storage-resources.adoc index d215dd5f..81da24b6 100644 --- a/docs/modules/kafka/pages/usage-guide/storage-resources.adoc +++ b/docs/modules/kafka/pages/usage-guide/storage-resources.adoc @@ -18,7 +18,7 @@ brokers: In the above example, all Kafka brokers in the default group will store data (the location of the property `log.dirs`) on a `2Gi` volume. -By default, in case nothing is configured in the custom resource for a certain role group, each Pod will have a `1Gi` large local volume mount for the data location. +If nothing is configured in the custom resource for a certain role group, then by default each Pod will have a `1Gi` local volume mounted for the data location. == Resource Requests
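The storage setting changed in this last patch sits inside the per-role-group resource configuration. A sketch of how storage and resource requests might be declared together; the exact keys, including `logDirs`, are assumptions and may differ from the actual CRD:

```yaml
# Hypothetical per-role-group resource configuration for brokers.
brokers:
  roleGroups:
    default:
      config:
        resources:
          cpu:
            min: 500m
            max: "2"
          memory:
            limit: 2Gi
          storage:
            logDirs:
              capacity: 2Gi      # backs the `log.dirs` data location
```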