“SASL-SSL”

https://github.com/confluentinc/confluent-kafka-go/issues/411
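The linked issue is about getting SASL_SSL working with confluent-kafka-go (which wraps librdkafka). A minimal client configuration sketch, with placeholder credentials; all values below are illustrative assumptions:

```properties
# librdkafka-style client properties (confluent-kafka-go passes these through)
security.protocol=SASL_SSL
# note: librdkafka spells this "sasl.mechanisms"; Java clients use "sasl.mechanism"
sasl.mechanisms=PLAIN
sasl.username=<api-key>
sasl.password=<api-secret>
# CA bundle path varies by base image; this is a common Debian/Ubuntu location
ssl.ca.location=/etc/ssl/certs/ca-certificates.crt
```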

Deploying the Zeebe connector using Helm Charts:

```yaml
# Save this file as "helm-values-kafka-connect.yaml"
# Run:
#    git clone https://github.com/confluentinc/cp-helm-charts.git
#    helm install -f helm-values-kafka-connect.yaml --name kafka cp-helm-charts

## ------------------------------------------------------
## REST Proxy
## ------------------------------------------------------
cp-kafka-rest:
  enabled: true
  image: confluentinc/cp-kafka-rest
  imageTag: 5.3.1
  heapOptions: "-Xms512M -Xmx512M"
  resources: {}

## ------------------------------------------------------
## Kafka Connect
## ------------------------------------------------------
cp-kafka-connect:
  enabled: true

  # Custom Docker image: Confluent Connect base with the Zeebe connector .jar embedded
  image: berndruecker/kafka-connect-zeebe
  imageTag: latest
  imagePullPolicy: Always
  heapOptions: "-Xms512M -Xmx512M"
  resources: {}

  ## Kafka Connect properties
  ## ref: https://docs.confluent.io/current/connect/userguide.html#configuring-workers
  configurationOverrides:
    # "A list of host/port pairs to use for establishing the initial connection to the Kafka cluster"
    "bootstrap.servers": "<host1:port1,host2:port2,...>"
    "key.converter": "org.apache.kafka.connect.storage.StringConverter"
    "value.converter": "org.apache.kafka.connect.storage.StringConverter"
    "key.converter.schemas.enable": "false"
    "value.converter.schemas.enable": "false"
    "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter"
    "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter"
```
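The Helm values only bring up the Connect worker; the Zeebe connector itself is registered afterwards through the Connect REST API (port 8083). A hedged sketch of the request body for `POST /connectors`; the connector class and its specific properties are assumptions here and should be taken from the kafka-connect-zeebe README:

```json
{
  "name": "zeebe-source",
  "config": {
    "connector.class": "io.zeebe.kafka.connect.ZeebeSourceConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter"
  }
}
```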

Accessing Kafka from outside the cluster

Besides running `kubectl port-forward`, you must also edit the YAML of the kafka-headless service:

"I didn't use Helm for Kafka but came across the same issue. A temporary solution that worked for me is adding the following annotation to the headless Kafka service; that way, brokers are able to discover each other via DNS even when not all pods are running."

```yaml
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
```
 

"However, right now I'm trying to add a livenessProbe to restart the Kafka pod when this situation occurs, but so far I haven't found a way to check whether Kafka has successfully joined the cluster (the brokerId of the broken Kafka pod is still present in ZooKeeper, so I can't use `echo dump | nc localhost 2181 | grep brokers` for that)."
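One partial workaround (a sketch, not a verified fix for the scenario above): probe the broker listener port itself, so the pod is restarted when the listener stops answering. Note this only checks that the port responds; it does not prove the broker has rejoined the cluster:

```yaml
livenessProbe:
  tcpSocket:
    port: 9092          # broker listener; responds even before full cluster membership
  initialDelaySeconds: 60
  periodSeconds: 30
```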

Messages are not being consumed:

If the application is running in a different namespace from Kafka, the connection to Kafka must use the fully qualified DNS name:

<kafka_service>.<kafka_namespace>.svc.cluster.local:9092

Exemplo:

kafka-cp-kafka.kafka.svc.cluster.local:9092
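The naming rule above can be sketched as a small helper (illustrative only; the function name is made up):

```python
def kafka_bootstrap(service: str, namespace: str, port: int = 9092) -> str:
    """Build the in-cluster FQDN for a Kafka service in another namespace."""
    return f"{service}.{namespace}.svc.cluster.local:{port}"

# e.g. the service "kafka-cp-kafka" in namespace "kafka":
print(kafka_bootstrap("kafka-cp-kafka", "kafka"))
# → kafka-cp-kafka.kafka.svc.cluster.local:9092
```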

https://medium.com/wix-engineering/troubleshooting-kafka-for-2000-microservices-at-wix-986ee382fd1e


🌱 Back to Garden