Introduction to Apache Kafka

Apache Kafka is an open-source, distributed, durable, and highly available publish-subscribe messaging system for building streaming data pipelines. It can be deployed as clusters on commodity hardware or within virtual machines. Confluent helps developers learn and get started quickly with Kafka and other streaming technologies.

Confluent Certified Developer for Apache Kafka (CCDAK)

The Confluent Certified Developer for Apache Kafka (CCDAK) is a 16-hour program designed to help you understand, architect, design, deploy, and operate streaming applications on Apache Kafka. By completing this training, you will be able to confidently create stateful stream processing applications that can persist and propagate state across multiple Kafka clusters. This course is ideal for experienced streaming application developers, system administrators, and business analysts who are comfortable working with relational database systems. Join this training and become certified in streaming with Kafka through a course from the creators of the Confluent Platform, with our team serving as your guide. Note: This training is for development only; it does not certify you to work on production clusters.

Select the Kafka Streams joins that are always windowed joins.


A. KStream-KTable join
B. KStream-GlobalKTable join
C. KStream-KStream join
D. KTable-KTable join
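To build intuition for why some joins must be windowed: a KStream is an unbounded sequence of events, so joining two KStreams requires a time window to bound how far apart matching records may be. The toy sketch below (plain Python, not Kafka Streams' actual Java API; record shapes and names are illustrative) pairs records from two event streams only when their timestamps fall within the join window.

```python
from datetime import datetime, timedelta

def windowed_join(left, right, window=timedelta(minutes=5)):
    """Pair (key, value, timestamp) records from two unbounded event
    streams whose timestamps fall within the join window -- the idea
    behind a KStream-KStream join, which is always windowed because
    neither side is a finite, lookup-able table."""
    joined = []
    for lk, lv, lts in left:
        for rk, rv, rts in right:
            # Join only on matching keys AND timestamps inside the window.
            if lk == rk and abs(lts - rts) <= window:
                joined.append((lk, (lv, rv)))
    return joined

clicks = [("user1", "click", datetime(2024, 1, 1, 12, 0))]
views = [
    ("user1", "view", datetime(2024, 1, 1, 12, 3)),   # within 5 min: joins
    ("user1", "view", datetime(2024, 1, 1, 13, 0)),   # 60 min later: dropped
]
print(windowed_join(clicks, views))
# → [('user1', ('click', 'view'))]
```

Joins against a KTable or GlobalKTable, by contrast, are lookups into a materialized current state, so no window is involved.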

You are sending messages with keys to a topic. To increase throughput, you decide to increase the number of partitions of the topic. Select all that apply.


A. New records may get written to a different partition
B. All the existing records will get rebalanced among the partitions to balance the load
C. New records with the same key will get written to the partition where old records with that key were written
D. Old records will stay in their partitions
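The key behavior being tested here can be seen with a simplified stand-in for Kafka's default partitioner (which actually applies murmur2 to the key bytes; the sketch below uses CRC32 just to get a stable hash, and the key name is illustrative). A key maps to `hash(key) % num_partitions`, so changing the partition count can change where new records for an existing key land, while records already written stay where they are.

```python
import zlib

def pick_partition(key: str, num_partitions: int) -> int:
    """Sticky key-to-partition mapping via hashing -- a simplified
    stand-in for Kafka's default partitioner (real Kafka uses murmur2
    on the serialized key; CRC32 is used here only for a stable demo)."""
    return zlib.crc32(key.encode()) % num_partitions

# While the partition count is fixed, the same key always lands in the
# same partition:
assert pick_partition("order-42", 3) == pick_partition("order-42", 3)

# After increasing the partition count, the modulus changes, so new
# records for the same key MAY land in a different partition. Existing
# records are never moved, so old and new records for one key can end
# up split across partitions.
print(pick_partition("order-42", 3), pick_partition("order-42", 6))
```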

To read data from a topic, which of the following configurations do the consumers need?


A. the list of brokers that have the data, the topic name, and the partitions list
B. any broker, and the list of topic partitions
C. any broker to connect to, and the topic name
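The point behind this question: a consumer only needs one reachable broker (any broker can serve cluster metadata) and the topic name; it discovers the full broker list and partition layout on its own. A minimal configuration sketch, shown here as a Python dict whose keys mirror the real Kafka consumer settings (`bootstrap.servers`, `group.id`, `auto.offset.reset`); the broker address, group, and topic names are hypothetical:

```python
# Minimal consumer configuration sketch. Note there is no partition
# list and no "list of brokers that have the data" -- the client
# fetches that metadata from whichever bootstrap broker it reaches.
consumer_config = {
    "bootstrap.servers": "broker1:9092",  # any reachable broker in the cluster
    "group.id": "demo-group",             # consumer group for offset tracking
    "auto.offset.reset": "earliest",      # where to start with no committed offset
}
topics = ["purchases"]  # topic name(s) to subscribe to (hypothetical)
print(consumer_config, topics)
```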