Creating a Streaming Data Pipeline With Apache Kafka

Checkpoints

  • Create an Apache Kafka Deployment Manager deployment

  • Configure the Kafka VM instance

  • Create topics in Kafka

  • Process the input data with Kafka Streams

45 minutes | Credits: 5

This lab was developed with our partner, Confluent. Your personal information may be shared with Confluent, the lab sponsor, if you have opted-in to receive product updates, announcements, and offers in your Account Profile.

GSP730

Google Cloud Self-Paced Labs

Overview

In this lab, you create a streaming data pipeline with Kafka and get a hands-on look at the Kafka Streams API. You will run a Java application that uses the Kafka Streams library to showcase a simple end-to-end data pipeline powered by Apache Kafka®.
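
The Kafka Streams API expresses this kind of pipeline as a small topology: read lines of text from an input topic, split them into words, count each word, and write the running counts to an output topic. As a minimal sketch only (the broker address and the topic names streams-plaintext-input and streams-wordcount-output are assumptions here, not values prescribed by the lab), a WordCount topology could look like this:

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {

    public static void main(String[] args) {
        // Assumed settings: a single broker on localhost and example topic names.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read lines of text from the input topic.
        KStream<String, String> textLines = builder.stream("streams-plaintext-input");

        // Split each line into words, group by word, and count occurrences.
        KTable<String, Long> wordCounts = textLines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
            .groupBy((key, word) -> word)
            .count();

        // Write the running counts to the output topic.
        wordCounts.toStream()
            .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the topology cleanly on exit.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

The lab supplies the WordCount application you run; the sketch above is only meant to show roughly what Kafka Streams code of this kind looks like.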

Objectives

In this lab, you will:

  • Start a Kafka cluster on a Compute Engine single machine

  • Write example input data to a Kafka topic, using the console producer included in Kafka

  • Process the input data with a Java application called WordCount that uses the Kafka Streams library

  • Inspect the output data of the application, using the console consumer included in Kafka (a Java-client sketch of these two steps follows this list)
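
The lab itself uses the console tools shipped with Kafka (bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh) to write the example input and inspect the output. Purely as an illustration of the same flow with the Java clients (the broker address, consumer group id, and topic names below are assumptions), producing one line of input and reading back the word counts could look like this:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PipelineClientSketch {

    public static void main(String[] args) {
        // Assumed broker address and topic names; the lab uses the console tools instead.
        String bootstrap = "localhost:9092";

        // Produce a line of example text to the input topic.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("streams-plaintext-input", "all streams lead to kafka"));
        }

        // Consume the counts that the Streams application writes to the output topic.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "wordcount-inspector");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
        try (KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("streams-wordcount-output"));
            ConsumerRecords<String, Long> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, Long> record : records) {
                System.out.printf("%s: %d%n", record.key(), record.value());
            }
        }
    }
}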
