Distributed Image Processing in Cloud Dataproc
In this hands-on lab, you will learn how to use Apache Spark on Cloud Dataproc to distribute a computationally intensive image-processing task across a cluster of machines. This lab is part of a series of labs on processing scientific data.
What you'll learn
How to create a managed Cloud Dataproc cluster with Apache Spark pre-installed.
How to build and run jobs that use external packages that aren't already installed on your cluster.
How to shut down your cluster.
This is an advanced-level lab. Familiarity with Cloud Dataproc and Apache Spark is recommended, but not required. If you're looking to get up to speed with these services, be sure to check out the following labs:
- Dataproc: Qwik Start - Command Line
- Dataproc: Qwik Start - Console
- Introduction to Cloud Dataproc: Hadoop and Spark on Google Cloud Platform
Once you're ready, scroll down to learn more about the services that you'll be using in this lab.
Create a development machine in Compute Engine
Install software on the development machine
Create a Cloud Storage (GCS) bucket
Download some sample images into your bucket
Create a Cloud Dataproc cluster
Submit your job to Cloud Dataproc
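The steps above can be sketched with the gcloud and gsutil CLIs. This is a minimal outline, not the lab's exact commands: the instance name, cluster name, bucket name, zone/region, and jar path below are placeholders you would replace with your own values.

```shell
# 1. Create a development machine in Compute Engine.
#    "devhost" and the zone are placeholder choices.
gcloud compute instances create devhost \
    --zone=us-central1-a \
    --scopes=cloud-platform

# 2. (On the development machine) install the software the lab calls for,
#    e.g. build tools for your Spark job; exact packages depend on the lab.

# 3. Create a Cloud Storage bucket (bucket names must be globally unique).
gsutil mb gs://my-unique-bucket-name

# 4. Download/copy some sample images into the bucket.
gsutil cp *.png gs://my-unique-bucket-name/imgs/

# 5. Create a Cloud Dataproc cluster, which comes with Spark pre-installed.
gcloud dataproc clusters create mycluster \
    --region=us-central1

# 6. Submit your Spark job to the cluster; the jar path and argument
#    are hypothetical stand-ins for the lab's actual job artifact.
gcloud dataproc jobs submit spark \
    --cluster=mycluster \
    --region=us-central1 \
    --jar=gs://my-unique-bucket-name/imageproc.jar \
    -- gs://my-unique-bucket-name/imgs/

# 7. Shut down the cluster when you're done to stop incurring charges.
gcloud dataproc clusters delete mycluster --region=us-central1
```

Deleting the cluster at the end is what makes Dataproc economical for bursty workloads like this one: storage lives in the bucket, so the cluster itself can be ephemeral.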