Cloud Storage

Lab · 1 hour 30 minutes · 5 Credits · Introductory

Overview

Cloud Storage is a fundamental resource in Google Cloud, with many advanced features. In this lab, you exercise several of those features that could be useful in your designs. You explore Cloud Storage using both the Cloud Console and the gsutil tool.

Objectives

In this lab, you learn how to perform the following tasks:

  • Create and use buckets
  • Set access control lists to restrict access
  • Use your own encryption keys
  • Implement version controls
  • Use directory synchronization
  • Share a bucket across projects using IAM

Qwiklabs setup

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Task 1. Preparation

Create a Cloud Storage bucket

  1. On the Navigation menu (Navigation menu icon), click Cloud Storage > Buckets.
Note: A bucket must have a globally unique name. You could use part of your PROJECT_ID_1 in the name to help make it unique. For example, if the PROJECT_ID_1 is myproj-154920, your bucket name might be storecore154920.
  2. Click Create.
  3. Specify the following, and leave the remaining settings as their defaults:

     Property                                           Value (type value or select option as specified)
     Name                                               Enter a globally unique name
     Location type                                      Region
     Region
     Enforce public access prevention on this bucket    unchecked
     Access control                                     Fine-grained (object-level permission in addition to your bucket-level permissions)

  4. Make a note of the bucket name. It will be used later in this lab and referred to as [BUCKET_NAME_1].
  5. Click Create.
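
If you prefer the command line, a bucket with equivalent settings could also be created from Cloud Shell. A minimal sketch, assuming BUCKET_NAME_1 already holds your chosen name and using us-central1 as an illustrative region (the lab itself uses the console):

# -l sets the location; -b off keeps fine-grained (per-object) ACLs enabled.
gsutil mb -l us-central1 -b off gs://$BUCKET_NAME_1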

Click Check my progress to verify the objective: Create a Cloud Storage bucket

Download a sample file using CURL and make two copies

  1. In the Cloud Console, click Activate Cloud Shell (Cloud Shell).
  2. If prompted, click Continue.
  3. Store [BUCKET_NAME_1] in an environment variable:
export BUCKET_NAME_1=<enter bucket name 1 here>
  4. Verify it with echo:
echo $BUCKET_NAME_1
  5. Run the following command to download a sample file (this sample file is a publicly available Hadoop documentation HTML file):
curl \
https://hadoop.apache.org/docs/current/\
hadoop-project-dist/hadoop-common/\
ClusterSetup.html > setup.html
  6. To make copies of the file, run the following commands:
cp setup.html setup2.html
cp setup.html setup3.html

Task 2. Access control lists (ACLs)

Copy the file to the bucket and configure the access control list

  1. Run the following command to copy the first file to the bucket:
gcloud storage cp setup.html gs://$BUCKET_NAME_1/
  2. To get the default access list that's been assigned to setup.html, run the following commands:
gsutil acl get gs://$BUCKET_NAME_1/setup.html > acl.txt
cat acl.txt
  3. To set the access list to private and verify the results, run the following commands:
gsutil acl set private gs://$BUCKET_NAME_1/setup.html
gsutil acl get gs://$BUCKET_NAME_1/setup.html > acl2.txt
cat acl2.txt
  4. To update the access list to make the file publicly readable, run the following commands:
gsutil acl ch -u AllUsers:R gs://$BUCKET_NAME_1/setup.html
gsutil acl get gs://$BUCKET_NAME_1/setup.html > acl3.txt
cat acl3.txt
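
For reference, gsutil acl ch can grant access to a single user or group as well as to AllUsers. A hedged sketch with a hypothetical account name:

# Grant read access to one specific (hypothetical) user instead of everyone:
gsutil acl ch -u student@example.com:R gs://$BUCKET_NAME_1/setup.html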

Click Check my progress to verify the objective: Make file publicly readable

Examine the file in the Cloud Console

  1. In the Cloud Console, on the Navigation menu (Navigation menu icon), click Cloud Storage > Buckets.
  2. Click [BUCKET_NAME_1].
  3. Verify that for file setup.html, Public access has a Public link available.

Delete the local file and copy back from Cloud Storage

  1. Return to Cloud Shell. If necessary, click Activate Cloud Shell (Cloud Shell).
  2. Run the following command to delete the setup file:
rm setup.html
  3. To verify that the file has been deleted, run the following command:
ls
  4. To copy the file from the bucket again, run the following command:
gcloud storage cp gs://$BUCKET_NAME_1/setup.html setup.html

Task 3. Customer-supplied encryption keys (CSEK)

Generate a CSEK key

For the next step, you need an AES-256 base-64 key.

  1. Run the following command to create a key:
python3 -c 'import base64; import os; print(base64.encodebytes(os.urandom(32)))'

Result (this is example output):

b'tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=\n'
  2. Copy the value of the generated key, excluding b' and \n', from the command output. The key should be in the form tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=.
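
If you prefer not to use Python, OpenSSL can generate an equivalent base64-encoded 256-bit key, without the surrounding b'…' wrapper (an optional alternative; the lab does not require it):

# 32 random bytes = 256 bits, printed base64-encoded:
openssl rand -base64 32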

Modify the boto file

The encryption controls are contained in a gsutil configuration file named .boto.

  1. To view and open the boto file, run the following commands:
ls -al
nano .boto

Note: If the .boto file is empty, close the nano editor with Ctrl+X and generate a new .boto file using the gsutil config -n command. Then, try opening the file again with the above commands.

If the .boto file is still empty, you might have to locate it using the gsutil version -l command.
  2. Locate the line with "#encryption_key="
Note: The bottom of the nano editor provides you with shortcuts to quickly navigate files. Use the Where Is shortcut to quickly locate the line with the #encryption_key=.
  3. Uncomment the line by removing the # character, and paste the key you generated earlier at the end.

Example (this is an example):

Before: # encryption_key=
After: encryption_key=tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
  4. Press Ctrl+O, ENTER to save the boto file, and then press Ctrl+X to exit nano.
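
If you would rather script this edit than use nano, a sed one-liner along the following lines should work. This is a sketch that assumes the commented line matches the pattern shown; check the exact spacing in your own .boto file:

# Example key only; substitute the key you generated:
KEY='tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0='
# Uncomment the encryption_key line and append the key:
sed -i "s|^# *encryption_key=.*|encryption_key=$KEY|" ~/.boto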

Upload the remaining setup files (encrypted) and verify in the Cloud Console

  1. To upload the remaining setup.html files, run the following commands:
gsutil cp setup2.html gs://$BUCKET_NAME_1/
gsutil cp setup3.html gs://$BUCKET_NAME_1/
  2. Return to the Cloud Console.
  3. Click [BUCKET_NAME_1]. Both setup2.html and setup3.html files show that they are customer-encrypted.

Click Check my progress to verify the objective: Customer-supplied encryption keys (CSEK)

Delete local files, copy new files, and verify encryption

  1. To delete your local files, run the following command in Cloud Shell:
rm setup*
  2. To copy the files from the bucket again, run the following command:
gsutil cp gs://$BUCKET_NAME_1/setup* ./
  3. To cat the encrypted files to see whether they made it back, run the following commands:
cat setup.html
cat setup2.html
cat setup3.html

Task 4. Rotate CSEK keys

Move the current CSEK encrypt key to decrypt key

  1. Run the following command to open the .boto file:
nano .boto
  2. Comment out the current encryption_key line by adding the # character to the beginning of the line.
Note: The bottom of the nano editor provides you with shortcuts to quickly navigate files. Use the Where Is shortcut to quickly locate the line with the encryption_key=.
  3. Uncomment decryption_key1 by removing the # character, and copy the current key from the encryption_key line to the decryption_key1 line.

Result (this is example output):

Before:
encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
# decryption_key1=

After:
# encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
  4. Press Ctrl+O, ENTER to save the boto file, and then press Ctrl+X to exit nano.
Note: In practice, you would delete the old CSEK key from the encryption_key line.

Generate another CSEK key and add to the boto file

  1. Run the following command to generate a new key:
python3 -c 'import base64; import os; print(base64.encodebytes(os.urandom(32)))'
  2. Copy the value of the generated key, excluding b' and \n', from the command output. The key should be in the form tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=.
  3. To open the boto file, run the following command:
nano .boto
  4. Uncomment the encryption_key line and paste the new key as its value.

Result (this is example output):

Before: # encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
After: encryption_key=HbFK4I8CaStcvKKIx6aNpdTse0kTsfZNUjFpM+YUEjY=
  5. Press Ctrl+O, ENTER to save the boto file, and then press Ctrl+X to exit nano.

Rewrite the key for file 1 and comment out the old decrypt key

When a file is encrypted, rewriting the file decrypts it using the decryption_key1 that you previously set, and encrypts the file with the new encryption_key.

You are rewriting the key for setup2.html, but not for setup3.html, so that you can see what happens if you don't rotate the keys properly.

  1. Run the following command:
gsutil rewrite -k gs://$BUCKET_NAME_1/setup2.html
  2. To open the boto file, run the following command:
nano .boto
  3. Comment out the current decryption_key1 line by adding the # character back in.

Result (this is example output):

Before: decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
After: # decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
  4. Press Ctrl+O, ENTER to save the boto file, and then press Ctrl+X to exit nano.
Note: In practice, you would delete the old CSEK key from the decryption_key1 line.

Download setup2 and setup3

  1. To download setup2.html, run the following command:
gsutil cp gs://$BUCKET_NAME_1/setup2.html recover2.html
  2. To download setup3.html, run the following command:
gsutil cp gs://$BUCKET_NAME_1/setup3.html recover3.html

Note: What happened? setup3.html was not rewritten with the new key, so it can no longer be decrypted, and the copy will fail.

You have successfully rotated the CSEK keys.

Task 5. Enable lifecycle management

View the current lifecycle policy for the bucket

  • Run the following command to view the current lifecycle policy:
gsutil lifecycle get gs://$BUCKET_NAME_1

Note: There is no lifecycle configuration. You create one in the next steps.

Create a JSON lifecycle policy file

  1. To create a file named life.json, run the following command:
nano life.json
  2. Paste the following value into the life.json file:
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 31}
    }
  ]
}

Note: This rule tells Cloud Storage to delete objects 31 days after creation.
  3. Press Ctrl+O, ENTER to save the file, and then press Ctrl+X to exit nano.
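
Equivalently, the policy file can be written without an editor. A minimal sketch using a heredoc:

# Write the same lifecycle policy to life.json in one step:
cat > life.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 31}}
  ]
}
EOF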

Set the policy and verify

  1. To set the policy, run the following command:
gsutil lifecycle set life.json gs://$BUCKET_NAME_1
  2. To verify the policy, run the following command:
gsutil lifecycle get gs://$BUCKET_NAME_1

Click Check my progress to verify the objective: Enable lifecycle management

Task 6. Enable versioning

View the versioning status for the bucket and enable versioning

  1. Run the following command to view the current versioning status for the bucket:
gsutil versioning get gs://$BUCKET_NAME_1

Note: The Suspended status means that versioning is not enabled.
  2. To enable versioning, run the following command:
gsutil versioning set on gs://$BUCKET_NAME_1
  3. To verify that versioning was enabled, run the following command:
gsutil versioning get gs://$BUCKET_NAME_1

Click Check my progress to verify the objective: Enable versioning

Create several versions of the sample file in the bucket

  1. Check the size of the sample file:
ls -al setup.html
  2. Open the setup.html file:
nano setup.html
  3. Delete any 5 lines from setup.html to change the size of the file.
  4. Press Ctrl+O, ENTER to save the file, and then press Ctrl+X to exit nano.
  5. Copy the file to the bucket with the -v versioning option:
gcloud storage cp -v setup.html gs://$BUCKET_NAME_1
  6. Open the setup.html file:
nano setup.html
  7. Delete another 5 lines from setup.html to change the size of the file.
  8. Press Ctrl+O, ENTER to save the file, and then press Ctrl+X to exit nano.
  9. Copy the file to the bucket with the -v versioning option:
gcloud storage cp -v setup.html gs://$BUCKET_NAME_1

List all versions of the file

  1. To list all versions of the file, run the following command:
gcloud storage ls -a gs://$BUCKET_NAME_1/setup.html
  2. Highlight and copy the name of the oldest version of the file (the first listed), referred to as [VERSION_NAME] in the next step.
Note: Make sure to copy the full path of the file, starting with gs://
  3. Store the version value in the environment variable [VERSION_NAME]:
export VERSION_NAME=<Enter VERSION name here>
  4. Verify it with echo:
echo $VERSION_NAME

Result (this is example output):

gs://BUCKET_NAME_1/setup.html#1584457872853517

Download the oldest, original version of the file and verify recovery

  1. Download the original version of the file:
gcloud storage cp $VERSION_NAME recovered.txt
  2. To verify recovery, run the following commands:
ls -al setup.html
ls -al recovered.txt

Note: You have recovered the original file from the backup version. Notice that the original is bigger than the current version because you deleted lines.

Task 7. Synchronize a directory to a bucket

Make a nested directory and sync with a bucket

Make a nested directory structure so that you can examine what happens when it is recursively copied to a bucket.

  1. Run the following commands:
mkdir firstlevel
mkdir ./firstlevel/secondlevel
cp setup.html firstlevel
cp setup.html firstlevel/secondlevel
  2. To sync the firstlevel directory on the VM with your bucket, run the following command:
gsutil rsync -r ./firstlevel gs://$BUCKET_NAME_1/firstlevel
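
Note that gsutil rsync also accepts a -d flag, which additionally deletes destination objects that no longer exist at the source, making the bucket an exact mirror of the directory. Use it with care, since it removes data; a sketch:

# -d deletes destination objects missing from the source; -r recurses:
gsutil rsync -d -r ./firstlevel gs://$BUCKET_NAME_1/firstlevel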

Examine the results

  1. In the Cloud Console, on the Navigation menu (Navigation menu icon), click Cloud Storage > Buckets.
  2. Click [BUCKET_NAME_1]. Notice the subfolders in the bucket.
  3. Click on /firstlevel and then on /secondlevel.
  4. Compare what you see in the Cloud Console with the results of the following command:
gcloud storage ls -r gs://$BUCKET_NAME_1/firstlevel
  5. Exit Cloud Shell:
exit

Task 8. Cross-project sharing

Switch to the second project

  1. Open a new incognito tab.
  2. Navigate to console.cloud.google.com to open a Cloud Console.
  3. Click the project selector dropdown in the title bar.
  4. Click All, and then click the second project provided for you in the Qwiklabs Connection Details dialog. Remember that the Project ID is a unique name across all Google Cloud projects. The second project ID will be referred to as [PROJECT_ID_2].

Prepare the bucket

  1. In the Cloud Console, on the Navigation menu (Navigation menu icon), click Cloud Storage > Buckets.
  2. Click Create.
  3. Specify the following, and leave the remaining settings as their defaults:

     Property          Value (type value or select option as specified)
     Name              Enter a globally unique name
     Location type     Region
     Region
     Access control    Fine-grained (object-level permission in addition to your bucket-level permissions)

  4. Note the bucket name. It will be referred to as [BUCKET_NAME_2] in the following steps.
  5. Click Create.

Upload a text file to the bucket

  1. Upload a file to [BUCKET_NAME_2]. Any small example file or text file will do.
  2. Note the file name (referred to as [FILE_NAME]); you will use it later.
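
If you find the command line quicker, the file can also be created and uploaded from Cloud Shell in the second project. A minimal sketch with a hypothetical file name (sample.txt):

# Create a small sample file and upload it to the second bucket:
echo "this is a sample file" > sample.txt
gcloud storage cp sample.txt gs://[BUCKET_NAME_2]/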

Create an IAM Service Account

  1. In the Cloud Console, on the Navigation menu (Navigation menu icon), click IAM & admin > Service accounts.
  2. Click Create service account.
  3. On the Service account details page, specify the Service account name as cross-project-storage.
  4. Click Create and Continue.
  5. On the Service account permissions page, specify the role as Cloud Storage > Storage Object Viewer.
  6. Click Continue and then Done.
  7. Click the cross-project-storage service account to add the JSON key.
  8. On the Keys tab, click the Add Key dropdown and select Create new key.
  9. Select JSON as the key type and click Create. A JSON key file will be downloaded. You will need to find this key file and upload it into the VM in a later step.
  10. Click Close.
  11. On your hard drive, rename the JSON key file to credentials.json.
  12. In the upper pane, switch back to [PROJECT_ID_1].
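
For reference, the console steps above could also be scripted with gcloud from a shell authenticated in the second project. A sketch using [PROJECT_ID_2] as a placeholder (the lab expects the console flow):

# Create the service account:
gcloud iam service-accounts create cross-project-storage --project=[PROJECT_ID_2]
# Grant it the Storage Object Viewer role on the project:
gcloud projects add-iam-policy-binding [PROJECT_ID_2] \
  --member="serviceAccount:cross-project-storage@[PROJECT_ID_2].iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
# Create and download a JSON key:
gcloud iam service-accounts keys create credentials.json \
  --iam-account=cross-project-storage@[PROJECT_ID_2].iam.gserviceaccount.com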

Click Check my progress to verify the objective: Create the resources in the second project

Create a VM

  1. On the Navigation menu (Navigation menu icon), click Compute Engine > VM instances.
  2. Click Create Instance.
  3. Specify the following, and leave the remaining settings as their defaults:

     Property        Value (type value or select option as specified)
     Name            crossproject
     Region
     Zone
     Series          E2
     Machine type    e2-medium
     Boot disk       Debian GNU/Linux 11 (bullseye)

  4. Click Create.
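
The equivalent gcloud command would look roughly like the following; the zone shown is a placeholder for whichever zone your lab assigns:

# Create the VM with the same name, machine type, and boot disk:
gcloud compute instances create crossproject \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud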

SSH to the VM

  1. For crossproject, click SSH to launch a terminal and connect.
Note: If a message like Connection via Cloud Identity-Aware Proxy Failed appears, click Connect without Identity-Aware Proxy.
  2. Store [BUCKET_NAME_2] in an environment variable:
export BUCKET_NAME_2=<enter bucket name 2 here>
  3. Verify it with echo:
echo $BUCKET_NAME_2
  4. Store [FILE_NAME] in an environment variable:
export FILE_NAME=<enter FILE_NAME here>
  5. Verify it with echo:
echo $FILE_NAME
  6. List the files in [PROJECT_ID_2]:
gcloud storage ls gs://$BUCKET_NAME_2/

Result (this is example output):

AccessDeniedException: 403 404513585876-compute@developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.

Authorize the VM

  1. To upload credentials.json through the SSH VM terminal, click on the up arrow icon (upload icon) in the upper-right corner, and then click Upload file.
  2. Select credentials.json and upload it.
  3. Click Close in the File Transfer window.
  4. Verify that the JSON file has been uploaded to the VM:
ls

Result (this is example output):

credentials.json
  5. Enter the following command in the terminal to authorize the VM to use the Google Cloud API:
gcloud auth activate-service-account --key-file credentials.json

Note: The image you are using has the Google Cloud SDK pre-installed; therefore, you don't need to initialize the Google Cloud SDK.

If you are attempting this lab in a different environment, make sure you have followed the procedures in the Install the gcloud CLI guide to install the Google Cloud SDK.
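
As a quick sanity check, you can confirm that the service account is now the active credential (output will vary by lab instance):

# The starred (active) account should be cross-project-storage@[PROJECT_ID_2].iam.gserviceaccount.com
gcloud auth list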

Verify access

  1. Retry this command:
gcloud storage ls gs://$BUCKET_NAME_2/
  2. Retry this command:
gcloud storage cat gs://$BUCKET_NAME_2/$FILE_NAME
  3. Try to copy the credentials file to the bucket:
gcloud storage cp credentials.json gs://$BUCKET_NAME_2/

Result (this is example output):

Copying file://credentials.json [Content-Type=application/json]...
AccessDeniedException: 403 cross-project-storage@qwiklabs-gcp-02-c638e3daa975.iam.gserviceaccount.com does not have storage.objects.create access to the Google Cloud Storage object.

Modify role

  1. In the upper pane, switch back to [PROJECT_ID_2].
  2. In the Cloud Console, on the Navigation menu (Navigation menu icon), click IAM & admin > IAM.
  3. Click the pencil icon for the cross-project-storage service account (You might have to scroll to the right to see this icon).
  4. Click the Storage Object Viewer role, and then select Cloud Storage > Storage Object Admin.
  5. Click Save. If you don't click Save, the change will not be made.

Click Check my progress to verify the objective: Create and verify the resources in the first project

Verify changed access

  1. Return to the SSH terminal for crossproject.
  2. Copy the credentials file to the bucket:
gcloud storage cp credentials.json gs://$BUCKET_NAME_2/

Result (this is example output):

Copying file://credentials.json [Content-Type=application/json]...
- [1 files][ 2.3 KiB/ 2.3 KiB]
Operation completed over 1 objects/2.3 KiB.

Note: In this example the VM in PROJECT_ID_1 can now upload files to Cloud Storage in a bucket that was created in another project.

Note that the project where the bucket was created is the billing project for this activity. That means that if the VM uploads a large number of files, the charges are billed to PROJECT_ID_2 (where the bucket lives), not to PROJECT_ID_1.

Task 9. Review

In this lab you learned to create and work with buckets and objects, and you learned about the following features for Cloud Storage:

  • CSEK: Customer-supplied encryption key
      • Use your own encryption keys
      • Rotate keys
  • ACL: Access control list
      • Set an ACL to private, and modify it to public
  • Lifecycle management
      • Set a policy to delete objects after 31 days
  • Versioning
      • Create a version and restore a previous version
  • Directory synchronization
      • Recursively synchronize a VM directory with a bucket
  • Cross-project resource sharing using IAM
      • Use IAM to enable access to resources across projects

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.