Machine Learning & Big Data Blog

AWS Glue ETL Transformations

Walker Rowe
2 minute read

In this article, we explain how to do ETL transformations in AWS Glue. For background material, please consult How To Join Tables in AWS Glue. You first need to set up the crawlers in order to create some data.

By this point you should have created the titles DynamicFrame using the code below. Now we can show some ETL transformations.

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import *

glueContext = GlueContext(SparkContext.getOrCreate())

titles = glueContext.create_dynamic_frame.from_catalog(database="moviesandratings", table_name="movieswalker")

Select fields

This ETL transformation creates a new DynamicFrame containing only the fields in the paths list. We use toDF().show() to turn it into a Spark DataFrame and print the results.
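Based on that description, the call might look like the sketch below (the column names tconst and primaryTitle are taken from the titles data shown later in this article):

```python
# select_fields returns a new DynamicFrame containing only the
# columns listed in paths.
selected = titles.select_fields(paths=["tconst", "primaryTitle"])

# Convert to a Spark DataFrame to print the results.
selected.toDF().show()
```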



Map

The map function iterates over every record (called a DynamicRecord) in the DynamicFrame and runs a function over it.

First create a function that takes a DynamicRecord as an argument and returns the DynamicRecord. Here we take one column and make it uppercase:

def upper(rec):
    # Uppercase the primaryTitle column in place, then return the record.
    rec["primaryTitle"] = rec["primaryTitle"].upper()
    return rec

Then call that function on the DynamicFrame titles.
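Assuming the Glue Map transform (pulled in above by the awsglue.transforms import), the call would look something like:

```python
# Map.apply runs upper() over every DynamicRecord in titles and
# returns a new DynamicFrame holding the transformed records.
mapped = Map.apply(frame=titles, f=upper)
mapped.toDF().show()
```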


Apply mapping

This method changes column names and types. mappings is an array of tuples ("oldName", "oldType", "newName", "newType").
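A sketch of the call, with the new names chosen purely for illustration (only tconst and primaryTitle appear in the data shown in this article, and string types are assumed here):

```python
# Rename tconst to titleId and primaryTitle to title,
# keeping both columns as strings.
renamed = titles.apply_mapping(
    mappings=[
        ("tconst", "string", "titleId", "string"),
        ("primaryTitle", "string", "title", "string"),
    ]
)
renamed.toDF().printSchema()
```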


Split fields

A DynamicFrameCollection is a dictionary of DynamicFrames. We can create one using the split_fields function. You can then run map, flatmap, and other transforms on the collection object; Glue provides methods on the collection so that you don't need to loop through the dictionary keys and apply them to each frame individually.

Here we create a DynamicFrame Collection named dfc. The first DynamicFrame splitoff has the columns tconst and primaryTitle. The second DynamicFrame remaining holds the remaining columns.
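A sketch of that split using split_fields: the paths argument lists the columns to move into the first frame, and name1/name2 name the two resulting frames.

```python
# Split titles into two frames: splitoff gets tconst and
# primaryTitle; remaining gets every other column.
dfc = titles.split_fields(
    paths=["tconst", "primaryTitle"],
    name1="splitoff",
    name2="remaining",
)
```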

>>> dfc.keys()
dict_keys(['splitoff', 'remaining'])
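An individual frame can be pulled back out of the collection by key with select:

```python
# select returns the DynamicFrame stored under the given key.
splitoff = dfc.select("splitoff")
splitoff.toDF().show()
```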

We show the DynamicFrame splitoff below:

+---------+-------------+
|   tconst| primaryTitle|
+---------+-------------+
|tt0276132|The Fetishist|
|tt0279481|  Travel Daze|
|tt0305295|  Bich bozhiy|
+---------+-------------+

Create a DynamicFrame from a Spark DataFrame

Since we can turn a DynamicFrame into a Spark DataFrame, we can also go the other way around: create the data as a Spark DataFrame first, then convert it with the fromDF function.

We use the Apache Spark SQL Row object.

from pyspark.sql import Row

walker = Row(name='Walker', age=59)
stephen = Row(name='Stephen', age=40)
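The article stops short of the conversion itself. A sketch of the remaining steps, assuming the standard Glue fromDF signature (DataFrame, GlueContext, name) and the glueContext created earlier:

```python
from awsglue.dynamicframe import DynamicFrame

# Build a Spark DataFrame from the Row objects, then wrap it
# in a DynamicFrame named "people".
spark = glueContext.spark_session
df = spark.createDataFrame([walker, stephen])
people = DynamicFrame.fromDF(df, glueContext, "people")
people.toDF().show()
```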



These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


About the author

Walker Rowe

Walker Rowe is an American freelance tech writer and programmer living in Cyprus. He writes tutorials on analytics and big data and specializes in documenting SDKs and APIs. He is the founder of the Hypatia Academy Cyprus, an online school that teaches secondary school children programming.