Bulk Load CSV, JSON, and Avro Files to AWS Redshift

Arpit Garg
3 min read · May 11, 2021

Summary — In this blog I will load the same data (split into multiple files) in CSV, Avro, and JSON format into an AWS Redshift table and compare the load timings across Redshift cluster sizes.

Details — I have Citi Bike trips data available in CSV, JSON, and Avro format in AWS S3. I will load this data into a Redshift table without any configuration changes, capture the timings, and also see how the load time is affected when I change the Redshift cluster configuration.

Load timings recorded — (no other load was running on the cluster; for some cases I also captured timings for the COPY command with COMPUPDATE OFF and STATUPDATE OFF)
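The COPY durations themselves can be pulled from Redshift's STL_QUERY system table. A minimal sketch (my own query, not part of the original post) that lists each COPY against the citibike tables with its elapsed seconds:

select query,
       trim(querytxt) as copy_sql,
       datediff(second, starttime, endtime) as duration_s
from stl_query
where querytxt ilike '%copy citibike_trips%'
order by starttime desc;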

Bulk Load CSV files

drop table citibike_trips_csv;

CREATE TABLE citibike_trips_csv
(
  tripduration            INTEGER,
  starttime               DATETIME,
  stoptime                DATETIME,
  start_station_id        INTEGER,
  start_station_name      VARCHAR(100),
  start_station_latitude  DOUBLE PRECISION,
  start_station_longitude DOUBLE PRECISION,
  end_station_id          INTEGER,
  end_station_name        VARCHAR(100),
  end_station_latitude    DOUBLE PRECISION,
  end_station_longitude   DOUBLE PRECISION,
  bikeid                  INTEGER,
  usertype                VARCHAR(50),
  birth_year              VARCHAR(10),
  gender                  VARCHAR(10)
);

copy citibike_trips_csv
(tripduration, starttime, stoptime, start_station_id, start_station_name,
 start_station_latitude, start_station_longitude, end_station_id, end_station_name,
 end_station_latitude, end_station_longitude, bikeid, usertype, birth_year, gender)
from 's3://agbucket02/nyc_citibike_trip_csv'
iam_role 'arn:aws:iam::yourroleID:role/yourrolename'
csv gzip IGNOREHEADER 1 IGNOREBLANKLINES;

(to turn off statistics computation and automatic compression analysis, add STATUPDATE OFF COMPUPDATE OFF to the COPY command, as shown below)
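For example, the same CSV COPY with both options appended looks like this (bucket, prefix, and IAM role placeholders are the same as above):

copy citibike_trips_csv
from 's3://agbucket02/nyc_citibike_trip_csv'
iam_role 'arn:aws:iam::yourroleID:role/yourrolename'
csv gzip IGNOREHEADER 1 IGNOREBLANKLINES
STATUPDATE OFF COMPUPDATE OFF;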

Avg size of each file — 15.6MB (compressed)

Avg rowcount in each file — 584544

Total rowcount loaded — 33319019
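To sanity-check these per-file and total rowcounts right after a COPY, the STL_LOAD_COMMITS table and PG_LAST_COPY_COUNT() can be queried from the same session (a sketch, not part of the original load script):

select trim(filename) as s3_file, lines_scanned
from stl_load_commits
where query = pg_last_copy_id()
order by s3_file;

select pg_last_copy_count() as rows_loaded;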

Bulk Load JSON files — We need a jsonpaths file as below.

jsonpaths.txt

{
  "jsonpaths": [
    "$.tripduration",
    "$.starttime",
    "$.stoptime",
    "$.start_station_id",
    "$.start_station_name",
    "$.start_station_latitude",
    "$.start_station_longitude",
    "$.end_station_id",
    "$.end_station_name",
    "$.end_station_latitude",
    "$.end_station_longitude",
    "$.bikeid",
    "$.usertype",
    "$.birth_year",
    "$.gender"
  ]
}

drop table if exists citibike_trips_json;

CREATE TABLE citibike_trips_json
(
  tripduration            INTEGER          encode zstd,
  starttime               DATETIME         encode zstd,
  stoptime                DATETIME         encode zstd,
  start_station_id        INTEGER          encode zstd,
  start_station_name      VARCHAR(100)     encode zstd,
  start_station_latitude  DOUBLE PRECISION encode zstd,
  start_station_longitude DOUBLE PRECISION encode zstd,
  end_station_id          INTEGER          encode zstd,
  end_station_name        VARCHAR(100)     encode zstd,
  end_station_latitude    DOUBLE PRECISION encode zstd,
  end_station_longitude   DOUBLE PRECISION encode zstd,
  bikeid                  INTEGER          encode zstd,
  usertype                VARCHAR(50)      encode zstd,
  birth_year              VARCHAR(10)      encode zstd,
  gender                  VARCHAR(10)      encode zstd
);
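For reference, each line in the source JSON files is one object whose keys match the jsonpaths above; a record has roughly this shape (the values below are placeholders for illustration, not rows copied from the dataset):

{"tripduration": 364, "starttime": "2016-01-01 00:00:41", "stoptime": "2016-01-01 00:06:46", "start_station_id": 480, "start_station_name": "W 53 St & 10 Ave", "start_station_latitude": 40.7667, "start_station_longitude": -73.9906, "end_station_id": 524, "end_station_name": "W 43 St & 6 Ave", "end_station_latitude": 40.7553, "end_station_longitude": -73.9832, "bikeid": 23292, "usertype": "Subscriber", "birth_year": "1991", "gender": "1"}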

Load JSON

copy citibike_trips_json
from 's3://agbucket02/nyc_citybike_json/'
iam_role 'arn:aws:iam::yourroleid:role/yourrolename'
FORMAT AS JSON 's3://agbucket02/jsonpaths.txt' GZIP;

copy citibike_trips_json
from 's3://agbucket02/nyc_citybike_json/'
iam_role 'arn:aws:iam::yourroleid:role/yourrolename'
FORMAT AS JSON 's3://agbucket02/jsonpaths.txt' GZIP
STATUPDATE OFF COMPUPDATE OFF;
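If the JSON COPY rejects rows (malformed records, type mismatches, etc.), the reasons are logged in STL_LOAD_ERRORS; a quick way to inspect the most recent failures (again, my own query, not from the original post):

select trim(filename) as s3_file, line_number, colname, trim(err_reason) as reason
from stl_load_errors
order by starttime desc
limit 20;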

Avg size of each file — 18.8MB (compressed)

Avg rowcount in each file — 584544

Total rowcount loaded — 33319019

Bulk Load Avro files — we will use the same jsonpaths file that we used for the JSON load.

CREATE TABLE citibike_trips_avro
(
  tripduration            INTEGER,
  starttime               BIGINT,
  stoptime                BIGINT,
  start_station_id        INTEGER,
  start_station_name      VARCHAR(100),
  start_station_latitude  DOUBLE PRECISION,
  start_station_longitude DOUBLE PRECISION,
  end_station_id          INTEGER,
  end_station_name        VARCHAR(100),
  end_station_latitude    DOUBLE PRECISION,
  end_station_longitude   DOUBLE PRECISION,
  bikeid                  INTEGER,
  usertype                VARCHAR(50),
  birth_year              VARCHAR(10),
  gender                  VARCHAR(10)
);

truncate table citibike_trips_avro;

copy citibike_trips_avro
from 's3://agbucket02/nyc_citibike_trip_avro/'
iam_role 'arn:aws:iam::yourroleid:role/yourrolename'
FORMAT AS AVRO 's3://agbucket02/jsonpaths.txt';
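Note that starttime and stoptime are BIGINT here rather than DATETIME, because the Avro files carry the timestamps as numeric epoch values. Assuming they are epoch milliseconds (an assumption about how these particular Avro files were generated; adjust the divisor if they are seconds or microseconds), they can be converted back for querying:

-- divisor of 1000 assumes epoch milliseconds
select timestamp 'epoch' + (starttime / 1000) * interval '1 second' as starttime_ts,
       timestamp 'epoch' + (stoptime  / 1000) * interval '1 second' as stoptime_ts
from citibike_trips_avro
limit 5;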

Avg size of each file — 72.7MB

Avg rowcount in each file — 584544

Total rowcount loaded — 33319019
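As a final check that all three formats landed the same data, a simple rowcount comparison across the three tables (each should return 33319019):

select 'csv' as fmt, count(*) as cnt from citibike_trips_csv
union all
select 'json', count(*) from citibike_trips_json
union all
select 'avro', count(*) from citibike_trips_avro;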

Originally published at http://clouddatablog.blogspot.com.
