AWS Glue truncate Redshift table
I have created a Glue job that copies data from S3 (csv file) to Redshift. It works and populates the desired table.
However, I need to purge the table during this process as I am left with duplicate records after the process completes.
I'm looking for a way to add this purge to the Glue process. Any advice would be appreciated.
Thanks.
You need to modify the auto-generated code provided by Glue: connect to Redshift using a Spark JDBC connection and execute the purge query before the load runs.
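A minimal sketch of what that modification could look like inside the PySpark script that Glue generates. It opens a plain JDBC connection through the JVM behind the Spark session and runs a TRUNCATE before the load; the JDBC URL, user, password, and table name are placeholders, not values from the original post.

```python
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

# Placeholder connection details -- replace with your own cluster settings.
jdbc_url = "jdbc:redshift://my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com:5439/mydb"
db_user = "my_user"
db_password = "my_password"
target_table = "public.my_table"

# Open a JDBC connection via the JVM that backs the Spark session and
# purge the target table before the S3 -> Redshift copy happens.
conn = sc._gateway.jvm.java.sql.DriverManager.getConnection(jdbc_url, db_user, db_password)
stmt = conn.createStatement()
stmt.executeUpdate("TRUNCATE TABLE {}".format(target_table))
stmt.close()
conn.close()

# ... the rest of the auto-generated script (read from S3, write to Redshift)
# continues unchanged from here.
```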
To spin up the Glue containers in the Redshift VPC, specify the connection in the Glue job so it has network access to the Redshift cluster.
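As an alternative sketch, if the job writes to Redshift through the Glue connection named in the job properties, the purge can also be issued as a `preactions` statement on the write itself (a documented connection option for Redshift targets). The connection name, database, table, and temp bucket below are placeholders; `output_dynamic_frame` stands for the DynamicFrame the generated script builds from the S3 csv data.

```python
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=output_dynamic_frame,                 # DynamicFrame built from the S3 csv data
    catalog_connection="my-redshift-connection",
    connection_options={
        "database": "mydb",
        "dbtable": "public.my_table",
        # Runs before the load, so records from earlier runs are removed first.
        "preactions": "TRUNCATE TABLE public.my_table;",
    },
    redshift_tmp_dir="s3://my-temp-bucket/redshift-tmp/",
)
```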
Hope this helps.