03-30-2022 06:19 PM
We're trying to move CSV files in an S3 bucket from one path to another. It seems the standard S3 connector requires the file to be downloaded, re-uploaded, and then the original file deleted. Does anyone have thoughts on a better way to perform the file move?
Constraint: We do not have an OPA (on-prem agent), so we are not able to run AWS CLI commands.
Alternative considered:
I'm considering using a Snowflake command to COPY INTO an S3 stage from an S3 stage, but I'm not sure how to make that fully dynamic so the command doesn't need to specify columns.
COPY INTO @workato_s3_stage/new_path/test.csv
FROM (SELECT $1 FROM @workato_s3_stage/original_path/test.csv)
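One possible way to make this column-agnostic (an untested sketch, and the file format name `one_line_per_row` is my own placeholder): Snowflake lets a CSV file format declare `FIELD_DELIMITER = NONE`, which reads each line of the file into `$1` as a single value regardless of how many columns it contains. Writing those lines back out with the same format should reproduce the file without ever naming columns:

```sql
-- Sketch, not verified: a file format that treats each line as one column.
CREATE FILE FORMAT IF NOT EXISTS one_line_per_row
  TYPE = CSV
  FIELD_DELIMITER = NONE;

COPY INTO @workato_s3_stage/new_path/test.csv
FROM (
  SELECT $1
  FROM @workato_s3_stage/original_path/test.csv
  (FILE_FORMAT => 'one_line_per_row')
)
FILE_FORMAT = (FORMAT_NAME = 'one_line_per_row')
SINGLE = TRUE      -- emit one output file instead of parallel parts
OVERWRITE = TRUE;
```

Caveats to check before relying on this: `SINGLE = TRUE` has a default `MAX_FILE_SIZE` cap, quoting/escaping options on unload could alter lines, and row order is not guaranteed to match the source file.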
Anyone have any thoughts on the Snowflake command option or any other possible alternatives?
Thanks!
03-31-2022 12:20 PM
Instead of connecting directly to the bucket with the S3 connector, you can use the AWS Transfer Family service to create an SFTP server on top of your S3 bucket.
Then, in Workato, you can use the SFTP connector, which provides an action that moves a file to a different folder.
03-31-2022 03:29 PM
You can map the file contents datapill from the Download step to the file contents input of the Upload step.
Then, you can delete the original file.
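For comparison, the Download → Upload → Delete pattern above is what S3's own server-side `CopyObject` API avoids: S3 copies the object internally, so the bytes never round-trip through the client. The Workato S3 connector doesn't expose this, but a sketch in boto3 shows what a move amounts to (bucket name, prefixes, and the `moved_key` helper are illustrative placeholders):

```python
def moved_key(key: str, old_prefix: str, new_prefix: str) -> str:
    """Rewrite an object key from one path prefix to another."""
    if not key.startswith(old_prefix):
        raise ValueError(f"{key!r} does not start with {old_prefix!r}")
    return new_prefix + key[len(old_prefix):]


def move_object(bucket: str, key: str, old_prefix: str, new_prefix: str) -> None:
    """Move an S3 object by server-side copy + delete (no download)."""
    import boto3  # imported here so the helper above works without boto3

    s3 = boto3.client("s3")
    dest = moved_key(key, old_prefix, new_prefix)
    # Server-side copy: S3 duplicates the object itself.
    s3.copy_object(Bucket=bucket, Key=dest,
                   CopySource={"Bucket": bucket, "Key": key})
    s3.delete_object(Bucket=bucket, Key=key)
```

A call like `move_object("my-bucket", "original_path/test.csv", "original_path/", "new_path/")` would perform the whole move in two API calls, without the file contents ever leaving AWS.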