In this tutorial, you create a workflow that runs a saved job on Treasure Data to import data, runs a query, and then exports query results to an external database or service.
In this tutorial, you create a workflow project from scratch. However, the workflow definition created here can be used from either the CLI or the GUI.
Create Workflow Directory
# this creates a directory called wf_of_saved_queries
$ mkdir wf_of_saved_queries
Add the workflow definition file
Now add the workflow file itself. In this file, replace the placeholders with the names of your saved data connector job and your saved query in Treasure Data.
$ cat > saved_queries.dig <<EOF
_export:
  td:
    database: workflow_temp

+data_load_task:
  td_load>: <replace_with_saved_data_connector_job_name>

+query_task:
  td_run>: <replace_with_saved_query_name>
EOF
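For concreteness, here is what the finished file might look like with the placeholders filled in. The names `daily_s3_import` and `daily_sales_rollup` are hypothetical stand-ins; substitute the names of your own saved connector job and saved query:

```yaml
_export:
  td:
    database: workflow_temp

# "daily_s3_import" is a hypothetical saved data connector job name
+data_load_task:
  td_load>: daily_s3_import

# "daily_sales_rollup" is a hypothetical saved query name
+query_task:
  td_run>: daily_sales_rollup
```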
To find a saved data connector job, you can issue the following command using the td-toolbelt CLI:
$ td connector:list
To find a saved query to use, open the Queries section of the Treasure Data console and copy the name of the query you want directly from that page: https://console.treasuredata.com/app/queries.
Now you can run your workflow!
$ td wf run saved_queries
Just like any other workflow, you can now add a schedule to this workflow and submit it to Treasure Data to run on a regular basis.
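For example, a schedule can be added by placing a `schedule:` block at the top of saved_queries.dig; the timezone and daily run time below are just illustrations, so adjust them to your needs:

```yaml
timezone: UTC

schedule:
  daily>: 07:00:00
```

Then push the project to Treasure Data so it runs on that schedule, e.g. with `td wf push wf_of_saved_queries`.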
If you have any feedback, we welcome hearing your thoughts on our TD Workflows ideas forum. And if you have any ideas or feedback on the tutorial itself, we'd welcome them here as well!