Use this data connector to directly import data from your FTP server to Arm Treasure Data.
Prerequisites
- Basic knowledge of Treasure Data
Configure the Connection
You can create an instance of the FTP data connector from the TD Console. Go to Catalog, then search for and select FTP. Click Create on the FTP connector.
Create a new FTP Connector
Enter the required credentials for your remote FTP instance. Set the following parameters.
- Host: The host information of the remote FTP instance, for example an IP address.
- Port: The connection port on the remote FTP instance (default: 21).
- User: The user name used to connect to the remote FTP instance.
- Password: The password used to connect to the remote FTP instance.
- Passive mode: Use passive mode (boolean, default: checked)
- ASCII mode: Use ASCII mode instead of binary mode (boolean, default: unchecked)
- Use FTPS/FTPES: Use FTPS (SSL encryption). (boolean, default: unchecked)
Click Continue after entering the required connection details. Name the connection so you can easily find it later should you need to modify any of the connection details. Note: If you would like to share this connection with other users in your organization, check the Share with others checkbox. If this box is unchecked, this connection is visible only to you.
Click Create Connection to complete the connection. If the connection is successful, the connection you just created appears in your list of connections with the name you provided.
Transfer Data into Treasure Data
To get the data from your FTP server into Treasure Data, you can set up an ad hoc one-time transfer or a recurring transfer at a regular interval. This section describes the required steps.
Enter FTP Server Details
Provide the details of the source files that you want to ingest data from.
- Path prefix: Prefix of target files (string, required)
- Incremental: Enables incremental loading (boolean, optional, default: true). If incremental loading is enabled, the config diff for the next execution includes a last_path parameter so that the next execution skips files before that path; otherwise, last_path is not included. See the sketch after this list.
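For illustration, here is a sketch of what the stored config diff with last_path might look like after a run; the file path shown is hypothetical:

in:
  type: ftp
  ...
  incremental: true
  last_path: path/to/sample_201505.csv.gz   # hypothetical; the next run skips files before this path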
Click Next to preview the data in the next step.
Preview
If there are no errors with the connection, you see a preview of the data to be imported. If you are unable to see the preview or have any issues viewing the preview, contact support.
If you need to use non-standard options for your import, select Advanced Settings.
Advanced Settings
Advanced Settings allow you to modify aspects of your transfer to accommodate special requirements.
Transfer to
- Mode: Append – Allows you to add records to an existing table.
- Mode: Replace – Replaces the existing data in the table with the data being imported.
- Partition Key Seed: Choose the long or timestamp column that you would like to use as the partitioning time column. If you do not specify a time column, the upload time of the transfer is used in conjunction with the addition of a time column.
Data Transfer Frequency
You can choose to run the transfer only one time or schedule it to run on a given frequency of your choosing.
- When
  - Once now: Run the transfer only once.
  - Repeat…
    - Schedule: accepts @hourly, @daily, and @monthly, as well as a custom cron expression (see the example after this list).
    - Delay Transfer: add a delay to the execution time.
    - Scheduling Timezone: the timezone the data is stored in; data is also displayed in this timezone. Supports extended timezone formats like ‘Asia/Tokyo’.
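For example, the custom cron expression used later in this article runs a transfer every day at 00:10 in the scheduling timezone:

10 0 * * *    # minute=10, hour=0, every day of month, every month, every day of week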
After selecting the frequency, click Start Transfer to begin the transfer. If there are no errors, the transfer into Treasure Data will complete and the data will be available.
My Input Transfers
If you need to review the transfer you have just completed for future data connector jobs, you can view a list of your transfers in the My Input Transfers section. You can also edit the details of data transfers. If you want to see details or logs of jobs, click Last Transfer.
Optional Alternative: Use the CLI to Configure the Connector
You can also use the FTP data connector from the command line interface. The following instructions show you how to import data using the CLI.
Install ‘td’ command v0.11.9 or later
Install the most current Treasure Data Toolbelt.
$ td --version
0.11.10
Create Seed Config File (seed.yml)
First, prepare seed.yml as shown in the following example, with your FTP details. You must also specify the target file name (or a prefix for multiple files).
in:
  type: ftp
  host: ftp.example.net
  port: 21
  user: anonymous
  password: XXXX
  path_prefix: /ftp/file/path/prefix   # path of the *.csv or *.tsv file on your FTP server
out:
  mode: append
The Data Connector for FTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ –> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz).
Active Mode is NOT supported.
If you’re using FTPS, specify additional details as follows:
in:
  type: ftp
  host: ftp.example.net
  port: 21
  user: anonymous
  password: "mypassword"
  path_prefix: /ftp/file/path/prefix
  ssl: true
  ssl_verify: false
out:
  mode: append
For more details on available out modes, see Appendix.
Guess Fields (Generate load.yml)
Use connector:guess. This command automatically reads the source file and assesses (uses logic to guess) the file format.
$ td connector:guess seed.yml -o load.yml
If you open load.yml, you’ll see the assessed file format definitions including file formats, encodings, column names, and types.
in:
  type: ftp
  host: ftp.example.net
  port: 21
  user: anonymous
  password: XXXX
  path_prefix: /ftp/file/path/prefix
  parser:
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    escape: ''
    skip_header_lines: 1
    columns:
    - name: id
      type: long
    - name: company
      type: string
    - name: customer
      type: string
    - name: created_at
      type: timestamp
      format: '%Y-%m-%d %H:%M:%S'
out:
  mode: append
Then, you can preview how the system will parse the file by using the preview command.
$ td connector:preview load.yml
+-------+---------+----------+---------------------+
| id    | company | customer | created_at          |
+-------+---------+----------+---------------------+
| 11200 | AA Inc. | David    | 2015-03-31 06:12:37 |
| 20313 | BB Inc. | Tom      | 2015-04-01 01:00:07 |
| 32132 | CC Inc. | Fernando | 2015-04-01 10:33:41 |
| 40133 | DD Inc. | Cesar    | 2015-04-02 05:12:32 |
| 93133 | EE Inc. | Jake     | 2015-04-02 14:11:13 |
+-------+---------+----------+---------------------+
The guess command needs more than 3 rows and 2 columns in the source data file, because it assesses the column definitions using sample rows from the source data.
If the system guesses a column name or column type incorrectly, modify load.yml directly and preview again.
Currently, the data connector supports parsing of “boolean”, “long”, “double”, “string”, and “timestamp” types.
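If, for example, guess assessed created_at as a string, you could correct the columns section of load.yml by hand. A minimal sketch, reusing the column definitions shown above:

columns:
- name: id
  type: long
- name: company
  type: string
- name: customer
  type: string
- name: created_at
  type: timestamp                 # corrected from string by hand
  format: '%Y-%m-%d %H:%M:%S'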
You must also create a database and table in Treasure Data prior to executing the data load job. Follow these steps:
$ td database:create td_sample_db
$ td table:create td_sample_db td_sample_table
Execute Load Job
Finally, submit the load job. It may take a couple of hours depending on the size of the data. Specify the Treasure Data database and table where the data should be stored.
It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see data partitioning). If the option is not provided, the data connector chooses the first long or timestamp column as the partitioning time. The column specified by --time-column must be of either long or timestamp type.
If your data doesn’t have a time column, you can add one by using the add_time filter option. For more details, see the add_time filter plugin documentation.
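As a minimal sketch of that approach (assuming the option layout described in the add_time filter plugin documentation; treat the exact option names as illustrative), a filters section that fills a time column from the job's upload time might look like this:

in:
  ...
filters:
- type: add_time
  to_column:
    name: time          # name of the column to add
  from_value:
    mode: upload_time   # use the job's upload time as the value
out:
  mode: append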
$ td connector:issue load.yml --database td_sample_db --table td_sample_table \
    --time-column created_at
The connector:issue command assumes that you have already created the database (td_sample_db) and the table (td_sample_table). If the database or the table does not exist in TD, the connector:issue command fails. Create the database and table manually, or use the --auto-create-table option with the td connector:issue command to create them automatically:
$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table
At present, the data connector does not sort records on the server side. To use time-based partitioning effectively, sort records in files beforehand.
If you have a field called time, you don’t have to specify the --time-column option.
$ td connector:issue load.yml --database td_sample_db --table td_sample_table
Scheduled execution
You can schedule periodic data connector execution for incremental FTP file import. We take great care in distributing and operating our scheduler to achieve high availability. By using this feature, you no longer need a cron daemon on your local data center.
For the scheduled import, the Data Connector for FTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ –> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz) on the first run, and remembers the last path (path/to/sample_201505.csv.gz) for the next execution.
On the second and subsequent runs, the connector imports only files that come after the last path in alphabetical (lexicographic) order (path/to/sample_201506.csv.gz, …).
Create the schedule
A new schedule can be created using the td connector:create command. The following are required: the name of the schedule, the cron-style schedule, the database and table where the data will be stored, and the Data Connector configuration file.
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml
It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see data partitioning).
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at
The `cron` parameter also accepts three special options: `@hourly`, `@daily` and `@monthly`.
By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the -t or --timezone option. Note that the `--timezone` option supports only extended timezone formats like 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations like PST and CST are *not* supported and may lead to unexpected schedules.
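For example, to create the same daily schedule running at 00:10 Japan time (the schedule name daily_import_jst is illustrative):

$ td connector:create \
    daily_import_jst \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --timezone Asia/Tokyo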
List the Schedules
You can see the list of currently scheduled entries by running the command td connector:list.
$ td connector:list
+--------------+--------------+----------+-------+--------------+-----------------+-------------------------------------------+
| Name         | Cron         | Timezone | Delay | Database     | Table           | Config                                    |
+--------------+--------------+----------+-------+--------------+-----------------+-------------------------------------------+
| daily_import | 10 0 * * *   | UTC      | 0     | td_sample_db | td_sample_table | {"in"=>{"type"=>"ftp", "access_key_id"... |
+--------------+--------------+----------+-------+--------------+-----------------+-------------------------------------------+
Show the Setting and Schedule History
td connector:show shows the execution setting of a schedule entry.
% td connector:show daily_import
Name     : daily_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : td_sample_db
Table    : td_sample_table
Config
---
in:
  type: ftp
  host: ftp.example.net
  port: 21
  user: anonymous
  password: XXXX
  path_prefix: /ftp/file/path/prefix
  parser:
    charset: UTF-8
    ...
td connector:history shows the execution history of a schedule entry. To investigate the results of each individual run, use td job <jobid>.
% td connector:history daily_import
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-18 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-17 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-16 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-15 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-14 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-13 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-12 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-11 00:10:04 +0000 | 157      |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
8 rows in set
Delete the Schedule
td connector:delete removes the schedule.
$ td connector:delete daily_import
Appendix
Modes for out plugin
You can specify the file import mode in the out section of seed.yml.
append (default)
This is the default mode and records are appended to the target table.
in:
  ...
out:
  mode: append
replace (In td 0.11.10 and later)
This mode replaces data in the target table. Note that any manual schema changes made to the target table remain intact with this mode.
in:
  ...
out:
  mode: replace
FAQ for the FTP Data Connector
Q: I can’t connect to my FTP server, what can I do?
- Check which protocol is valid. If you intend to use FTP or FTPS, you can use this Data Connector for FTP. For SFTP, try connecting with the SFTP Data Connector.
- If you’re using a firewall, check your accepted IP range/port. Server administrators sometimes change the default port number for security reasons.
- Be aware that FTP uses TCP/21 as the default control port, but also uses arbitrary TCP ports for data transfer when you’re using passive mode. This port range depends on your server’s settings.
- Check that you’re connecting with passive mode. Active mode generally doesn’t work because it establishes the data connection from the FTP server side.
- If you’re using FTPS, there are two modes, Explicit and Implicit. Explicit mode is widely used.
Q: How do I troubleshoot data import problems?
Review the job log. Warning and errors provide information about the success of your import. For example, you can identify the source file names associated with import errors.