Complete the following steps to migrate from the legacy Salesforce data connector to the new Salesforce v2 connector. The legacy data connector uses only the REST API to import data; the new Salesforce v2 connector lets you use both Bulk import and the REST API.
Create a New Salesforce v2 Connector
Go to the Treasure Data Catalog, then search for and select Salesforce v2.
In the dialog box, enter the values that you used in your legacy Salesforce connector.
The Salesforce v2 connector requires that you remove any extra query parameters from the Login URL. For example, instead of https://login.salesforce.com/?locale=jp , use https://login.salesforce.com/ .
Enter your username (your email) and password, as well as your Client ID, Client Secret and Security Token.
Save Settings and Run the Legacy Salesforce Data Connector One Last Time
You can save your legacy settings from the TD Console or from the CLI.
In Console UI
Save the settings of your scheduled legacy Salesforce connector and run a final import
Go to Integration Hub > Sources. Search for your scheduled Salesforce source, select it, and click Edit.
In the dialog box, copy the settings to use later:
Also copy any advanced settings:
Next, you configure one final run with the legacy data connector to create a temporary table against which you can run a config-diff. You use the diff to identify and confirm the latest data imported into Treasure Data.
Before running the final import with the legacy connector, make sure that you change the schedule to one run only:
After the job is complete, copy the config_diff from the job's query information to use later.
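For reference, the config_diff is a small JSON snippet recorded with the job. The exact keys depend on the connector version, but for an incremental Salesforce import it typically records the timestamp of the last record loaded. An illustrative sketch (the values here are placeholders, not real output):

{"in": {"last_record": "2024-01-15T09:30:00.000Z"}, "out": {}}

You paste this last_record value into the new source in a later step.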
Create a New Salesforce v2 Source
Go to Integration Hub > Authentication. Search for the new Salesforce v2 connection that you created:
Click New Source. Fill in all the basic and advanced settings that you copied in the preceding steps. Then, if you want the new source to continue ingesting from the point where the legacy connector left off, fill in the Last Record field with the config_diff information that you copied from the previous job.
After completing the settings, choose the database and table to populate data into, then schedule the job and provide a name for your new data connector. Click Save, and then run the new data connector.
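After the first run of the new source completes, it is worth verifying that rows actually landed before decommissioning the legacy connector. A minimal sketch from the CLI, assuming placeholder names sample_db and lead_table for the database and table you chose above:

# Wait for the query to finish and print the row count of the newly loaded table
td query -w -d sample_db -T presto 'SELECT COUNT(1) FROM lead_table'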
In CLI and Workflow
Update the in: type value in your .yml configuration from sfdc to sfdc_v2.
For example, your existing workflow configuration might look something like this:
in:
  type: sfdc
  username: ${secret:sfdc.username}
  password: ${secret:sfdc.password}
  client_id: ${secret:sfdc.client_id}
  client_secret: ${secret:sfdc.client_secret}
  security_token: ${secret:sfdc.security_token}
  login_url: ${secret:sfdc.login_url}
  target: Lead
out: {}
exec: {}
filters: []
Your new workflow configuration would look like this:
in:
  type: sfdc_v2
  username: ${secret:sfdc.username}
  password: ${secret:sfdc.password}
  client_id: ${secret:sfdc.client_id}
  client_secret: ${secret:sfdc.client_secret}
  security_token: ${secret:sfdc.security_token}
  login_url: ${secret:sfdc.login_url}
  target: Lead
out: {}
exec: {}
filters: []
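If you run loads directly from the CLI rather than through a workflow, you can exercise the updated configuration with the td toolbelt. A minimal sketch, assuming the in: block above is saved as load.yml and that the database sample_db and table lead_table already exist (both names are placeholders):

# Optional sanity check: preview how the v2 connector reads the source
td connector:preview load.yml

# Run a one-time import into an existing database and table
td connector:issue load.yml --database sample_db --table lead_table

To recreate a schedule from the CLI, td connector:create takes a connector name, a cron expression, the target database and table, and the configuration file; run td connector:create --help to confirm the exact arguments for your toolbelt version.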
For Result Output
The SFDC connection is shared between the data connector and result output. Although nothing changes in the result output configuration itself, if you use result output you should upgrade its connection as well.
In Console UI
Save the settings of Legacy export connector
In the Treasure Data Console, go to the Query Editor and open the query that uses sfdc for its connection.
Click the sfdc connector, then copy and save the details of the existing connection to use later.
Click Delete to remove the legacy connection.
Modify the existing query (to replace the legacy connection)
In the query, click Output results. Next, set up the SFDC v2 connector by finding and selecting the SFDC v2 export connector that you created.
In the Configuration pane, specify the fields you saved in the previous step, then click Done.
Check Output results to... to verify that you are using the created output connection. Click Save.
It is strongly recommended that you create a test target and use it for the first data export, to verify that the exported data looks as expected and that the new export does not corrupt existing data. For your test case, choose an alternate Object as the test target.
In CLI:
The result type protocol needs to be updated from sfdc to sfdc_v2, for instance from:
sfdc://<username>:<password><security_token>@<hostname>/<object_name>
to:
sfdc_v2://<username>:<password><security_token>@<hostname>/<object_name>
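As a concrete (hypothetical) example, a one-off query from the CLI would pass the updated result URL with the --result option. Note that characters such as @ in the username must be URL-encoded, and that the password and security token are concatenated, as in the template above:

td query --database sample_datasets --type presto \
  --result 'sfdc_v2://me%40example.com:myPasswordMySecurityToken@login.salesforce.com/Lead' \
  'SELECT id, email FROM sample_table'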
In Workflow:
If you have a workflow that uses the SFDC result output, you can keep your result settings the same, but you need to update result_connection to the new connection name that you created above.
An example of old workflow result output settings is as follows:
+td-result-output-sfdc:
  td>: queries/sample.sql
  database: sample_datasets
  result_connection: your_old_connection_name
  result_settings:
    object: object_name
    mode: append
    concurrency_mode: parallel
    retry: 2
    split_records: 10000
An example of new workflow result output settings is as follows:
+td-result-output-sfdc:
  td>: queries/sample.sql
  database: sample_datasets
  result_connection: your_new_connection_name
  result_settings:
    object: object_name
    mode: append
    concurrency_mode: parallel
    retry: 2
    split_records: 10000