This is a summary of new Arm Treasure Data features and improvements introduced in the July 1st, 2019 release. If you have any product feature requests, submit them to feedback.treasuredata.com.
Security: Removing HTTP access to Treasure Data
As part of improving our security, we are terminating HTTP endpoints for TD Console in the US and Tokyo regions in mid-July 2019. This change affects access to TD Console only; API access is not affected, and you do not need to change any API scripts.
After mid-July, attempts to access TD Console over HTTP will fail with an error page. Use HTTPS to access TD Console instead.
HTTP Endpoints to be terminated:
- US region: http://console.treasuredata.com
- Tokyo region: http://console.treasuredata.co.jp
HTTPS Secure Endpoints:
- US region: https://console.treasuredata.com
- Tokyo region: https://console.treasuredata.co.jp
HTTP access to the following public endpoints in the US and Tokyo regions will be terminated:
- console.treasuredata.com, auth.treasuredata.com, api-console.treasuredata.com
- console.treasuredata.co.jp, auth.treasuredata.co.jp, api-console.treasuredata.co.jp
Premium Audit Log: More Events Logged
More than 55 additional events are now captured in the Premium Audit Log. With a Premium Audit Log license, you can view, filter, and query the audit log.
Review the complete list of events captured. New and updated events are indicated in bold italic.
Custom Scripting for Workflow
Treasure Data offers a powerful extensibility framework that allows containerized Python scripts to run from within Treasure Workflow. Using the rich set of Python capabilities, you can extend and customize the product to best fit your needs. Custom scripting lets you consolidate your data management into one environment, and productionize your data science work by running Python models as part of regularly scheduled Treasure Workflow data pipelines.
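As a sketch, a custom script run from a workflow might look like the following. The function name, the toy data, and the assumption that a workflow task invokes a named function in the script are illustrative only; see the Custom Scripting documentation for the actual operator and conventions.

```python
# main.py - a minimal, hypothetical custom script for Treasure Workflow.
# In a real pipeline, this is where you would query Treasure Data and
# write model output back (for example, with an official TD client library).

def score_customers(threshold=0.5):
    """Toy 'model': return IDs of customers whose score exceeds a threshold."""
    records = [
        {"customer_id": 1, "score": 0.82},
        {"customer_id": 2, "score": 0.31},
        {"customer_id": 3, "score": 0.67},
    ]
    return [r["customer_id"] for r in records if r["score"] > threshold]


if __name__ == "__main__":
    # Run ad hoc for testing before scheduling the script in a workflow.
    print(score_customers())
```

Once the script works standalone, the same function could be scheduled as a step in a regular Treasure Workflow pipeline, so the model runs alongside your other data management tasks.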
Control Panel: TD Console Change Benefits Administrators
The control panel groups administrator-level account maintenance features together in the Treasure Data Console. This improved navigation is now implemented on all Treasure Data systems.
Some task steps in the administrative area and user profile area are impacted by this user interface change. You can review a summary of changes.
A video walkthrough of the control panel is also available.
Data Connector: Export to Criteo (Alpha)
As part of our partnership with Criteo, we have a new integration that enables you to update audiences within Criteo so that you can retarget customers. You can target both emails and cookie IDs.
Contact Technical Support at email@example.com to participate in the alpha and to sync cookies with Criteo. Documentation for this connector is available only to alpha program participants.
Data Connector: Export to Salesforce Marketing Cloud - Asynchronous APIs
In addition to the synchronous APIs, we now offer asynchronous APIs to better handle larger data sets. The asynchronous APIs provide higher availability and reliability than the synchronous APIs.
Note: To use the asynchronous APIs, you must ensure that the Data Extensions Async REST API is enabled for your SFMC account in Salesforce Marketing Cloud.
Data Connector: S3 Import Improvements
These improvements reduce the complexity of preparing files for ingestion from your S3 bucket. You can now switch to chronological order when ingesting files, and retrieve files by modified date or created date, which makes the S3 data connector easier to use for incremental loading. You can also opt to skip objects stored in the Amazon Glacier storage class.
For more information, see Amazon S3 Import.
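For illustration, an incremental S3 import source of this kind might be described with settings like the following. The bucket name and option names here are hypothetical, shown only to sketch the shape of the new capabilities; consult the Amazon S3 Import documentation for the authoritative option names.

```yaml
in:
  type: s3
  bucket: my-bucket            # hypothetical example bucket
  path_prefix: logs/daily/
  incremental: true
  # Sketch of the new options (names are illustrative, not authoritative):
  order_by: modified_time      # ingest files in chronological order
  skip_glacier_objects: true   # skip objects in the Glacier storage class
```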
Data Connector: Import from Microsoft OneDrive (Beta)
You can now ingest your data files from the cloud storage of OneDrive. This integration supports:
- Ingesting files from a specific folder
- Incremental ingestion based on file modified time
- Ingesting shared files
Contact Technical Support at firstname.lastname@example.org to learn more. Documentation for this connector is available only to beta program participants.
Data Connector: Import from Dynalyst
You can import data from Dynalyst into Treasure Data. Dynalyst is a dynamic re-targeting ad delivery platform that specializes in game applications. For more information, see Dynalyst.
Postback: Appsflyer Available in Japan
Mobile SDK: Fetching from Profiles API
Reduce the hassle of manually writing code to access the Profiles API. Use our new function in Mobile SDK: fetchUserSegments. For more information, see Profiles API in iOS SDK and Profiles API in Android SDK.
Contact Technical Support at email@example.com to learn more.
Continuing Improvements: Error and Informational Messages
Improvements to the look and feel of TD Console continue. Error and informational messages are now color coded and easier to view.
Required Migration: Salesforce Data Connector
The Salesforce V2 data connector is available. The V2 connector is more efficient, supporting asynchronous data transfer, with minimal impact to current users.
The Salesforce V2 data connector replaces the current Salesforce data connector. In TD Console, the existing SFDC connector is now labelled "Salesforce Legacy" and the V2 data connector is labelled "Salesforce."
The legacy SFDC connector will be unavailable after July 31, 2019.
If you are currently using the Salesforce data connector, you must complete migration steps to ensure that your integration continues to work. You must move from the legacy Salesforce data connector to the Salesforce V2 data connector.
New users of the Salesforce data connector can start using the Salesforce V2 data connector immediately.
Contact Technical Support at firstname.lastname@example.org to learn more.
Query Engines: TD Hive 2019.1 Ready to Release
TD Hive 2019.1 is a major update to Hive-on-TD functionality, bringing compatibility with Apache Hive 2.3 to the Arm Treasure Data platform. Production release of TD Hive 2019.1 is expected this month.
As with our current production release, Hive 0.13, TD Hive 2019.1 will continue to generate MapReduce jobs. These jobs execute on the same cluster, using the same execution resources, as current Hive 0.13 jobs. Performance is similar, and Hive 2019.1 fixes all known outstanding bugs for greater reliability.
Future TD Hive releases will build on TD Hive 2019.1 to add a number of performance-oriented capabilities, such as:
- Capacity Scheduler, which ensures that no job is ultimately starved for resources
- the Tez runtime, which supersedes MapReduce and promises substantial performance improvement
- Vectorized implementations of many complex functions, to allow for vectorized execution across a query
- Full user-defined partitioning support in Hive, comparable to Presto
Most queries from Hive 0.13 will run in Hive 2019.1 unchanged. Some queries will need minor modifications to use ANSI SQL syntax because Hive 2019.1 is stricter.
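One common class of change in newer Hive versions is stricter handling of reserved keywords; identifiers that Hive 0.13 accepted bare may need backtick quoting in Hive 2.x. The table and column names below are hypothetical, chosen only to illustrate the pattern.

```sql
-- Accepted by Hive 0.13 (hypothetical table):
SELECT user, score FROM events;

-- Hive 2.x reserves more keywords (e.g. USER), so quote such identifiers:
SELECT `user`, score FROM events;
```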
NOTE: Hive 0.13 queries will be supported for an extended period. While we recommend the new Hive releases for future workloads, you have time to migrate your existing workloads, as needed by your business.