This content has moved to https://go.treasuredata.com/tddocs_reference
A number of Hive features familiar from other contexts are not relevant or not supported in Treasure Data.
Hadoop/HDFS data partitioning
In a standard Hadoop/Hive cluster, users provide explicit information about partitioning and bucketing data, and about the directory hierarchies used for partition placement. Hive features such as static and dynamic partitioning depend on this information being supplied with each query. Our platform manages these storage details transparently.
Treasure Data does support time-based and user-defined partitioning.
Other standard Hive partitioning mechanisms, however, are not relevant and are not supported.
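As an illustration, a query can take advantage of TD's time-based partitioning by filtering on the time column with the TD_TIME_RANGE UDF, rather than declaring partitions explicitly (table name and date values here are hypothetical):

```sql
-- Prune time partitions by filtering on the time column with TD_TIME_RANGE.
-- No PARTITION clauses or directory layout are involved; the platform
-- handles partition placement. Table name and dates are illustrative.
SELECT COUNT(1)
FROM www_access
WHERE TD_TIME_RANGE(time, '2024-01-01', '2024-02-01', 'UTC')
```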
DDL Operations Performed External to Hive
Creating schema objects using Hive DDL is limited because TD manages the creation of tables and the definition of schema outside the Hive query language. For example, TD infers schema from ingested data. As a convenience, some Hive DDL features are supported, but other components in the platform may also manipulate schema.
For more information, see Supported Hive DDL.
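To make the contrast concrete, simple table creation may work as a convenience, while standard Hive partition DDL has no equivalent here (the names below are illustrative; consult the Supported Hive DDL documentation before relying on any particular statement):

```sql
-- Typically available as a convenience (illustrative table name):
CREATE TABLE IF NOT EXISTS events (user_id bigint, action string);

-- Standard Hive partition DDL like the following is NOT relevant in TD,
-- because partition placement is managed by the platform:
-- ALTER TABLE events ADD PARTITION (dt='2024-01-01');
```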
Hive Data Types and TD Native Data Types
While the full range of Hive data types can be used in any expression, when inserting values into TD tables you may have to CAST the Hive data type to one supported for storage in TD.
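For example, a DECIMAL value computed within a query might be cast to DOUBLE before insertion into a TD table (table and column names are hypothetical):

```sql
-- Use Hive-only types freely in expressions, then CAST to a
-- TD-storable type before inserting. Names are illustrative.
INSERT INTO TABLE daily_totals
SELECT
  TD_TIME_FORMAT(time, 'yyyy-MM-dd') AS day,
  CAST(SUM(CAST(price AS DECIMAL(10,2))) AS DOUBLE) AS total
FROM orders
GROUP BY TD_TIME_FORMAT(time, 'yyyy-MM-dd')
```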
Hive Metastore Not Needed
Treasure Data's platform manages table schema and partition metadata in its own metastore rather than in a standard Hive Metastore. This supports our flexible table schema capabilities. It is a limitation only if you use external data lake tools that expect to read from a standard Hive Metastore.
Security and Authentication
Hive on our platform follows the security and authentication model of the platform itself; Hive-specific authorization mechanisms do not apply.