This content has moved to https://go.treasuredata.com/tddocs_reference
About Hive in Treasure Data
The Treasure Data Hive service (TD Hive), based on Apache Hive, provides batch processing of data stored in the Treasure Data data lake.
TD eliminates the need to run your own Hadoop clusters for Hive processing; instead, Arm Treasure Data operates the compute clusters that run Hive jobs.
The CDP application uses Hive jobs for some of its internal operations, and you can also run your own Hive jobs. You can submit SELECT or DML queries written in Hive’s query language (HiveQL) from TD Console, through API calls, with the TD Toolbelt, or from TD workflows. The service queues and executes each query and returns its results. You can also configure Result Output so that results are delivered to the destinations you specify.
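As a sketch, a Hive job submitted through any of these channels is a HiveQL statement. The database and table below (`sample_datasets.www_access`, a sample dataset bundled with TD accounts) and the date range are illustrative assumptions:

```sql
-- Count requests per HTTP method in the bundled sample dataset.
-- TD_TIME_RANGE is a TD-provided UDF that restricts the scan to a
-- time window over the table's `time` column.
SELECT method, COUNT(1) AS requests
FROM sample_datasets.www_access
WHERE TD_TIME_RANGE(time, '2014-10-01', '2014-10-07', 'PDT')
GROUP BY method
```

The same statement can be pasted into TD Console, passed to the TD Toolbelt, or embedded in a TD workflow task.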
TD Hive supports the flexible schema capabilities of the TD platform: a table’s schema can be inferred from the data loaded into it or set explicitly.
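For example (a sketch; the table name `events` and its `user_id` and `action` columns are hypothetical), a query reads the same way whether those columns were declared explicitly or inferred when the data was loaded:

```sql
-- `time` is a long (Unix timestamp) column present on every TD table;
-- the other columns may have been inferred at load time or set
-- explicitly in the table schema.
-- TD_TIME_FORMAT is a TD-provided UDF that renders the timestamp.
SELECT user_id,
       action,
       TD_TIME_FORMAT(time, 'yyyy-MM-dd HH:mm:ss', 'UTC') AS event_time
FROM events
LIMIT 10
```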
Hive 0.13 and Hive 2 Open Source Documentation
The standard Hive 0.13 HiveQL and Hive 2.x ANSI SQL are documented together in the open source Apache Hive project. For full information, see https://hive.apache.org/.