You can use td-pyspark to bridge the results of data manipulations in Databricks with your data in Arm Treasure Data.
Databricks builds on top of Apache Spark, providing an easy-to-use interface for accessing Spark. PySpark is a Python API for Spark. Treasure Data's td-pyspark is a Python library, built on td-spark, that provides a convenient way to use PySpark with Treasure Data.
To follow the steps in this example, you must have the following items:
- Treasure Data API key
- td-spark feature enabled
Configuring your Databricks Environment
You create a cluster, install the td-pyspark libraries, and configure a notebook for your connection code.
Create a Cluster on Databricks
Click the Cluster icon.
Click Create Cluster.
Provide a cluster name, select Spark 2.4.3 or later as the Databricks Runtime Version, and select 3 as the Python Version.
Install the td-pyspark Libraries
Access the Treasure Data Apache Spark Driver Release Notes. From the article, click the links to download the td-pyspark package. When the download completes, install the package on your cluster.
Specify your TD API Key and Site
In the cluster's Spark configuration, specify your Treasure Data API key and site.
An example of the format is as follows; substitute your actual values:
spark.td.apikey (Your TD API KEY)
spark.td.site (Your site: us, jp, eu01)
Restart Cluster and Begin Work in Databricks
Restart your Spark cluster. Create a notebook and add a script similar to the following code:
%python
import td_pyspark
from pyspark.sql import *
SAMPLE_TABLE = "sample_datasets.www_access"
td = td_pyspark.TDSparkContext(spark)
# Read the last 10 years of the table and display the result
df = td.table(SAMPLE_TABLE).within("-10y").df()
df.show()
TDSparkContext is the entry point for accessing td_pyspark's functionality. As shown in the preceding code sample, you create a TDSparkContext by passing your SparkSession (spark) to it:
from td_pyspark import TDSparkContext
td = TDSparkContext(spark)
When you run the notebook, the contents of the table are displayed. If you see the table data, your connection is working.
Interacting with Treasure Data from Databricks
In Databricks, you can run select and insert queries against Treasure Data and read data back from Treasure Data. You can also create and delete databases and tables.
In Databricks, you can use the following commands:
Read Tables as DataFrames
To read a table, use
df = td.table("sample_datasets.www_access").df()
df.show()
Change the Database Used in Treasure Data
To change the context database, use
td.use("sample_datasets")  # Accesses sample_datasets.www_access
df = td.table("www_access").df()
By calling .df(), your table data is read as a Spark DataFrame. Usage of the DataFrame is the same as in PySpark. See also the PySpark DataFrame documentation.
df = td.table("www_access").df()
Submit Presto Queries
If your Spark cluster is small, reading all of the data as an in-memory DataFrame might be difficult. In this case, you can use Presto, a distributed SQL query engine, to reduce the amount of data processed with PySpark.
q = td.presto("select code, * from sample_datasets.www_access")
q.show()
q = td.presto("select code, count(*) from sample_datasets.www_access group by 1")
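The result of td.presto is a regular Spark DataFrame, so you can aggregate on the Presto side and then continue processing the much smaller result with PySpark. A minimal sketch; the column alias cnt is illustrative:
q = td.presto("select code, count(*) as cnt from sample_datasets.www_access group by 1")
# Continue working with the aggregated result as a DataFrame
q.filter(q.cnt > 10).show()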
Create or Drop a Database
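A minimal sketch, assuming td-pyspark's create_database_if_not_exists and drop_database_if_exists helpers on TDSparkContext; the database name mydb is illustrative:
# Create the database only if it does not already exist
td.create_database_if_not_exists("mydb")
# Drop the database only if it exists
td.drop_database_if_exists("mydb")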
Upload DataFrames to Treasure Data
To save your local DataFrames as a table, you have two options (see the sketch after this list):
- Insert the records in the input DataFrame to the target table
- Create or replace the target table with the content of the input DataFrame
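A minimal sketch of both options, assuming td-pyspark's insert_into and create_or_replace helpers; the target table name mydb.mytable is illustrative:
# Option 1: append the records of df to the target table
td.insert_into(df, "mydb.mytable")
# Option 2: create the target table, or replace its contents with df
td.create_or_replace(df, "mydb.mytable")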
Checking Databricks in Treasure Data
You can use TD Toolbelt to check your databases from the command line, for example with td db:list. Alternatively, if you have TD Console, you can check your databases and queries there. Read about Database and Table Management.