PySpark read and write to Phoenix table
A simple question: is it possible to load a PySpark data frame into a Phoenix table? The answer is YES. Let's see how we can do that.

To push data from a PySpark data frame into a Phoenix table, we need an existing Phoenix table created through Phoenix, not through the HBase shell. Tables created from the HBase shell often don't appear in Phoenix until you create a new table or view in Phoenix and point it at the existing HBase table. To avoid that kind of discrepancy, I would suggest loading data into HBase through Phoenix if you are willing to use Phoenix for querying.

If the table does not exist, use the command below to create it in Phoenix:

CREATE TABLE IF NOT EXISTS <TABLE_NAME> (
    ROWKEY <DATA_TYPE> NOT NULL PRIMARY KEY,
    <COL_FAMILY>.<COL_NAME> <DATA_TYPE>,
    <COL_FAMILY>.<COL_NAME> <DATA_TYPE>,
    ...   -- up to n number of columns
);

NOTE: If you're creating a Phoenix table to point at an exist...
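Once the table exists, the data frame can be written to it through the Phoenix Spark connector. Below is a minimal sketch, assuming the phoenix-spark connector jar (and Phoenix client jar) is on the Spark classpath; the table name EMPLOYEES, the columns ID/NAME/AGE, and the ZooKeeper quorum zk-host:2181 are placeholders for illustration, not values from this article.

from pyspark.sql import SparkSession

# The connector jar must be supplied, e.g.:
#   spark-submit --jars /path/to/phoenix-spark-<version>.jar ...
spark = SparkSession.builder.appName("phoenix-write").getOrCreate()

# Hypothetical data frame; column names must match the Phoenix column
# names (ID is the primary key / row key, NAME and AGE sit in a family).
df = spark.createDataFrame(
    [(1, "alice", 30), (2, "bob", 25)],
    ["ID", "NAME", "AGE"],
)

# The phoenix-spark data source upserts rows and requires mode
# "overwrite"; here that means "upsert into the table", not
# "truncate and replace".
(df.write
   .format("org.apache.phoenix.spark")
   .option("table", "EMPLOYEES")      # hypothetical Phoenix table
   .option("zkUrl", "zk-host:2181")   # your ZooKeeper quorum
   .mode("overwrite")
   .save())

# Reading the table back into a data frame works the same way:
read_df = (spark.read
           .format("org.apache.phoenix.spark")
           .option("table", "EMPLOYEES")
           .option("zkUrl", "zk-host:2181")
           .load())
read_df.show()

If this runs cleanly, a SELECT on the table from the Phoenix sqlline shell should show the upserted rows.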