
Structured data in Spark SQL


1. Connecting to MySQL

First, copy mysql-connector-java-5.1.39.jar into the jars directory of your Spark installation, so that the MySQL JDBC driver is on the classpath when the shell starts.
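
Alternatively, instead of copying the jar you can supply it when launching spark-shell. This is not part of the original steps; the path below is a placeholder to adjust for your environment:

./bin/spark-shell --driver-class-path /path/to/mysql-connector-java-5.1.39.jar --jars /path/to/mysql-connector-java-5.1.39.jar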


scala> import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SQLContext

scala> val sqlContext=new SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@3a649f9a

scala>  sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:mysql://localhost:3306/metastore",
     |  "driver" -> "com.mysql.jdbc.Driver", "dbtable" -> "DBS", "user" -> "root", "password" -> "root")).load().show
+-----+--------------------+--------------------+-------+----------+----------+
|DB_ID|                DESC|     DB_LOCATION_URI|   NAME|OWNER_NAME|OWNER_TYPE|
+-----+--------------------+--------------------+-------+----------+----------+
|    1|Default Hive data...|hdfs://localhost:...|default|    public|      ROLE|
|    2|                null|hdfs://localhost:...|    aaa|      root|      USER|
|    6|                null|hdfs://localhost:...| userdb|      root|      USER|
+-----+--------------------+--------------------+-------+----------+----------+
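
The deprecation warning above appears because SQLContext has been superseded by SparkSession in Spark 2.x. As a rough equivalent, the same read can be expressed through the spark value that spark-shell already provides; this is a minimal sketch using the same placeholder URL, table, and credentials as the article, not code from the original:

// Read the Hive metastore's DBS table over JDBC via SparkSession
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/metastore")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "DBS")
  .option("user", "root")
  .option("password", "root")
  .load()

// Inspect the inferred schema and the rows, as in the transcript above
jdbcDF.printSchema()
jdbcDF.show()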

 


