Preparing the JSON data
[{"name":"zhangsan","age":18},{"name":"lisi","age":15}]
pom.xml
In a multi-module (aggregated) Maven project, put this dependency in the parent POM; placing it only in a child module can cause problems.
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.49</version>
</dependency>
Code
When writing, if the target table does not exist, Spark creates it automatically.
package com.jdbc

import java.util.Properties

import org.apache.spark.sql.{SaveMode, SparkSession}

object JDBCWrite {
  val url = "jdbc:mysql://zjj101:3306/ry_vue"
  val user = "root"
  val pw = "root"

  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder()
      .master("local[*]")
      .appName("JDBCWrite")
      .getOrCreate()
    val df = spark.read.json("E:\\ZJJ_SparkSQL\\demo01\\src\\main\\resources\\users.json")

    // ---------------- Write ----------------
    // Write the DataFrame to MySQL via JDBC.
    // If the table does not exist, it is created automatically;
    // if it already exists, choose a save mode accordingly.
    df.write
      .format("jdbc")
      .option("url", url)        // JDBC connection URL
      .option("user", user)      // username
      .option("password", pw)    // password
      .option("dbtable", "user") // target table
      // .mode("append")         // append rows to the existing table
      // .mode(SaveMode.Append)  // same as above, using the enum
      .mode(SaveMode.Overwrite)  // overwrite: replace the existing table
      .save()

    // ---------------- Read ----------------
    // Read the data back from the table.
    // This uses the other way of connecting over JDBC: a Properties object.
    val props = new Properties()
    props.put("user", "root")
    props.put("password", "root")
    val df2 = spark.read.jdbc("jdbc:mysql://zjj101:3306/ry_vue", "user", props)
    df2.show() // print the table contents
    spark.close()
  }
}
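The write side has the same shorthand as the read side: `DataFrameWriter.jdbc(url, table, properties)` mirrors `spark.read.jdbc`. A minimal sketch, assuming the same MySQL host (`zjj101:3306/ry_vue`), credentials, and JSON file as above are available:

```scala
import java.util.Properties

import org.apache.spark.sql.{SaveMode, SparkSession}

object JDBCWriteAlt {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("JDBCWriteAlt")
      .getOrCreate()

    val df = spark.read.json("E:\\ZJJ_SparkSQL\\demo01\\src\\main\\resources\\users.json")

    // Connection credentials, same style as the read example above.
    val props = new Properties()
    props.put("user", "root")
    props.put("password", "root")

    // df.write.jdbc(url, table, props) is the mirror image of
    // spark.read.jdbc(url, table, props).
    df.write
      .mode(SaveMode.Append) // append this time instead of overwriting
      .jdbc("jdbc:mysql://zjj101:3306/ry_vue", "user", props)

    spark.close()
  }
}
```

Either form works; the `format("jdbc").option(...)` style is more verbose but makes each connection setting explicit.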
Output
+---+--------+
|age|    name|
+---+--------+
| 18|zhangsan|
| 15|    lisi|
+---+--------+
