[Sqoop] Sqoop Import and Export

Copyright notice: This is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/SunnyYoona/article/details/53151019
1. Import Example
1.1 Log in to the database and view the tables
 
 
    xiaosi@Qunar:~$ mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 8
    Server version: 5.6.30-0ubuntu0.15.10.1-log (Ubuntu)

    Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use test;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> show tables;
    +-----------------+
    | Tables_in_test  |
    +-----------------+
    | employee        |
    | hotel_info      |
    +-----------------+


1.2 Import operation

We will use the employee table for the import.

 
  
    mysql> select * from employee;
    +--------+---------+-----------------+
    | name   | company | depart          |
    +--------+---------+-----------------+
    | yoona  | qunar   | 创新事业部      |
    | xiaosi | qunar   | 创新事业部      |
    | jim    | ali     | 淘宝            |
    | kom    | ali     | 淘宝            |
    | lucy   | baidu   | 搜索            |
    | jim    | ali     | 淘宝            |
    +--------+---------+-----------------+
The import command is quite simple:
 
  
    sqoop import --connect jdbc:mysql://localhost:3306/test --table employee --username root -password root -m 1

The command above imports the data of the employee table in the test database into HDFS. The output is as follows:

 
  
    16/11/13 16:37:35 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    16/11/13 16:37:35 INFO mapreduce.Job: Running job: job_local976138588_0001
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    16/11/13 16:37:35 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: Waiting for map tasks
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: Starting task: attempt_local976138588_0001_m_000000_0
    16/11/13 16:37:35 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
    16/11/13 16:37:35 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
    16/11/13 16:37:35 INFO db.DBInputFormat: Using read commited transaction isolation
    16/11/13 16:37:35 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
    16/11/13 16:37:35 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
    16/11/13 16:37:35 INFO db.DBRecordReader: Executing query: SELECT `name`, `company`, `depart` FROM `employee` AS `employee` WHERE ( 1=1 ) AND ( 1=1 )
    16/11/13 16:37:35 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
    16/11/13 16:37:35 INFO mapred.LocalJobRunner:
    16/11/13 16:37:35 INFO mapred.Task: Task:attempt_local976138588_0001_m_000000_0 is done. And is in the process of committing
    16/11/13 16:37:35 INFO mapred.LocalJobRunner:
    16/11/13 16:37:35 INFO mapred.Task: Task attempt_local976138588_0001_m_000000_0 is allowed to commit now
    16/11/13 16:37:35 INFO output.FileOutputCommitter: Saved output of task 'attempt_local976138588_0001_m_000000_0' to hdfs://localhost:9000/user/xiaosi/employee/_temporary/0/task_local976138588_0001_m_000000
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: map
    16/11/13 16:37:35 INFO mapred.Task: Task 'attempt_local976138588_0001_m_000000_0' done.
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: Finishing task: attempt_local976138588_0001_m_000000_0
    16/11/13 16:37:35 INFO mapred.LocalJobRunner: map task executor complete.
    16/11/13 16:37:36 INFO mapreduce.Job: Job job_local976138588_0001 running in uber mode : false
    16/11/13 16:37:36 INFO mapreduce.Job: map 100% reduce 0%
    16/11/13 16:37:36 INFO mapreduce.Job: Job job_local976138588_0001 completed successfully
    16/11/13 16:37:36 INFO mapreduce.Job: Counters: 20
        File System Counters
            FILE: Number of bytes read=22247770
            FILE: Number of bytes written=22733107
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=0
            HDFS: Number of bytes written=120
            HDFS: Number of read operations=4
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=3
        Map-Reduce Framework
            Map input records=6
            Map output records=6
            Input split bytes=87
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=0
            Total committed heap usage (bytes)=241696768
        File Input Format Counters
            Bytes Read=0
        File Output Format Counters
            Bytes Written=120
    16/11/13 16:37:36 INFO mapreduce.ImportJobBase: Transferred 120 bytes in 2.4312 seconds (49.3584 bytes/sec)
    16/11/13 16:37:36 INFO mapreduce.ImportJobBase: Retrieved 6 records.

Does this look familiar? It is the output log of a MapReduce job, which shows that Sqoop performs the import by running a MapReduce job, and one that has no reduce tasks. To verify that the import succeeded, list the HDFS directory with the following command:

 
  
    xiaosi@Qunar:/opt/hadoop-2.7.2/sbin$ hadoop fs -ls /user/xiaosi
    Found 2 items
    drwxr-xr-x   - xiaosi supergroup          0 2016-10-26 16:16 /user/xiaosi/data
    drwxr-xr-x   - xiaosi supergroup          0 2016-11-13 16:37 /user/xiaosi/employee
A new directory has appeared whose name is exactly the table name, employee. Listing that directory shows two files:
 
  
    xiaosi@Qunar:/opt/hadoop-2.7.2/sbin$ hadoop fs -ls /user/xiaosi/employee
    Found 2 items
    -rw-r--r--   1 xiaosi supergroup          0 2016-11-13 16:37 /user/xiaosi/employee/_SUCCESS
    -rw-r--r--   1 xiaosi supergroup        120 2016-11-13 16:37 /user/xiaosi/employee/part-m-00000

Here _SUCCESS is the marker file indicating that the job succeeded, and the actual output is the part-m-00000 file (a _logs file recording the job log may also be produced). Let's look at the contents of the output file:
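The file can be viewed with a plain hadoop fs -cat (a minimal example, assuming the output path /user/xiaosi/employee used above):

    hadoop fs -cat /user/xiaosi/employee/part-m-00000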

 
  
    yoona,qunar,创新事业部
    xiaosi,qunar,创新事业部
    jim,ali,淘宝
    kom,ali,淘宝
    lucy,baidu,搜索
    jim,ali,淘宝
The data file written by Sqoop is a CSV file (comma-separated). Also, if you look at the directory where the Sqoop command was executed, you will find an extra employee.java file, a Java source file generated automatically by Sqoop.

 
  
    xiaosi@Qunar:/opt/sqoop-1.4.6/bin$ ll
    总用量 116
    drwxr-xr-x 2 root root  4096 11月 13 16:36 ./
    drwxr-xr-x 9 root root  4096  4月 27  2015 ../
    -rwxr-xr-x 1 root root  6770  4月 27  2015 configure-sqoop*
    -rwxr-xr-x 1 root root  6533  4月 27  2015 configure-sqoop.cmd*
    -rw-r--r-- 1 root root 12543 11月 13 16:32 employee.java
    -rwxr-xr-x 1 root root   800  4月 27  2015 .gitignore*
    -rwxr-xr-x 1 root root  3133  4月 27  2015 sqoop*
    -rwxr-xr-x 1 root root  1055  4月 27  2015 sqoop.cmd*
    -rwxr-xr-x 1 root root   950  4月 27  2015 sqoop-codegen*
    -rwxr-xr-x 1 root root   960  4月 27  2015 sqoop-create-hive-table*
    -rwxr-xr-x 1 root root   947  4月 27  2015 sqoop-eval*
    -rwxr-xr-x 1 root root   949  4月 27  2015 sqoop-export*
    -rwxr-xr-x 1 root root   947  4月 27  2015 sqoop-help*
    -rwxr-xr-x 1 root root   949  4月 27  2015 sqoop-import*
    -rwxr-xr-x 1 root root   960  4月 27  2015 sqoop-import-all-tables*
    -rwxr-xr-x 1 root root   959  4月 27  2015 sqoop-import-mainframe*
    -rwxr-xr-x 1 root root   946  4月 27  2015 sqoop-job*
    -rwxr-xr-x 1 root root   957  4月 27  2015 sqoop-list-databases*
    -rwxr-xr-x 1 root root   954  4月 27  2015 sqoop-list-tables*
    -rwxr-xr-x 1 root root   948  4月 27  2015 sqoop-merge*
    -rwxr-xr-x 1 root root   952  4月 27  2015 sqoop-metastore*
    -rwxr-xr-x 1 root root   950  4月 27  2015 sqoop-version*
    -rwxr-xr-x 1 root root  3987  4月 27  2015 start-metastore.sh*
    -rwxr-xr-x 1 root root  1564  4月 27  2015 stop-metastore.sh*
Looking at the source file, the employee class implements the Writable interface, which indicates it is used for serialization and deserialization, and its fields cover all of the columns of the employee table, so an instance of this class can hold one record of that table.

 
  
    public class employee extends SqoopRecord implements DBWritable, Writable {
        private final int PROTOCOL_VERSION = 3;
        public int getClassFormatVersion() { return PROTOCOL_VERSION; }
        protected ResultSet __cur_result_set;
        private String name;
        public String get_name() {
            return name;
        }
        public void set_name(String name) {
            this.name = name;
        }
        public employee with_name(String name) {
            this.name = name;
            return this;
        }
        private String company;
        public String get_company() {
            return company;
        }
        public void set_company(String company) {
            this.company = company;
        }
        public employee with_company(String company) {
            this.company = company;
            return this;
        }
        private String depart;
        public String get_depart() {
            return depart;
        }
        public void set_depart(String depart) {
            this.depart = depart;
        }
        public employee with_depart(String depart) {
            this.depart = depart;
            return this;
        }
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (!(o instanceof employee)) {
                return false;
            }
            employee that = (employee) o;
            boolean equal = true;
            equal = equal && (this.name == null ? that.name == null : this.name.equals(that.name));
            equal = equal && (this.company == null ? that.company == null : this.company.equals(that.company));
            equal = equal && (this.depart == null ? that.depart == null : this.depart.equals(that.depart));
            return equal;
        }
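If this generated class is deleted or needs to be regenerated later, the sqoop-codegen tool listed in the bin directory above can recreate it without re-running the import. A minimal sketch, reusing the connection settings from this example:

    # regenerate employee.java (and its compiled jar) from the table definition
    sqoop codegen --connect jdbc:mysql://localhost:3306/test --table employee --username root -password root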

2. Import Process

From the example above we can see, roughly, that Sqoop performs the import through a MapReduce job: the job reads records from the table row by row and writes them into HDFS.

(1) First, Sqoop obtains the database metadata it needs over JDBC, such as the column names and data types of the table being imported.

(2) Second, the database data types (varchar, number, and so on) are mapped to Java data types (String, int, and so on). Based on this information, Sqoop generates a class with the same name as the table; its job is deserialization, and it holds one row of the table (the default mapping can be overridden, as the sketch after this list shows).

(3) Third, Sqoop launches the MapReduce job.

(4) Fourth, during the input phase, the job reads the table contents over JDBC, using the Sqoop-generated class to deserialize the records.

(5) Fifth, the records are written to HDFS; during the write, the same Sqoop-generated class is used for serialization.
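When the default type mapping is not what you want, it can be overridden per column with --map-column-java. A hedged sketch; the numeric id column here is hypothetical and only for illustration:

    # force a hypothetical numeric column named id to be generated as a Java Long
    sqoop import --connect jdbc:mysql://localhost:3306/test --table employee \
      --username root -password root -m 1 \
      --map-column-java id=Long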

A Sqoop import job is usually not performed by a single map task alone; that is, each task fetches part of the table's data. If the import were performed by a single map task, then in step four the job would execute the following SQL over JDBC:

 
 
    select col1, col2, ... from table;

That would fetch the entire table. When multiple map tasks are used, the table must be split horizontally, usually on its primary key. When Sqoop launches the MapReduce job, it first queries the minimum and maximum values of the split column over JDBC, and then divides the data among the tasks (the number of tasks is set with the -m option). In effect, in step four each task executes SQL of the form:

 
 
    select col1, col2, ... from table WHERE id >= 0 AND id < 50000;
    select col1, col2, ... from table WHERE id >= 50000 AND id < 100000;
    ...

When importing in parallel with Sqoop, the distribution of the split column's values has a large effect on performance; it is best when the values are evenly distributed. In the worst case, with heavily skewed data that all falls into a single split, performance is no better than a serial import. It is therefore worth sampling the split column before importing to understand how its data is distributed.
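A sketch of a parallel import that picks the split column explicitly (the numeric id column is hypothetical; the employee table in this article has no such column):

    # --split-by chooses the column used to partition the data; -m 4 starts four map tasks
    sqoop import --connect jdbc:mysql://localhost:3306/test --table employee \
      --username root -password root \
      --split-by id -m 4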

Sqoop offers fine-grained control over the import; you do not have to import all of a table's columns every time. Sqoop lets you specify which columns to import, add a WHERE clause to the query, or even supply a custom query SQL statement, in which you may use any function supported by the source database.
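A couple of hedged sketches of these options, reusing the employee table from this article (the column list, filter value, and target directory are only illustrative):

    # import only selected columns, filtered with a WHERE clause
    sqoop import --connect jdbc:mysql://localhost:3306/test --table employee \
      --username root -password root -m 1 \
      --columns "name,depart" --where "company = 'qunar'"

    # or supply a free-form query; Sqoop requires the $CONDITIONS placeholder and a --target-dir
    sqoop import --connect jdbc:mysql://localhost:3306/test \
      --username root -password root -m 1 \
      --query 'select name, company from employee where $CONDITIONS' \
      --target-dir /user/xiaosi/employee_query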

In the example at the beginning, the imported data was stored in HDFS. Before loading this data into Hive, the table must be created in Hive first, and Sqoop provides a command for that:

 
  
    sqoop create-hive-table --connect jdbc:mysql://localhost:3306/test --table employee --username root -password root --fields-terminated-by ','
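Alternatively, the table creation and the data load can be combined into one step with --hive-import. A minimal sketch, assuming Hive is installed and configured on the same machine:

    # import from MySQL straight into the Hive table employee
    sqoop import --connect jdbc:mysql://localhost:3306/test --table employee \
      --username root -password root -m 1 \
      --hive-import --hive-table employee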

3. Export Example

Compared with its import functionality, Sqoop's export functionality is used less frequently. It is generally used to export Hive analysis results to a relational database so that data analysts can inspect them or generate reports.

When exporting a Hive table to a database, a table to receive the data must first be created in the database. The Hive table to be exported is order_info:

 
  
    hive (test)> desc order_info;
    OK
    uid                     string
    order_time              string
    business                string
    Time taken: 0.096 seconds, Fetched: 3 row(s)
We create a table in MySQL to receive the data:
 
  
    mysql> create table order_info(id varchar(50), order_time varchar(20), business varchar(10));
    Query OK, 0 rows affected (0.09 sec)

Note

In Hive, string columns are simply of type string, but in a relational database the corresponding column might be varchar(10), varchar(20), and so on. These sizes have to be chosen by the user based on the data, which is why the receiving table must be created in advance.

Next, run the export operation with the following command:

 
  
    sqoop export --connect jdbc:mysql://localhost:3306/test --table order_info --export-dir /user/hive/warehouse/test.db/order_info --username root -password root -m 1 --fields-terminated-by '\t'
For this export command, the --connect, --table, and --export-dir options are required. --export-dir is the HDFS path of the table being exported, and the Hive table's field delimiter is passed to Sqoop with --fields-terminated-by. The command above exports the data of the order_info table in Hive's test database to MySQL. The output is as follows:
 
  
    16/11/13 19:21:43 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    16/11/13 19:21:43 INFO mapreduce.Job: Running job: job_local1384135708_0001
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: Waiting for map tasks
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: Starting task: attempt_local1384135708_0001_m_000000_0
    16/11/13 19:21:43 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
    16/11/13 19:21:43 INFO mapred.MapTask: Processing split: Paths:/user/hive/warehouse/test.db/order_info/order.txt:0+3785
    16/11/13 19:21:43 INFO Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
    16/11/13 19:21:43 INFO Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
    16/11/13 19:21:43 INFO Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
    16/11/13 19:21:43 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
    16/11/13 19:21:43 INFO mapred.LocalJobRunner:
    16/11/13 19:21:43 INFO mapred.Task: Task:attempt_local1384135708_0001_m_000000_0 is done. And is in the process of committing
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: map
    16/11/13 19:21:43 INFO mapred.Task: Task 'attempt_local1384135708_0001_m_000000_0' done.
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local1384135708_0001_m_000000_0
    16/11/13 19:21:43 INFO mapred.LocalJobRunner: map task executor complete.
    16/11/13 19:21:44 INFO mapreduce.Job: Job job_local1384135708_0001 running in uber mode : false
    16/11/13 19:21:44 INFO mapreduce.Job: map 100% reduce 0%
    16/11/13 19:21:44 INFO mapreduce.Job: Job job_local1384135708_0001 completed successfully
    16/11/13 19:21:44 INFO mapreduce.Job: Counters: 20
        File System Counters
            FILE: Number of bytes read=22247850
            FILE: Number of bytes written=22734115
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=3791
            HDFS: Number of bytes written=0
            HDFS: Number of read operations=12
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=0
        Map-Reduce Framework
            Map input records=110
            Map output records=110
            Input split bytes=151
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=0
            Total committed heap usage (bytes)=226492416
        File Input Format Counters
            Bytes Read=0
        File Output Format Counters
            Bytes Written=0
    16/11/13 19:21:44 INFO mapreduce.ExportJobBase: Transferred 3.7021 KB in 2.3262 seconds (1.5915 KB/sec)
    16/11/13 19:21:44 INFO mapreduce.ExportJobBase: Exported 110 records.

After the export completes, we can query the order_info table in MySQL:

 
  
    mysql> select * from order_info limit 5;
    +-----------------+------------+----------+
    | id              | order_time | business |
    +-----------------+------------+----------+
    | 358574046793404 | 2016-04-05 | flight   |
    | 358574046794733 | 2016-08-03 | hotel    |
    | 358574050631177 | 2016-05-08 | vacation |
    | 358574050634213 | 2015-04-28 | train    |
    | 358574050634692 | 2016-04-05 | tuan     |
    +-----------------+------------+----------+
    5 rows in set (0.00 sec)
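If the same export is run again, plain INSERT statements would collide with or duplicate the existing rows. Sqoop's export can instead update rows that already exist; a hedged sketch, assuming the id column uniquely identifies a row in order_info:

    # match on id; update existing rows and insert the ones that are missing
    sqoop export --connect jdbc:mysql://localhost:3306/test --table order_info \
      --export-dir /user/hive/warehouse/test.db/order_info \
      --username root -password root -m 1 --fields-terminated-by '\t' \
      --update-key id --update-mode allowinsert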



4. Export Process

Having understood the import process, the export process is even easier to follow.


Again, Sqoop generates a Java class based on the structure of the target table in the database (steps one and two); the class handles serialization and deserialization. Sqoop then launches a MapReduce job (step three), in which the generated class is used to read the data from HDFS (step four) and produce batches of INSERT statements, each of which inserts multiple rows into the MySQL target table (step five). Reading is parallel and writing is parallel as well, but write throughput is ultimately limited by the write performance of the target database.
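How many rows go into each INSERT statement, and how many statements are committed per transaction, can be tuned with Hadoop properties passed right after the tool name; a hedged sketch with illustrative values:

    # batch 100 rows per INSERT and 10 statements per transaction
    sqoop export -Dsqoop.export.records.per.statement=100 \
      -Dsqoop.export.statements.per.transaction=10 \
      --connect jdbc:mysql://localhost:3306/test --table order_info \
      --export-dir /user/hive/warehouse/test.db/order_info \
      --username root -password root -m 1 --fields-terminated-by '\t'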


Source: 《Hadoop海量数据处理 技术详解与项目实战》 (Hadoop Massive Data Processing: In-Depth Techniques and Project Practice)

