$ bin/sqoop help create-hive-table
usage: sqoop create-hive-table [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect string
   --connection-manager <class-name>            Specify connection manager class name
   --connection-param-file <properties-file>    Specify connection parameters file
   --driver <class-name>                        Manually specify JDBC driver class to use
   --hadoop-home <hdir>                         Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication password
   --password-alias <password-alias>            Credential provider password alias
   --password-file <password-file>              Set authentication password file path
   --relaxed-isolation                          Use read-uncommitted isolation for imports
   --skip-dist-cache                            Skip copying jars to distributed cache
   --username <username>                        Set authentication username
   --verbose                                    Print more information while working

Hive arguments:
   --create-hive-table                          Fail if the target hive table exists
   --hive-database <database-name>              Sets the database name to use when importing to hive
   --hive-delims-replacement <arg>              Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
   --hive-drop-import-delims                    Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
   --hive-home <dir>                            Override $HIVE_HOME
   --hive-overwrite                             Overwrite existing data in the Hive table
   --hive-partition-key <partition-key>         Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>     Sets the partition value to use when importing to hive
   --hive-table <table-name>                    Sets the table name to use when importing to hive
   --map-column-hive <arg>                      Override mapping for specific column to hive types.
   --table <table-name>                         The db table to read the definition from

Output line formatting arguments:
   --enclosed-by <char>                         Sets a required field enclosing character
   --escaped-by <char>                          Sets the escape character
   --fields-terminated-by <char>                Sets the field separator character
   --lines-terminated-by <char>                 Sets the end-of-line character
   --mysql-delimiters                           Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
   --optionally-enclosed-by <char>              Sets a field enclosing character

Generic Hadoop command-line arguments:
(must precede any tool-specific arguments)
Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|jobtracker:port>                     specify a job tracker
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect and --table

Copying a table structure

$ bin/sqoop create-hive-table \
--connect jdbc:oracle:thin:@//192.168.1.38:1521/CMASPROD \
--username mes_bc \
--password-file 38oracle.pwd \
--mysql-delimiters \
--table H_LOG_LOTBASE
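
Before running the command, the file named by --password-file must already exist and be readable only by you (Sqoop typically reads it from HDFS); afterwards you can confirm that the Hive table was created with the copied schema. A minimal sketch, assuming the file name `38oracle.pwd` from the example above and that the table lands in the default Hive database; paths and the password value are illustrative, adjust them to your cluster:

```shell
# Write the password without a trailing newline and lock down permissions.
# (A trailing newline would become part of the password Sqoop sends.)
echo -n 'your_password' > 38oracle.pwd
chmod 400 38oracle.pwd

# Upload it to HDFS so Sqoop jobs can read it (path is an assumption).
hdfs dfs -put 38oracle.pwd /user/$USER/38oracle.pwd

# After create-hive-table finishes, inspect the copied table definition.
hive -e 'DESCRIBE H_LOG_LOTBASE;'
```

Note that create-hive-table only creates an empty table matching the source definition; a separate `sqoop import --hive-import` run is needed to load the data.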