Had some time, so tried Mutyala's suggestion and it works :). — Spann

Here is a quick and dirty way to do it. First create the target table in Hive:

hive> create table test_orc_sqoop (name varchar(20))

Then run the Sqoop import against it with the HCatalog options:

~]$ sqoop import --connect jdbc:mysql:///test --username test --password hadoop --table test --hcatalog-database default --hcatalog-table test_orc_sqoop --hcatalog-storage-stanza "stored as orcfile" -m 1

Output:

Warning: /usr/hdp/2.4.2.0-258/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/hdp/2.4.2.0-258/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
16/05/26 05:56:03 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.4.2.0-258
16/05/26 05:56:03 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/05/26 05:56:04 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/05/26 05:56:04 INFO tool.CodeGenTool: Beginning code generation
16/05/26 05:56:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
16/05/26 05:56:05 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.4.2.0-258/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/64f04ad998cebf113bf8ec1efdbf6b95/test.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/05/26 05:56:10 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/64f04ad998cebf113bf8ec1efdbf6b95/test.jar
16/05/26 05:56:10 WARN manager.MySQLManager: It looks like you are importing from mysql.
16/05/26 05:56:10 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
16/05/26 05:56:10 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
16/05/26 05:56:10 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
16/05/26 05:56:10 INFO mapreduce.ImportJobBase: Beginning import of test
16/05/26 05:56:11 INFO hcat.SqoopHCatUtilities: Configuring HCatalog for import job
16/05/26 05:56:11 INFO hcat.SqoopHCatUtilities: Configuring HCatalog specific details for job
16/05/26 05:56:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test` AS t LIMIT 1
16/05/26 05:56:11 INFO hcat.SqoopHCatUtilities: Database column names projected :
16/05/26 05:56:11 INFO hcat.SqoopHCatUtilities: Database column name - info map :
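One thing the log itself flags: passing the password on the command line is insecure, and Sqoop suggests -P instead. A sketch of the same import using -P (interactive prompt), assuming the same connection details as above; the exact JDBC URL and credentials are of course environment-specific:

```shell
# Same import as above, but prompt for the MySQL password at runtime
# instead of exposing it in the shell history / process list.
sqoop import \
  --connect jdbc:mysql:///test \
  --username test \
  -P \
  --table test \
  --hcatalog-database default \
  --hcatalog-table test_orc_sqoop \
  --hcatalog-storage-stanza "stored as orcfile" \
  -m 1
```

For non-interactive jobs, Sqoop also supports --password-file pointing at a permission-restricted file, which avoids both the prompt and the cleartext argument.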