Are you preparing for a Sqoop interview? If yes, we have material to help you win your ideal job. Sqoop is a command line tool for transferring data between Hadoop and relational databases; it is used to import data into HDFS and export it back out. Looking for a job can be cumbersome and tiring, especially when you are not sure where to search, how to apply, and how to prepare well for Sqoop job interviews. To get rid of this dilemma, Wisdomjobs has framed these Sqoop job interview questions and answers to make your interview preparation easier. If you have expertise in SQL, MySQL, Hadoop, Oracle, HDFS and database concepts, then multiple job opportunities are available for you.
Question 1. What Is The Role Of Jdbc Driver In A Sqoop Set Up?
Answer :
To connect to different relational databases Sqoop needs a connector. Almost every DB vendor makes this connector available as a JDBC driver which is specific to that DB. So Sqoop needs the JDBC driver of each of the databases it needs to interact with.
Question 2. Is Jdbc Driver Enough To Connect Sqoop To The Databases?
Answer :
No. Sqoop needs both JDBC and connector to connect to a database.
Question 3. When To Use Target-dir And When To Use Warehouse-dir While Importing Data?
Answer :
To specify a particular directory in HDFS use --target-dir, but to specify the parent directory of all the sqoop jobs use --warehouse-dir. In the latter case, under the parent directory sqoop will create a directory with the same name as the table.
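For example, a minimal sketch assuming a hypothetical MySQL database db1 and table cities:
# files land directly in /data/cities
sqoop import --connect jdbc:mysql://host/db1 --table cities --target-dir /data/cities
# files land in /data/warehouse/cities, a subdirectory named after the table
sqoop import --connect jdbc:mysql://host/db1 --table cities --warehouse-dir /data/warehouse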
Question 4. How Can You Import Only A Subset Of Rows From A Table?
Answer :
By using the WHERE clause in the sqoop import statement we can import only a subset of rows.
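For example, assuming the same hypothetical cities table, the --where value is pushed into the generated query:
sqoop import --connect jdbc:mysql://host/db1 --table cities --where "country = 'USA'" --target-dir /data/cities_usa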
Question 5. How Can We Import A Subset Of Rows From A Table Without Using The Where Clause?
Answer :
We can run a filtering query on the database and save the result to a temporary table in the database. Then use the sqoop import command on that temporary table, without using the where clause.
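A sketch of this approach with hypothetical names (the first step runs on the database itself, not through Sqoop):
# on the MySQL side: CREATE TABLE cities_usa AS SELECT * FROM cities WHERE country = 'USA';
# then import the temporary table as a whole
sqoop import --connect jdbc:mysql://host/db1 --table cities_usa --target-dir /data/cities_usa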
Question 6. What Is The Advantage Of Using The Password-file Option Over The -P Option In A Sqoop Script?
Answer :
The --password-file option can be used inside a sqoop script, while the -P option reads the password from standard input, preventing automation.
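A sketch of --password-file usage, assuming a hypothetical HDFS path:
# write the password without a trailing newline and lock down permissions
echo -n "secret" > sqoop.password
hadoop fs -put sqoop.password /user/sqoop/sqoop.password
hadoop fs -chmod 400 /user/sqoop/sqoop.password
sqoop import --connect jdbc:mysql://host/db1 --table cities --username sqoop --password-file /user/sqoop/sqoop.password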
Question 7. What Is The Default Extension Of The Files Produced By A Sqoop Import Using The --compress Parameter?
Answer :
.gz
Question 8. What Is The Significance Of Using Compress-codec Parameter?
Answer :
To get the output files of a sqoop import in formats other than .gz, like .bz2, we use the --compression-codec parameter.
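For example, assuming a hypothetical table, BZip2 output can be requested as follows:
sqoop import --connect jdbc:mysql://host/db1 --table cities --compress --compression-codec org.apache.hadoop.io.compress.BZip2Codec --target-dir /data/cities_bz2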
Question 9. What Is A Disadvantage Of Using Direct Parameter For Faster Data Load By Sqoop?
Answer :
The native utilities used by databases to support faster load do not work for binary data formats like SequenceFile.
Question 10. How Can You Control The Number Of Mappers Used By The Sqoop Command?
Answer :
The parameter --num-mappers is used to control the number of mappers executed by a sqoop command. We should start with a small number of map tasks and then gradually scale up, as choosing a high number of mappers initially may slow down performance on the database side.
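For instance, a cautious starting point with a hypothetical table might be:
sqoop import --connect jdbc:mysql://host/db1 --table cities --num-mappers 4 --target-dir /data/cities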
Question 11. How Can You Avoid Importing Tables One By One When Importing A Large Number Of Tables From A Database?
Answer :
Using the command
sqoop import-all-tables --connect jdbc:mysql://host/db1 --username sqoop --password sqoop --exclude-tables table1,table2
This will import all the tables except the ones mentioned in the --exclude-tables clause.
Question 12. When The Source Data Keeps Getting Updated Frequently, What Is The Approach To Keep It In Sync With The Data In Hdfs Imported By Sqoop?
Answer :
sqoop can have 2 approaches:
a - Use the --incremental parameter with the append option, where a check column's value is compared against --last-value and only rows with a larger value are imported as new rows.
b - Use the --incremental parameter with the lastmodified option, where a date/timestamp column in the source is checked for records updated after the last import.
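A sketch of the append mode with hypothetical column names (the lastmodified variant is shown under Question 35):
sqoop import --connect jdbc:mysql://host/db1 --table cities --incremental append --check-column id --last-value 100 --target-dir /data/cities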
Question 13. What Is The Usefulness Of The Options File In Sqoop?
Answer :
The options file is used in sqoop to specify command line values in a file and reuse them in sqoop commands.
For example, the --connect parameter's value and the --username parameter's value can be stored in a file and used again and again with different sqoop commands.
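A minimal sketch, assuming a hypothetical options file named import.txt containing one option or value per line:
import
--connect
jdbc:mysql://host/db1
--username
sqoop
The file can then be reused across commands:
sqoop --options-file import.txt --table cities --target-dir /data/cities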
Question 14. Is It Possible To Add A Parameter While Running A Saved Job?
Answer :
Yes, we can add an argument to a saved job at runtime by using the --exec option:
sqoop job --exec jobname -- --newparameter
Question 15. How Do You Specify The Column On Which Sqoop Should Split The Data Into Parallel Import Tasks?
Answer :
Using the --split-by parameter we specify the column name based on which sqoop will divide the data to be imported into multiple chunks to be run in parallel.
Question 16. How Can You Choose The Name Of The Mapreduce Job Created By A Sqoop Import Command?
Answer :
By using the --mapreduce-job-name parameter. Below is an example of the command.
sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --query 'SELECT normcities.id,
           countries.country,
           normcities.city
           FROM normcities
           JOIN countries USING(country_id)
           WHERE $CONDITIONS' \
  --split-by id \
  --target-dir cities \
  --mapreduce-job-name normcities
Question 17. Sqoop Takes A Long Time To Retrieve The Minimum And Maximum Values Of The Column Mentioned In --split-by Before Starting The Data Transfer. How Can We Make It Efficient?
Answer :
We can use the --boundary-query parameter in which we specify the min and max values for the column based on which the split into multiple mapreduce tasks will happen. This makes it faster, as the query inside the --boundary-query parameter is executed first and the job is ready with the information on how many mapreduce tasks to create before executing the main query.
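A sketch with a hypothetical indexed column id:
sqoop import --connect jdbc:mysql://host/db1 --table cities --split-by id --boundary-query "SELECT min(id), max(id) FROM cities"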
Question 18. What Is The Difference Between The Parameters sqoop.export.records.per.statement And sqoop.export.statements.per.transaction?
Answer :
The parameter “sqoop.export.records.per.statement” specifies the number of records that will be used in each insert statement.
But the parameter “sqoop.export.statements.per.transaction” specifies how many insert statements will be executed before the transaction is committed.
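Both are passed as Hadoop properties; the values below are illustrative:
sqoop export -Dsqoop.export.records.per.statement=100 -Dsqoop.export.statements.per.transaction=10 --connect jdbc:mysql://host/db1 --table cities --export-dir /data/cities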
Question 19. How Will You Implement All-or-nothing Load Using Sqoop?
Answer :
Using the --staging-table option we first load the data into a staging table and then load it to the final target table only if the staging load is successful.
Question 20. How Do You Clear The Data In A Staging Table Before Loading It By Sqoop?
Answer :
By specifying the --clear-staging-table option we can clear the staging table before it is loaded. This can be repeated until we get proper data in staging.
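A sketch covering both staging options, assuming a pre-created staging table cities_stage with the same structure as the target:
sqoop export --connect jdbc:mysql://host/db1 --table cities --staging-table cities_stage --clear-staging-table --export-dir /data/cities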
Question 21. How Will You Update The Rows That Are Already Exported?
Answer :
The parameter --update-key can be used to update existing rows. It takes a comma-separated list of columns which uniquely identify a row. All of these columns are used in the WHERE clause of the generated UPDATE query. All other table columns will be used in the SET part of the query.
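For example, assuming id uniquely identifies a row in a hypothetical cities table:
sqoop export --connect jdbc:mysql://host/db1 --table cities --update-key id --export-dir /data/cities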
Question 22. How Can You Sync An Exported Table With Hdfs Data In Which Some Rows Are Deleted?
Answer :
Truncate the target table and load it again.
Question 23. How Can You Export Only A Subset Of Columns To A Relational Table Using Sqoop?
Answer :
By using the --columns parameter in which we mention the required column names as a comma separated list of values.
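A minimal sketch with hypothetical column names:
sqoop export --connect jdbc:mysql://host/db1 --table cities --columns "id,city" --export-dir /data/cities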
Question 24. How Can You Handle Rows With Null Values During A Sqoop Export?
Answer :
By using the --input-null-string parameter we can specify which string in the HDFS data should be treated as null, and that will allow the row to be inserted into the target table.
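A sketch, assuming \N represents nulls in the HDFS files (as in Hive's default format):
sqoop export --connect jdbc:mysql://host/db1 --table cities --export-dir /data/cities --input-null-string '\\N' --input-null-non-string '\\N'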
Question 25. How Can You Schedule A Sqoop Job Using Oozie?
Answer :
Oozie has in-built sqoop actions inside which we can mention the sqoop commands to be executed.
Question 26. Sqoop Imported A Table Successfully To Hbase But The Number Of Rows Is Fewer Than Expected. What Can Be The Cause?
Answer :
Some of the imported records might have null values in all the columns. As HBase does not allow all null values in a row, those rows get dropped.
Question 27. Give A Sqoop Command To Show All The Databases In A Mysql Server?
Answer :
$ sqoop list-databases --connect jdbc:mysql://database.example.com/
Question 28. What Do You Mean By Free Form Import In Sqoop?
Answer :
Sqoop can import data from a relational database using any SQL query rather than only using table and column name parameters.
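A minimal free form import sketch with a hypothetical query; the literal $CONDITIONS placeholder is mandatory so that Sqoop can inject its split conditions:
sqoop import --connect jdbc:mysql://host/db1 --query 'SELECT id, city FROM cities WHERE id < 1000 AND $CONDITIONS' --split-by id --target-dir /data/cities_subset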
Question 29. How Can You Force Sqoop To Execute A Free Form Sql Query Only Once And Import The Rows Serially?
Answer :
By using the -m 1 clause in the import command, sqoop creates only one mapreduce task which will import the rows sequentially.
Question 30. In A Sqoop Import Command You Have Mentioned To Run 8 Parallel Mapreduce Tasks But Sqoop Runs Only 4. What Can Be The Reason?
Answer :
The MapReduce cluster is configured to run 4 parallel tasks. So the number of parallel tasks requested in the sqoop command must be less than or equal to what the MapReduce cluster allows.
Question 31. What Is The Importance Of --split-by Clause In Running Parallel Import Tasks In Sqoop?
Answer :
The --split-by clause mentions the column name based on whose value the data will be divided into groups of records. These groups of records will be read in parallel by the mapreduce tasks.
Question 32. What Does This Sqoop Command Achieve?
Answer :
$ sqoop import --connect <connect-str> --table foo --target-dir /dest
It imports data from the table foo in the database into files under the HDFS directory /dest.
Question 33. What Happens When A Table Is Imported Into A Hdfs Directory Which Already Exists Using The --append Parameter?
Answer :
Using the --append argument, Sqoop will import data to a temporary directory and then rename the files into the normal target directory in a manner that does not conflict with existing filenames in that directory.
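For example, re-running an import into an existing directory (hypothetical names):
sqoop import --connect jdbc:mysql://host/db1 --table foo --target-dir /dest --append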
Question 34. How Can You Control The Mapping Between Sql Data Types And Java Types?
Answer :
By using the --map-column-java property we can configure the mapping between SQL data types and Java types.
Below is an example: $ sqoop import ... --map-column-java id=String,value=Integer
Question 35. How Can You Import Only The Updated Rows From A Table Into Hdfs, Assuming The Source Has A Last Modified Timestamp For Each Row?
Answer :
By using the lastmodified mode. Rows where the check column holds a timestamp more recent than the timestamp specified with --last-value are imported.
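A sketch with a hypothetical timestamp column:
sqoop import --connect jdbc:mysql://host/db1 --table cities --incremental lastmodified --check-column last_update_date --last-value "2017-03-31 00:00:00" --target-dir /data/cities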
Question 36. What Are The Two File Formats Supported By Sqoop For Import?
Answer :
Delimited text and Sequence Files.
Question 37. Give A Sqoop Command To Import The Columns employee_id, first_name And last_name From The Mysql Table EMPLOYEES?
Answer :
$ sqoop import --connect jdbc:mysql://host/dbname --table EMPLOYEES \
  --columns "employee_id,first_name,last_name"
Question 38. Give A Sqoop Command To Run Only 8 Mapreduce Tasks In Parallel?
Answer :
$ sqoop import --connect jdbc:mysql://host/dbname --table table_name \
  -m 8
Question 39. What Does The Following Query Do?
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES --where "start_date > '2017-03-31'"
Answer :
It imports the employees who have joined after 31-Mar-2017.
Question 40. Give A Sqoop Command To Import All The Records From The EMPLOYEES Table Divided Into Groups Of Records By The Values In The Column dept_id?
Answer :
$ sqoop import --connect jdbc:mysql://db.foo.com/corp --table EMPLOYEES \
  --split-by dept_id
Question 41. What Does The Following Sqoop Command Achieve?
$ sqoop import --connect jdbc:mysql://db.foo.com/somedb --table sometable --where "id > 100000" --target-dir /incremental_dataset --append
Answer :
It performs an incremental import of new data, after having already imported the first 100,000 rows of the table.
Question 42. Give A Sqoop Command To Import Data From All Tables In The Mysql Db Db1?
Answer :
sqoop import-all-tables --connect jdbc:mysql://host/DB1
Question 43. Give A Sqoop Command To Export Data From The Hdfs Directory /Dir1 To The Database DB1 By Calling The Stored Procedure proc1?
Answer :
$ sqoop export --connect jdbc:mysql://host/DB1 --call proc1 \
  --export-dir /Dir1
Question 44. What Is A Sqoop Metastore?
Answer :
It is a tool using which Sqoop hosts a shared metadata repository. Multiple users and/or remote users can define and execute saved jobs (created with sqoop job) defined in this metastore.
Clients must be configured to connect to the metastore in sqoop-site.xml or with the --meta-connect argument.
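A sketch of a client pointing at a shared metastore, assuming a hypothetical metastore host:
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore.example.com:16000/sqoop --list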
Question 45. What Is The Purpose Of Sqoop-merge?
Answer :
The merge tool combines two datasets, where entries in one dataset overwrite entries of an older dataset, preserving only the newest version of the records between both datasets.
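A sketch of a merge run, assuming hypothetical directories and the record class and jar generated by the earlier import's codegen step:
sqoop merge --new-data /data/cities_new --onto /data/cities_old --target-dir /data/cities_merged --jar-file cities.jar --class-name cities --merge-key id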
Question 46. How Can You See The List Of Stored Jobs In Sqoop Metastore?
Answer :
sqoop job --list
Question 47. Give The Sqoop Command To See The Content Of The Job Named Myjob?
Answer :
sqoop job --show myjob
Question 48. Which Database Does The Sqoop Metastore Run On?
Answer :
Running sqoop-metastore launches a shared HSQLDB database instance on the current machine.
Question 49. Where Can The Metastore Database Be Hosted?
Answer :
The metastore database can be hosted anywhere within or outside of the Hadoop cluster.