
oracle - How to find optimal number of mappers when running Sqoop import and export?

Problem description:

I'm using Sqoop version 1.4.2 and Oracle database.

When running a Sqoop command, for example:

./sqoop import \
  --fs <name node> \
  --jt <job tracker> \
  --connect <JDBC string> \
  --username <user> --password <password> \
  --table <table> --split-by <cool column> \
  --target-dir <where> \
  --verbose --m 2

With --m we can specify how many parallel tasks we want Sqoop to run (these tasks may also be accessing the database at the same time).

The same option is available for ./sqoop export <...>.

Is there some heuristic (probably based on the size of the data) that helps guess the optimal number of tasks to use?

Thank you!

Answer:

This is taken from Apache Sqoop Cookbook by O'Reilly Media, and seems to be the most logical answer.

The optimal number of mappers depends on many variables: you need to take into account your database type, the hardware that is used for your database server, and the impact to other requests that your database needs to serve. There is no optimal number of mappers that works for all scenarios. Instead, you’re encouraged to experiment to find the optimal degree of parallelism for your environment and use case. It’s a good idea to start with a small number of mappers, slowly ramping up, rather than to start with a large number of mappers, working your way down.
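The "start small and ramp up" experiment can be sketched as a simple loop over candidate mapper counts, timing each run. All connection values below (host, credentials, table, column names) are placeholders, not real endpoints; the `echo` keeps the sketch runnable without a cluster, and you would replace it with the real `sqoop` binary to actually run the imports:

```shell
#!/bin/sh
# Try increasing mapper counts and time each import to find the sweet spot.
# Placeholder connection values; swap 'echo' for the real sqoop binary.
for m in 1 2 4 8; do
  echo "Trying --m $m"
  time echo ./sqoop import \
      --connect "jdbc:oracle:thin:@//dbhost:1521/ORCL" \
      --username scott --password tiger \
      --table EMPLOYEES --split-by EMPNO \
      --target-dir "/user/hadoop/employees_m$m" \
      --m "$m"
done
```

Stop ramping up once the run time stops improving, or once the database server starts feeling the load from the concurrent connections.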

Answer:

In "Hadoop: The Definitive Guide," the authors explain that when setting the maximum number of map/reduce tasks on each TaskTracker, you should consider the processor and its cores. I would apply the same logic here: look at how many processes your processor(s) can run (counting Hyper-Threading and cores) and set --m to that value minus 1, leaving one slot open for other tasks that may pop up during the export. But do this only if you have a large dataset and want to get the export done in a timely manner.
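That "hardware threads minus one" starting point can be computed directly on the client machine. This is a minimal sketch, assuming a Linux box with GNU coreutils (`nproc` reports logical CPUs, including Hyper-Threading):

```shell
#!/bin/sh
# Derive a starting --m value from the logical CPU count,
# leaving one thread free for other work (minimum of 1).
threads=$(nproc)
m=$(( threads > 1 ? threads - 1 : 1 ))
echo "Suggested --m value: $m"
```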

If you don't have a large dataset, remember that your output will consist of --m files, so if you are exporting a 100-row table, you may want to set --m to 1 to keep all the data localized in one file.
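To make the file-count relationship concrete: an import run with --m N writes N output files in the target directory, named part-m-00000 through part-m-0000(N-1). This sketch just prints the names a run with --m 4 would produce, for illustration:

```shell
#!/bin/sh
# With --m N, a Sqoop import writes N files: part-m-00000 .. part-m-0000(N-1).
# Illustrate the naming for N=4.
N=4
i=0
while [ "$i" -lt "$N" ]; do
  printf 'part-m-%05d\n' "$i"
  i=$((i + 1))
done
```

With --m 1 there is a single part-m-00000 file, which is usually what you want for tiny tables.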
