Informatica Big Data Management Interview Questions


1. What is the Hadoop framework?
Hadoop is an open-source framework written in Java by the Apache Software Foundation. It is used to write software applications that need to process vast amounts of data (it can handle multiple terabytes). It works in parallel on large clusters, which can have thousands of computers (nodes), and it processes data in a reliable, fault-tolerant manner.
2. On what concept does the Hadoop framework work?
It works on MapReduce, a programming model devised by Google.
3. What is MapReduce?
MapReduce is a programming model for processing huge amounts of data quickly. As the name suggests, the work is divided into a Map phase and a Reduce phase.
A MapReduce job usually splits the input data set into independent chunks (one big data set into multiple small data sets).
Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks).
The framework then sorts the outputs of the maps.
Reduce task: takes the sorted map output as its input and produces the final result.
Your business logic is written in the map task and the reduce task, as sketched below.
Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
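A minimal skeleton of where that business logic lives, using the org.apache.hadoop.mapreduce API; the class names and the generic key/value types below are illustrative placeholders, not part of the original answer.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map task: transforms each input record into intermediate <key, value> pairs.
class ExampleMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // per-record business logic goes here; emit pairs with context.write(...)
    }
}

// Reduce task: receives all values grouped under one intermediate key
// (the framework has already sorted the map outputs) and produces the final result.
class ExampleReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // aggregation logic over all values for this key goes here
    }
}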
4. What are compute and storage nodes?
Compute node: the computer or machine where your actual business logic is executed.
Storage node: the computer or machine where the file system that stores the data to be processed resides.
In most cases, the compute node and the storage node are the same machine.
5. How does the master-slave architecture work in Hadoop?
The MapReduce framework consists of a single master JobTracker and multiple slaves; each cluster node has one TaskTracker.
The master is responsible for scheduling the jobs’ component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.
6. What does a Hadoop application look like, and what are its basic components?
Minimally, a Hadoop application has the following components:
Input location of data
Output location of processed data.
A map task.
A reduce task.
Job configuration
The Hadoop job client then submits the job (jar/executable, etc.) and configuration to the JobTracker, which then assumes responsibility for distributing the software/configuration to the slaves, scheduling tasks, monitoring them, and providing status and diagnostic information to the job client.
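A minimal driver sketch showing those components being wired together with the Job API; WordCountMapper and WordCountReducer are the hypothetical classes sketched under question 9 below, and the input/output paths come from command-line arguments.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();               // job configuration
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);               // the map task
        job.setCombinerClass(WordCountReducer.class);            // optional mini-reduce (see question 29)
        job.setReducerClass(WordCountReducer.class);             // the reduce task
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // input location of data
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output location of processed data

        System.exit(job.waitForCompletion(true) ? 0 : 1);        // submit the job and wait
    }
}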
7. What are the input and output data formats of the Hadoop framework?
The MapReduce framework operates exclusively on <key, value> pairs; that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types. See the flow mentioned below:
(input) <k1, v1> -> map -> <k2, v2> -> combine/sorting -> <k2, v2> -> reduce -> <k3, v3> (output)
8. What are the restrictions on the key and value classes?
The key and value classes have to be serializable by the framework. To make them serializable, Hadoop provides the Writable interface. And, just as the key of a Java Map must be comparable, the key class has to implement one more interface, WritableComparable.
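A hedged sketch of a custom key class that satisfies both requirements; the class and field names here are made up for illustration.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class YearMonthKey implements WritableComparable<YearMonthKey> {
    private int year;
    private int month;

    public YearMonthKey() {}                                   // no-arg constructor needed for deserialization

    public YearMonthKey(int year, int month) {
        this.year = year;
        this.month = month;
    }

    @Override
    public void write(DataOutput out) throws IOException {     // serialization (Writable)
        out.writeInt(year);
        out.writeInt(month);
    }

    @Override
    public void readFields(DataInput in) throws IOException {  // deserialization (Writable)
        year = in.readInt();
        month = in.readInt();
    }

    @Override
    public int compareTo(YearMonthKey other) {                 // sort order (Comparable)
        int cmp = Integer.compare(year, other.year);
        return cmp != 0 ? cmp : Integer.compare(month, other.month);
    }

    @Override
    public int hashCode() {                                    // used by the default partitioner (see question 28)
        return 31 * year + month;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof YearMonthKey)) return false;
        YearMonthKey k = (YearMonthKey) o;
        return year == k.year && month == k.month;
    }
}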
9. Explain the WordCount implementation in the Hadoop framework.
We will count the words in all the input files. The flow is as below:
Input
Assume there are two files, each containing one sentence:
Hello World Hello World (in file 1)
Hello World Hello World (in file 2)
Mapper: there is one mapper per file.
For the given sample input, the first map outputs:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>
The second map output:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>
Combiner/Sorting (this is done for each individual map)
So the output looks like this:
The output of the first map:
< Hello, 2>
< World, 2>
The output of the second map:
< Hello, 2>
< World, 2>
Reducer:
It sums up the above outputs and generates the final result as below:
< Hello, 4>
< World, 4>
Output
The final output would look like
Hello 4 times
World 4 times
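A sketch of the Mapper and Reducer behind the walkthrough above, using the org.apache.hadoop.mapreduce API; the class names are the hypothetical ones referenced in the driver sketch under question 6, and each class would normally live in its own source file.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Emits <word, 1> for every word in its split, e.g. <Hello, 1> <World, 1> <Hello, 1> <World, 1>.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
        }
    }
}

// Sums the counts for each word. The same class also serves as the combiner,
// which is why each map's output above collapses to <Hello, 2> <World, 2>.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}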
10. Which classes need to be extended to create a Mapper and a Reducer for Hadoop?
org.apache.hadoop.mapreduce.Mapper
org.apache.hadoop.mapreduce.Reducer
11. What does a Mapper do?
Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.
12. What is an InputSplit in MapReduce?
An InputSplit is a logical representation of a unit (A chunk) of input work for a map task; e.g., a filename and a byte range within that file to process or a row set in a text file.
13. What is an InputFormat?
The InputFormat is responsible for enumerating (itemizing) the InputSplits and for producing a RecordReader, which turns those logical work units into actual physical input records.
14. Where do you specify the Mapper Implementation?
Generally, the mapper implementation is specified in the Job itself (for example, via Job.setMapperClass()).
15. How is the Mapper instantiated in a running job?
The Mapper itself is instantiated in the running job and will be passed a MapContext object which it can use to configure itself.
16. Which methods are in the Mapper class?
The Mapper contains the run() method, which calls its setup() method once, then calls the map() method for each input key/value pair, and finally calls its cleanup() method. You can override all of these methods in your code.
17. What happens if you don't override the Mapper methods and keep them as they are?
If you do not override any methods (leaving even map() as-is), the Mapper acts as the identity function, emitting each input record as a separate output record.
18. What is the use of the Context object?
The Context object allows the mapper to interact with the rest of the Hadoop system. It includes configuration data for the job, as well as interfaces which allow it to emit output.
19. How can you add arbitrary key-value pairs in your mapper?
You can set arbitrary (key, value) pairs of configuration data in your Job, e.g. with Job.getConfiguration().set("myKey", "myVal"), and then retrieve this data in your mapper with Context.getConfiguration().get("myKey"). This kind of setup is typically done in the Mapper's setup() method.
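A sketch of that pass-through, assuming a hypothetical ConfiguredMapper class; "myKey" and "myVal" are the illustrative names from the answer above.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfiguredMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private String myVal;

    @Override
    protected void setup(Context context) {
        // runs once per task, before any map() calls; read the job-level setting back
        myVal = context.getConfiguration().get("myKey", "someDefault");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // myVal is now available to every map() call
    }
}

// Driver side, before submitting the job:
//   job.getConfiguration().set("myKey", "myVal");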
20. How does the Mapper's run() method work?
Mapper.run() calls map(KeyInType, ValInType, Context) for each key/value pair in the InputSplit for that task, roughly as sketched below.
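The shape of that method, paraphrased (and slightly simplified) from the Hadoop source; you would rarely override run() yourself.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

class RunShapeMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);                       // once per task
        try {
            while (context.nextKeyValue()) {  // iterate over the task's InputSplit
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } finally {
            cleanup(context);                 // once per task, even on failure
        }
    }
}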
21. Which object can be used to get the progress of a particular job?
Context
22. What is the next step after the Mapper or MapTask?
The output of the Mapper is sorted, and partitions are created for that output. The number of partitions depends on the number of reducers.
23. Name the most common InputFormats defined in Hadoop. Which one is the default?
The following three are the most common InputFormats defined in Hadoop:
TextInputFormat
KeyValueInputFormat
SequenceFileInputFormat
TextInputFormat is the Hadoop default.
24. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat: reads lines of text files and provides the byte offset of the line as the key and the actual line as the value to the Mapper.
KeyValueInputFormat: reads text files and parses each line into a key and a value. Everything up to the first tab character is sent as the key to the Mapper, and the remainder of the line is sent as the value.
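A sketch of switching between the two in the driver; in the current mapreduce API the key/value variant is the KeyValueTextInputFormat class, and the separator property name shown is, to the best of my knowledge, the one used by recent Hadoop releases.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class InputFormatChoice {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Key/value flavor: everything before the separator (tab by default)
        // becomes the key, the rest of the line becomes the value.
        conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");

        Job job = Job.getInstance(conf, "kv-input-example");
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        // If nothing is set, the default is TextInputFormat:
        // key = byte offset of the line (LongWritable), value = the line itself (Text).
    }
}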
25. What is an InputSplit in Hadoop?
When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper for processing. This is called an InputSplit.
26. How is the splitting of files invoked in the Hadoop framework?
It is invoked by the Hadoop framework, which calls the getSplits() method of the InputFormat class (such as FileInputFormat) configured by the user.
Consider this case scenario: in the MapReduce system,
the HDFS block size is 64 MB,
the input format is FileInputFormat,
and we have 3 files of size 64 KB, 65 MB, and 127 MB.
How many input splits will be made by the Hadoop framework?
Hadoop will make 5 splits, as follows:
1 split for the 64 KB file (smaller than one block)
2 splits for the 65 MB file (65 MB is just over one 64 MB block, so it spans two blocks)
2 splits for the 127 MB file (127 MB spans two 64 MB blocks)
27. What is the purpose of the RecordReader in Hadoop?
The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.
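A sketch of that relationship: this is roughly what the built-in TextInputFormat does, extending FileInputFormat (which computes the InputSplits) and returning a LineRecordReader that turns each split into (offset, line) records. The class name SimpleLineInputFormat is made up for illustration.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class SimpleLineInputFormat extends FileInputFormat<LongWritable, Text> {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        // FileInputFormat has already enumerated the InputSplits;
        // the RecordReader does the actual reading of (key, value) pairs.
        return new LineRecordReader();
    }
}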
28. After the map phase finishes, the Hadoop framework does partitioning, shuffle, and sort. Explain what happens in this phase.
Partitioning: Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.
Shuffle: After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.
Sort: Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.
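For the partitioning step, this is roughly how Hadoop's default HashPartitioner decides which reducer gets a key (paraphrased from the Hadoop source): the same key always hashes to the same partition, regardless of which mapper produced it.

import org.apache.hadoop.mapreduce.Partitioner;

public class SimpleHashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // mask off the sign bit so the result is non-negative,
        // then take it modulo the number of reducers
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}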
29. What is a Combiner?
The Combiner is a “mini-reduce” process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
30. What is the JobTracker?
The JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.
