These Ab Initio interview questions have been designed to acquaint you with the nature of questions you may encounter during an interview on Ab Initio. In my experience, good interviewers rarely plan to ask any particular question; interviews normally start with a basic concept of the subject and continue based on further discussion and your answers. We are going to cover the top 100 Ab Initio interview questions along with their detailed answers, including scenario-based questions, questions for freshers, and questions and answers for experienced candidates.
EME stands for Enterprise Meta>Environment, GDE for Graphical Development Environment, and the Co>Operating System can be thought of as the Ab Initio server. The relationship between the three is as follows: the Co>Operating System is the execution server. It is installed on a particular operating system platform, which is called the native OS. The EME is a repository, much like the repository in Informatica; it holds the metadata, transformations, database configuration files, and source and target information. The GDE is the end-user environment where we develop graphs (the equivalent of mappings in Informatica): the designer uses the GDE to design graphs and saves them to the EME or to a sandbox. The GDE sits on the user side, whereas the EME sits on the server side.
Processing of data delivers a large number of benefits. Users can separate out the factors that matter to them, and the approach makes it easy to derive structured data from a totally unstructured format. Processing is also useful for eliminating the bugs that are often associated with raw data and would otherwise cause problems at a later stage. For these reasons, data processing has wide application across a number of tasks.
Processing is basically a procedure that converts data from a useless form into a useful one without a great deal of effort. The details vary depending on factors such as the size of the data and its format. A sequence of operations is generally carried out to perform this task, and depending on the type of data, this sequence can be automatic or manual. Because most of the devices that perform this task today are PCs, the automatic approach is more popular than ever before. Users can obtain the output in forms such as tables, vectors, graphs, and charts, which is something business owners can readily take advantage of.
Once the data is collected, the next important task is to enter it into the relevant machine or system. Gone are the days when storage depended on paper; in the present time, data volumes are very large and entry needs to be performed reliably. The digital approach is a good option for this, as it lets users perform the task easily and without compromising on anything. A large set of operations then needs to be performed for meaningful analysis. In many cases conversion also matters a great deal, and users are always free to choose the output that best meets their expectations.
Data often needs to be processed continuously while it is being used at the same time; this is known as the data processing cycle. The cycle can provide results quickly or take extra time depending on the type, size, and nature of the data. This growing complexity creates a need for methods that are more reliable and advanced than existing approaches. The data cycle ensures that complexity is avoided to the extent possible, and without much extra effort.
The biggest ability one can have in this domain is the ability to rely on the data, i.e. on the information it carries. Communication also matters a great deal in accomplishing important tasks such as representing the information: there are many departments in an organization, and good communication ensures things remain sound and reliable for everyone.
The first thing that matters is defining the objective of the task and then engaging the team in it. This provides a solid direction for accomplishing the task, and it is especially important when working on a dataset that is completely unique or fresh. The next thing that needs attention is effective data modeling, which includes finding missing values and validating the data. The last step is to track the results.
Scientific data processing means processing that involves a great amount of computation, i.e. arithmetic operations: a limited amount of data is provided as input, and bulk data appears at the output. Commercial data processing, on the other hand, is different: the output is limited compared to the input data, and the computational operations are limited as well.
40) Differences Between Ab-Initio and Informatica?
Ans:
Informatica and Ab Initio both support parallelism, but Informatica supports only one type of parallelism while Ab Initio supports three types (component, data, and pipeline parallelism).
Ab Initio does not have a built-in scheduler the way Informatica does; you need to schedule graphs through scripts or run them manually.
Ab Initio supports reading the same file with different record structures, which is not possible in Informatica, and Ab Initio is generally more user friendly than Informatica.
Informatica is an engine-based ETL tool: the power of this tool is in its transformation engine, and the code it generates after development cannot be seen or modified.
Ab Initio is a code-based ETL tool: it generates ksh or bat scripts, which can be modified to achieve goals that cannot be met through the ETL tool itself.
Initial ramp-up time with Ab Initio is quick compared to Informatica; when it comes to standardization and tuning, both probably fall into the same bucket.
Ab Initio does not need a dedicated administrator; a Unix or NT admin will suffice, whereas Informatica needs a dedicated administrator.
With Ab Initio you can read data with multiple delimiters in a given record, whereas Informatica forces all fields to be delimited by one standard delimiter.
Error handling: in Ab Initio you can attach error and reject files to each transformation and capture and analyze the message and the data separately. Informatica has one huge log, which is very inefficient when working on a large process with numerous points of failure.
41) What is the difference between rollup and scan?
Ans:
Rollup generates one summary record per key group; it cannot generate cumulative summary records. To produce a running (cumulative) summary record for every input record, we use Scan.
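To make the contrast concrete, here is a minimal sketch in plain Python (not Ab Initio DML); the record fields and totals are purely illustrative:

```python
# Rollup: one summary record per key group.
# Scan: one cumulative record per input record.
from itertools import groupby

records = [
    {"cust": "A", "amount": 10},
    {"cust": "A", "amount": 20},
    {"cust": "B", "amount": 5},
]

def rollup(recs, key):
    """One output record per key group, like the Rollup component."""
    out = []
    for k, grp in groupby(sorted(recs, key=lambda r: r[key]), key=lambda r: r[key]):
        out.append({key: k, "total": sum(r["amount"] for r in grp)})
    return out

def scan(recs, key):
    """One output record per input record, carrying a running total, like Scan."""
    out, running = [], {}
    for r in sorted(recs, key=lambda r: r[key]):
        running[r[key]] = running.get(r[key], 0) + r["amount"]
        out.append({key: r[key], "running_total": running[r[key]]})
    return out

print(rollup(records, "cust"))  # [{'cust': 'A', 'total': 30}, {'cust': 'B', 'total': 5}]
print(scan(records, "cust"))    # running totals: 10, 30, 5
```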
42) Why do we go for Ab Initio?
Ans:
Ab Initio is chosen for its performance and scalability on very large data volumes. It supports three types of parallelism (component, data, and pipeline parallelism), can read the same file with multiple record structures, generates code (ksh or bat) that can be inspected and modified when the tool itself cannot express a requirement, and offers a comparatively quick ramp-up through the user-friendly GDE.
43) What is the difference between partitioning with key and round robin?
Ans:
PARTITION BY KEY: In this, we have to specify the key based on which the partition will occur. Records with the same key value always land on the same partition, so the balance of the resulting data depends on the distribution of the keys: heavily repeated keys can produce skewed partitions. It is used for key-dependent parallelism, for example before a keyed Rollup or Join.
PARTITION BY ROUND ROBIN: In this, the records are partitioned sequentially, distributing data evenly in block-size chunks across the output partitions. It is not key based and results in well-balanced data, especially with a block size of 1. It is used for record-independent parallelism.
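A rough Python illustration of the two strategies (the toy hash, field names, and partition counts are made up; Ab Initio's actual components operate on flows, not in-memory lists):

```python
# Partition by key: same key -> same partition; balance depends on key skew.
# Round robin: records dealt out in turn; balanced regardless of content.
def partition_by_key(records, key, n_partitions):
    parts = [[] for _ in range(n_partitions)]
    for r in records:
        h = sum(ord(c) for c in str(r[key]))  # toy deterministic hash
        parts[h % n_partitions].append(r)
    return parts

def partition_round_robin(records, n_partitions):
    parts = [[] for _ in range(n_partitions)]
    for i, r in enumerate(records):
        parts[i % n_partitions].append(r)
    return parts

records = [{"cust": c, "amount": a} for c, a in
           [("A", 10), ("A", 20), ("A", 30), ("B", 5), ("C", 7), ("C", 9)]]
print([len(p) for p in partition_by_key(records, "cust", 3)])  # [1, 2, 3] -- skewed
print([len(p) for p in partition_round_robin(records, 3)])     # [2, 2, 2] -- balanced
```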
44) How do you create a surrogate key using Ab Initio?
Ans:
A key is a field or set of fields that uniquely identifies a record in a file or table.
A natural key is a key that is meaningful in some business or real-world sense. For example, a social security number for a person, or a serial number for a piece of equipment, is a natural key.
A surrogate key is a field that is added to a record, either to replace the natural key or in addition to it, and has no business meaning. Surrogate keys are frequently added to records when populating a data warehouse, to help isolate the records in the warehouse from changes to the natural keys by outside processes. In Ab Initio, a surrogate key is typically generated with the next_in_sequence() DML function, combined with the partition number (this_partition()) when the graph runs in parallel so that keys remain unique across partitions, or with the Assign Keys component.
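As a rough illustration of the parallel-safe pattern, here is a plain-Python sketch; the function names are hypothetical stand-ins for what next_in_sequence() and this_partition() provide in DML:

```python
# Each partition emits keys partition_no, partition_no + n, partition_no + 2n, ...
# so the key streams from different partitions never collide.
def make_key_generator(partition_no, n_partitions, start=1):
    seq = start
    def next_key():
        nonlocal seq
        key = partition_no + (seq - 1) * n_partitions
        seq += 1
        return key
    return next_key

gen0 = make_key_generator(partition_no=0, n_partitions=4)
gen1 = make_key_generator(partition_no=1, n_partitions=4)
print([gen0() for _ in range(3)])  # [0, 4, 8]
print([gen1() for _ in range(3)])  # [1, 5, 9]
```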
45) What are the most commonly used components in Ab Initio graphs?
Ans:
The most commonly used components include Input File and Output File, Reformat, Join, Rollup, Scan, Sort, Dedup Sorted, Filter by Expression, Replicate, Gather, Merge, Lookup File, and the partition components such as Partition by Key and Partition by Round-robin.
46) How do we handle DML that changes dynamically?
Ans:
There are many ways to handle DML that changes dynamically within a single file.
Among the suitable methods are using a conditional DML, or calling the vector functionality while calling the DMLs.
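As an illustration of the conditional-DML idea, here is a hedged Python sketch; the record types and layouts are hypothetical, and in a real graph the layout selection would be expressed in DML rather than Python:

```python
# Pick a record layout at run time from a discriminator field in the data.
import csv, io

LAYOUTS = {
    "01": ["rec_type", "cust_id", "name"],           # header-style record
    "02": ["rec_type", "cust_id", "amount", "date"], # detail-style record
}

def parse(line):
    """Choose the field list from the record-type prefix, then parse."""
    values = next(csv.reader(io.StringIO(line)))
    fields = LAYOUTS[values[0]]
    return dict(zip(fields, values))

print(parse("01,C100,Alice"))
print(parse("02,C100,250,2024-01-31"))
```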
47) What is meant by limit and ramp in Ab Initio? In which situations are they used?
Ans:
Limit and ramp are the values used to set the reject tolerance for a particular graph. They are one of the options for a component's reject-threshold property; when this option is enabled, both a limit and a ramp value must be supplied.
The graph stops execution when the number of rejected records exceeds the following threshold:
limit + (ramp × number of records processed so far)
The limit parameter contains an integer that represents an absolute number of reject events. The ramp parameter contains a real number that represents a tolerated rate of reject events relative to the number of records processed. The default value of ramp is 0.0.
Typical limit and ramp settings:
limit = 0, ramp = 0.0 — abort on the first reject
limit = 50, ramp = 0.0 — tolerate up to 50 rejects regardless of volume
limit = 0, ramp = 0.01 — tolerate roughly 1 reject per 100 records processed
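The threshold test itself is simple arithmetic; here is a small Python sketch of it (the parameter names are illustrative, not Ab Initio's API):

```python
# Abort once rejects exceed limit + ramp * records_processed.
def should_abort(rejects, processed, limit, ramp):
    return rejects > limit + ramp * processed

# limit=50, ramp=0.0 -> tolerate at most 50 rejects regardless of volume
print(should_abort(rejects=51, processed=10_000, limit=50, ramp=0.0))  # True
# limit=0, ramp=0.01 -> tolerate roughly 1 reject per 100 records
print(should_abort(rejects=5, processed=1_000, limit=0, ramp=0.01))    # False (threshold 10)
```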
48) What are data mapping and data modeling?
Ans:
Data mapping deals with the transformation of the extracted data at the FIELD level, i.e. the transformation from a source field to a target field is specified by the mapping defined on the target field. Data mapping is specified during the cleansing of the data to be loaded. Data modeling, by contrast, is the design of the structure of the data itself: the entities, fields, and relationships that the mapped data will be loaded into.
For example, a source field CUST_NM defined as string(30) may need to be loaded into a target field customer_name defined as varchar(50).
Then we can have a mapping like: trim the leading and trailing spaces from CUST_NM and move the result to customer_name.
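A small Python sketch of field-level mapping during cleansing; all field names and rules here are invented for illustration:

```python
# Each target field gets a rule defined over the source record.
from datetime import datetime

source = {"CUST_NM": "  alice smith ", "JOIN_DT": "31-01-2024"}

mapping = {
    # target field   : rule applied to the source record
    "customer_name": lambda s: s["CUST_NM"].strip().title(),
    "join_date":     lambda s: datetime.strptime(s["JOIN_DT"], "%d-%m-%Y").date().isoformat(),
}

target = {field: rule(source) for field, rule in mapping.items()}
print(target)  # {'customer_name': 'Alice Smith', 'join_date': '2024-01-31'}
```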
49) Can you explain the performance and scalability of the Co>Operating System?
Ans:
The Co>Operating System was designed from the ground up to achieve maximum performance and scalability. Every aspect of the Co>Operating System has been optimized to get maximum performance from your hardware. And you don’t need “cloud” technology because the Co>Operating System naturally distributes processing across farms of servers.
50) Can you explain the Co>Operating System’s processing model?
Ans:
The Co>Operating System is a distributed peer-to-peer processing system. It must be installed on all the servers that will be part of running an application. Each of these servers may be running a different operating system (Unix, Linux, Windows, or z/OS).
51) How does the Co>Operating System integrate with legacy code?
Ans:
While Ab Initio enables users to build end-to-end applications entirely within the Graphical Development Environment and run them entirely within the Co>Operating System, users often have existing applications or 3rd party products that run fine and are not worth re-implementing.
Ab Initio makes it easy to reuse those existing applications, whether they were coded in C, C++, Cobol, Java, shell scripts, or whatever. In fact, the Co>Operating System makes it possible to integrate those applications into environments they were not originally designed for.
Legacy codes are integrated into Ab Initio applications by turning them into components that behave just like all other Ab Initio components.
52) What is Ab Initio Enterprise Meta>Environment (EME)?
Ans:
The Ab Initio Enterprise Meta>Environment (EME) is a centralized repository in which applications and their associated metadata are stored, managed, and reused both within and across applications.
53) What can you store, manage, and reuse centrally in the Ab Initio Enterprise Meta>Environment (EME)?
Ans:
Here are the elements we can centrally store, manage, and reuse: graphs and plans, record formats (DML), transform rules, parameter sets, and the business and technical metadata used for version control, data lineage, and impact analysis.
54) What can the Metadata Importer do in Ab Initio?
Ans:
The Metadata Importer loads metadata from sources external to the EME, for example database catalog definitions and models produced by third-party data modeling tools, so that it can be managed alongside Ab Initio's own metadata.
55) What is the Ab Initio Business Rules Environment (BRE)?
Ans:
The Ab Initio® Business Rules Environment (BRE) allows business analysts to specify business rules in a form that is very familiar and comfortable to them: grid-like spreadsheets.
In the BRE, the rules are specified in business terms, not technical terms, and with expressions that are clear to anyone who has worked with Microsoft Excel. As a consequence, not only can rules be specified quickly and accurately, they are also easily understood by other business people.
56) What are “business rules” in Ab Initio Business Rules Environment (BRE)?
Ans:
The BRE supports three different styles of business rules: decision rules, validation rules, and mapping rules. While they are fundamentally similar, business users are comfortable thinking of rules as belonging in one of these categories.
57) How does the BRE work with the Co>Operating System?
Ans:
It’s straightforward: the BRE takes the rules created by the user and puts them into a component in a graph run by the Co>Operating System.
58) What are some useful air commands in Ab Initio?
Ans:
The air utility is the command-line interface to the EME. Commonly used commands include air project show (displays the contents and settings of a project), air object ls (lists objects at a given EME path), air sandbox status (reports the status of sandbox files relative to the EME), and air tag create (creates a tag for a set of objects, typically for release management).
59) What is the Rollup component?
Ans:
A Rollup component is used to group records based on certain field values. It is a multi-stage transform that contains functions such as initialize, rollup, and finalize.
60) What is the use of the force_error function?
Ans:
force_error forces an error if a specified condition is not met. It is useful when you want to stop the execution of a graph when data does not meet a set condition: the offending record is sent to the reject port and the error message to the error port.
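A minimal Python sketch of this reject/error-port pattern (the streams and the condition are illustrative, not Ab Initio's API):

```python
# Route failing records to a reject stream and their messages to an error stream.
def process(records, condition):
    out, rejects, errors = [], [], []
    for rec in records:
        if condition(rec):
            out.append(rec)
        else:
            rejects.append(rec)                           # reject port: the data
            errors.append(f"condition failed for {rec}")  # error port: the message
    return out, rejects, errors

good, bad, msgs = process([{"amount": 10}, {"amount": -3}],
                          condition=lambda r: r["amount"] >= 0)
print(good, bad, msgs, sep="\n")
```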
61) How can we improve the performance of a graph?
Ans:
We can improve the performance of a graph in several ways: filter out unneeded records and drop unused fields as early as possible in the flow; use the appropriate type and degree of parallelism; avoid unnecessary sorts and re-partitioning; use lookup files instead of joins for small reference datasets; and keep the number of components and phases to the minimum the logic requires.
Major companies like American Express, Citibank, JPMorgan Chase, Time Warner Cable, Home Depot, and Premier use Ab Initio for their data processing and integration needs. Its customers include organizations in computer software (20%), information technology and services (10%), higher education (9%), and education management (9%), and it holds a market share of 5.12%. Ab Initio developer and admin job posts are in high demand, so prepare well on the basics of Ab Initio and you will have a strong chance of cracking the interview.